WorldWideScience

Sample records for asymptotically efficient estimates

  1. Asymptotic efficiency of signed-rank symmetry tests under skew alternatives.

    OpenAIRE

    Alessandra Durio; Yakov Nikitin

    2002-01-01

    The efficiency of some known tests for symmetry, such as the sign test, the Wilcoxon signed-rank test, or more general linear signed-rank tests, was studied mainly under the classical alternatives of location. However, it is interesting to compare the efficiencies of these tests under asymmetric alternatives, like the so-called skew alternative proposed in Azzalini (1985). We find and compare local Bahadur efficiencies of linear signed-rank statistics for skew alternatives and also discuss the con...
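    As a reference point for the tests discussed above, here is a minimal sketch of the Wilcoxon signed-rank statistic W+ (the sum of ranks of positive differences); tie and zero handling, p-values, and the Bahadur-efficiency analysis itself are all omitted:

```python
def wilcoxon_signed_rank(d):
    """W+ statistic: rank the absolute differences, then sum the ranks
    belonging to positive differences. No handling of zeros or ties."""
    ranked = sorted(range(len(d)), key=lambda i: abs(d[i]))
    w_plus = 0.0
    for rank, i in enumerate(ranked, start=1):
        if d[i] > 0:
            w_plus += rank
    return w_plus
```

    Under symmetry about zero, W+ has null mean n(n+1)/4, which is the reference point against which such tests are calibrated.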

  2. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vital role in spatial modeling. Methods for variogram estimation can be broadly classified into least squares methods and likelihood-based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as special cases. The proposed method is highly efficient, as a low-dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
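    For context, the least squares route starts from the empirical variogram. Below is a minimal sketch of the classical Matheron (method-of-moments) estimator on a regular 1-D transect, not the estimating-equations framework proposed in the record:

```python
def empirical_variogram(z, lag):
    """Matheron (method-of-moments) estimator on a regular 1-D transect:
    gamma(h) = 1/(2 N(h)) * sum (z[i+h] - z[i])^2.
    Assumes 0 < lag < len(z)."""
    diffs = [(z[i + lag] - z[i]) ** 2 for i in range(len(z) - lag)]
    return sum(diffs) / (2 * len(diffs))
```

    Least squares variogram fitting then matches a parametric model to these lagwise values, typically weighting by pair counts.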

  3. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vital role in spatial modeling. Methods for variogram estimation can be broadly classified into least squares methods and likelihood-based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as special cases. The proposed method is highly efficient, as a low-dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  4. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asympotical efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

    Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individually sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction

  5. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss ... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
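    The factorization idea can be illustrated with a toy chain A-B-C, where the three-way selectivity is assembled from two-dimensional distributions; this is a sketch of the concept under a conditional-independence assumption, not the authors' PostgreSQL integration:

```python
from collections import Counter

def selectivity(rows, a, b, c):
    """Estimate P(A=a, B=b, C=c) from the 2-D marginals P(A,B) and P(B,C),
    assuming A and C are conditionally independent given B (chain A-B-C):
    P(a,b,c) ~ P(a,b) * P(b,c) / P(b)."""
    n = len(rows)
    ab = Counter((r[0], r[1]) for r in rows)
    bc = Counter((r[1], r[2]) for r in rows)
    bm = Counter(r[1] for r in rows)
    if bm[b] == 0:
        return 0.0
    return (ab[(a, b)] / n) * (bc[(b, c)] / n) / (bm[b] / n)
```

    When the chain assumption holds exactly, the product of small tables reproduces the full joint distribution without ever materializing it.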

  6. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
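    For intuition, a minimal differentially private point estimator is the Laplace-mechanism mean below; the paper's construction is more sophisticated (its distribution converges to that of the MLE), and the clamping bounds here are assumptions the analyst must fix in advance:

```python
import math
import random

def dp_mean(xs, lo, hi, epsilon, rng=random):
    """Laplace-mechanism mean (sketch): clamp values to [lo, hi] so the
    sensitivity of the mean is (hi - lo)/n, then add Laplace noise of
    scale sensitivity/epsilon, sampled by inverse CDF."""
    n = len(xs)
    clamped = [min(max(x, lo), hi) for x in xs]
    scale = (hi - lo) / (n * epsilon)
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(clamped) / n + noise
```

    As epsilon grows, the noise vanishes and the estimator approaches the (clamped) sample mean, mirroring the asymptotic-unbiasedness theme of the record.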

  7. MILITARY MISSION COMBAT EFFICIENCY ESTIMATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Ighoyota B. AJENAGHUGHRURE

    2017-04-01

    Full Text Available Military infantry recruits, although trained, lack experience in real-time combat operations, despite combat-simulation training. Therefore, the choice of including them in military operations is a thorough and careful process. This has left top military commanders with the tough task of deciding the best blend of inexperienced and experienced infantry soldiers for any military operation, based on available information on enemy strength and capability. This research project covers the design of a mission combat efficiency estimator (MCEE). It is a decision support system that aids top military commanders in estimating the best combination of soldiers suitable for different military operations, based on available information on the enemy's combat experience. Its advantages consist of reducing casualties and other risks that compromise an operation's overall success, and of boosting the morale of soldiers in an operation with information such as an estimate of their enemies' combat efficiency. The system was developed using Microsoft ASP.NET with a SQL Server backend. A case study test conducted with the MCEE system reveals clearly that it is an efficient tool for military mission planning in terms of team selection. Hence, when the MCEE system is fully deployed it will aid military commanders in deciding team-member combinations for any given operation, based on enemy personnel information that is well known beforehand. Further work on the MCEE will explore fire-power types and their impact on mission combat efficiency estimation.

  8. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  9. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey's test sugge...
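    The COLS approach named above can be sketched in a few lines, assuming a single input and pre-logged data; the DEA and SFA estimators are not shown:

```python
import math

def cols_efficiency(x, y):
    """Corrected OLS (COLS): fit log-output on log-input by OLS, shift the
    intercept up by the largest residual so the line envelopes the data,
    then score technical efficiency TE_i = exp(e_i - max e)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    shift = max(resid)
    return [math.exp(e - shift) for e in resid]
```

    The firm on the shifted frontier scores exactly 1; all others score below 1, which is what makes the mean efficiency sensitive to how the frontier is placed.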

  10. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well: consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible, and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring far fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
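    For intuition, the classical matrix pencil idea collapses to a least-squares ratio of shifted and unshifted samples when there is only one exponential component; this toy sketch recovers a single frequency and damping factor, not the multi-component quantum algorithm of the record:

```python
import cmath
import math

def matrix_pencil_freq(x):
    """One-component 'matrix pencil' sketch: for x[n] = A * z**n, the shifted
    samples satisfy x1 = z * x0, so the least-squares pole is
    z = <x0, x1> / <x0, x0>. Its phase gives the frequency in cycles/sample
    and its magnitude the per-sample damping factor."""
    x0, x1 = x[:-1], x[1:]
    num = sum(a.conjugate() * b for a, b in zip(x0, x1))
    den = sum(abs(a) ** 2 for a in x0)
    z = num / den
    return cmath.phase(z) / (2 * math.pi), abs(z)
```

    The full method forms Hankel matrices from the samples and takes generalized eigenvalues of the shifted/unshifted pair, one pole per sinusoid component.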

  11. Efficient Frontier - Comparing Different Volatility Estimators

    OpenAIRE

    Tea Poklepović; Zdravka Aljinović; Mario Matković

    2015-01-01

    Modern Portfolio Theory (MPT) according to Markowitz states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares three efficient frontiers on the Croatian Stock Market. The results show that ra...
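    Two of the volatility estimators compared in such studies, the sample standard deviation and the lower semi-deviation, can be sketched as follows (the intraday-based estimator is omitted):

```python
import math

def volatility_estimators(returns):
    """Population standard deviation vs. lower semi-deviation (dispersion of
    below-mean returns only), two of the risk measures used when tracing
    mean-variance efficient frontiers."""
    n = len(returns)
    m = sum(returns) / n
    var = sum((r - m) ** 2 for r in returns) / n
    downside = sum((r - m) ** 2 for r in returns if r < m) / n
    return math.sqrt(var), math.sqrt(downside)
```

    Swapping one risk measure for the other changes the optimization objective and hence the shape of the resulting frontier.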

  12. Efficient multidimensional regularization for Volterra series estimation

    Science.gov (United States)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.

  13. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of musical notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
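    The peak-picking stage described above can be illustrated with a minimal local-maximum search; the RTFI computation and harmonic grouping are not reproduced here:

```python
def pick_peaks(spectrum, threshold):
    """Simple peak picking: indices of interior local maxima above a
    threshold, the first stage of selecting pitch candidates from an
    energy spectrum."""
    return [i for i in range(1, len(spectrum) - 1)
            if spectrum[i] > threshold
            and spectrum[i] > spectrum[i - 1]
            and spectrum[i] >= spectrum[i + 1]]
```

    In the full method, each surviving peak is then checked against harmonic-structure knowledge to prune spurious pitch candidates.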

  14. Global CO2 efficiency: Country-wise estimates using a stochastic cost frontier

    International Nuclear Information System (INIS)

    Herrala, Risto; Goel, Rajeev K.

    2012-01-01

    This paper examines global carbon dioxide (CO2) efficiency by employing a stochastic cost frontier analysis of about 170 countries in 1997 and 2007. The main contribution lies in providing a new approach to environmental efficiency estimation, in which the efficiency estimates quantify the distance from the policy objective of minimum emissions. We are able to examine a very large pool of nations and provide country-wise efficiency estimates. We estimate three econometric models, corresponding with alternative interpretations of the Cancun vision (Conference of the Parties 2011). The models reveal progress in global environmental efficiency during the preceding decade. The estimates indicate vast differences in efficiency levels and efficiency changes across countries. The highest efficiency levels are observed in Africa and Europe, while the lowest are clustered around China. The largest efficiency gains were observed in central and eastern Europe. CO2 efficiency also improved in the US and China, the two largest emitters, but their ranking in terms of CO2 efficiency deteriorated. Policy implications are discussed. - Highlights: ► We estimate global environmental efficiency in line with the Cancun vision, using a stochastic cost frontier. ► The study covers 170 countries during a 10-year period ending in 2007. ► The biggest improvements occurred in Europe, while efficiency fell in South America. ► The efficiency ranking of the US and China, the largest emitters, deteriorated. ► In 2007, the highest efficiency was observed in Africa and Europe, and the lowest around China.

  15. An efficient estimator for Gibbs random fields

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin

    2014-01-01

    Vol. 50, No. 6 (2014), pp. 883-895 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Gibbs random field * efficient estimator * empirical estimator Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf

  16. PFP total operating efficiency calculation and basis of estimate

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The purpose of the Plutonium Finishing Plant (PFP) Total Operating Efficiency Calculation and Basis of Estimate document is to provide the calculated value and basis of estimate for the Total Operating Efficiency (TOE) for the material stabilization operations to be conducted in the 234-52 Building. This information will be used to support both the planning and execution of the PFP Stabilization and Deactivation Project's (hereafter called the Project) resource-loaded, integrated schedule.

  17. Piecewise Loglinear Estimation of Efficient Production Surfaces

    OpenAIRE

    Rajiv D. Banker; Ajay Maindiratta

    1986-01-01

    Linear programming formulations for piecewise loglinear estimation of efficient production surfaces are derived from a set of basic properties postulated for the underlying production possibility sets. Unlike the piecewise linear model of Banker, Charnes, and Cooper (Banker R. D., A. Charnes, W. W. Cooper. 1984. Models for the estimation of technical and scale inefficiencies in data envelopment analysis. Management Sci. 30 (September) 1078--1092.), this approach permits the identification of ...

  18. Estimating Production Technical Efficiency of Irvingia Seed (Ogbono ...

    African Journals Online (AJOL)

    This study estimated the production technical efficiency of irvingia seed (Ogbono) farmers in Nsukka agricultural zone in Enugu State, Nigeria. This is against the backdrop of the importance of efficiency as a factor of productivity in a growing economy like Nigeria where resources are scarce and opportunities for new ...

  19. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = ½[ε_h + ε_L] ± ½[ε_h − ε_L], where the uncertainty Δε = ½[ε_h − ε_L] brackets the maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
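    The bracketing average described above is a one-line computation; here is a sketch, with the high and low extrapolated efficiencies ε_h and ε_L supplied by the user:

```python
def bracketed_efficiency(eps_h, eps_l):
    """Average of the high and low extrapolated efficiencies, with an
    uncertainty of half their spread bracketing the maximum possible error:
    eps = (eps_h + eps_l)/2 +- (eps_h - eps_l)/2."""
    est = 0.5 * (eps_h + eps_l)
    unc = 0.5 * (eps_h - eps_l)
    return est, unc
```

    As the new volume V moves away from the calibrated V₀, the two extrapolations diverge and the reported uncertainty grows accordingly.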

  20. Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism

    Directory of Open Access Journals (Sweden)

    Custer Adrian V

    2005-08-01

    Full Text Available Abstract Background The time-varying flows of biomass and energy in tsetse (Glossina) can be examined through the construction of a dynamic mass-energy budget specific to these flies, but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because it ignores all the biological inefficiencies and costs, such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed from a detailed quantification of the composition of food and body tissue and a description of the major metabolic pathways in tsetse, simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to

  1. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Science.gov (United States)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  2. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  3. Estimation of farm level technical efficiency and its determinants ...

    African Journals Online (AJOL)

    With the difficulties encountered by the farmers in adopting improved technologies, increasing resource use efficiency has become a very significant factor in increasing productivity. Therefore, this study was designed to estimate the farm level technical efficiency and its determinants among male and female sweet potato ...

  4. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  5. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed

  6. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  7. AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN

    Directory of Open Access Journals (Sweden)

    Nabeel Hussain

    2014-04-01

    Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. Maximum likelihood estimation was used to fit a Cobb-Douglas frontier production function. The analysis revealed an estimated mean technical efficiency of 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was found to be 0.93, which indicates that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, age having a positive and education a negative relationship with the output of garlic. This study suggests that in order to increase the production of garlic by taking advantage of farmers' high efficiency level, the government should invest in research and development for introducing good-quality seeds to increase garlic productivity and should organize training programs to educate farmers about garlic production.

  8. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location $M$-estimator based on the observations obtained after scaling with an affine equivariant

  9. An organic group contribution approach to radiative efficiency estimation of organic working fluid

    International Nuclear Information System (INIS)

    Zhang, Xinxin; Kobayashi, Noriyuki; He, Maogang; Wang, Jingfu

    2016-01-01

    Highlights: • We use a group contribution method to estimate radiative efficiency. • CFCs, HCFCs, HFCs, HFEs, and PFCs were estimated using this method. • In most cases, the estimated value has good precision. • The method is reliable for estimating molecules with a symmetric structure. • This estimation method can offer a good reference for working fluid development. - Abstract: The ratification of the Montreal Protocol in 1987 and the Kyoto Protocol in 1997 marked an era of environmental protection in the development of organic working fluids. Ozone depletion potential (ODP) and global warming potential (GWP) are the two most important indices for the quantitative comparison of organic working fluids. Nowadays, more and more attention has been paid to GWP. The calculation of GWP is an extremely complicated process which involves interactions between surface and atmosphere, such as atmospheric radiative transfer and atmospheric chemical reactions. The GWP of a substance is related to its atmospheric abundance and is itself a variable. Radiative efficiency, however, is an intermediate parameter in the GWP calculation and is a constant that describes an inherent property of a substance. In this paper, the group contribution method was adopted to estimate the radiative efficiency of organic substances containing more than one carbon atom. In most cases, the estimated value and the standard value are in good agreement. The biggest estimation error occurs for the radiative efficiency of fluorinated ethers, owing to their many structural groups and complicated structure compared with hydrocarbons. This estimation method can be used to predict the radiative efficiency of newly developed organic working fluids.
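    Structurally, a group contribution estimate is just a weighted sum over structural groups; the group names and contribution values below are hypothetical placeholders for illustration, not fitted increments from the paper:

```python
def radiative_efficiency(group_counts, contributions):
    """Group contribution sketch: total radiative efficiency as the sum of
    per-group increments times their counts. Both dictionaries here are
    hypothetical illustrative inputs, not published values."""
    return sum(n * contributions[g] for g, n in group_counts.items())
```

    In a real application, the per-group increments are regressed from molecules with known radiative efficiencies and then applied to new candidate working fluids.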

  10. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Wang, Suojin; Zhang, Xiangliang

    2016-01-01

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
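    The grid-plus-interpolation idea behind KDE-Track can be sketched as follows; this toy version maintains a running mean of Gaussian kernel values at fixed grid points, updated one sample at a time, and answers queries by linear interpolation (the adaptive resampling and bandwidth selection of the actual method are omitted):

```python
import math

def make_kde(grid, bandwidth):
    """Grid-based incremental KDE sketch: density is kept only at grid
    points and updated in O(len(grid)) per arriving sample; queries use
    linear interpolation between the two nearest grid points."""
    dens = [0.0] * len(grid)
    n = 0
    norm = bandwidth * math.sqrt(2 * math.pi)
    def update(x):
        nonlocal n
        n += 1
        for i, g in enumerate(grid):
            k = math.exp(-0.5 * ((g - x) / bandwidth) ** 2) / norm
            dens[i] += (k - dens[i]) / n  # running mean of kernel values
    def query(x):
        if x <= grid[0]:
            return dens[0]
        if x >= grid[-1]:
            return dens[-1]
        for i in range(len(grid) - 1):
            if grid[i] <= x <= grid[i + 1]:
                t = (x - grid[i]) / (grid[i + 1] - grid[i])
                return (1 - t) * dens[i] + t * dens[i + 1]
    return update, query
```

    Because each query touches only one grid cell, evaluation cost is independent of the number of samples seen, which is the linear-time property the record emphasizes.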

  11. KDE-Track: An Efficient Dynamic Density Estimator for Data Streams

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-11-08

    Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.

  12. Phytoremediation: realistic estimation of modern efficiency and future possibility

    International Nuclear Information System (INIS)

    Kravets, A.; Pavlenko, Y.; Kusmenko, L.; Ermak, M.

    1996-01-01

Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. The general requirements and future possibilities of developing phytoremediation biotechnology have been outlined. (author)

  13. Phytoremediation: realistic estimation of modern efficiency and future possibility

    Energy Technology Data Exchange (ETDEWEB)

    Kravets, A; Pavlenko, Y [Institute of Cell Biology and Genetic Engineering NAS, Kiev (Ukraine); Kusmenko, L; Ermak, M [Institute of Plant Physiology and Genetic NAS, Vasilkovsky, Kiev (Ukraine)

    1996-11-01

Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. The general requirements and future possibilities of developing phytoremediation biotechnology have been outlined. (author)

  14. Engineering estimates versus impact evaluation of energy efficiency projects: Regression discontinuity evidence from a case study

    International Nuclear Information System (INIS)

    Lang, Corey; Siler, Matthew

    2013-01-01

    Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
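A toy regression-discontinuity-in-time estimate of the kind of ex post savings analysis the abstract describes (simulated interval data with made-up numbers; the paper's econometric specification is richer than this sketch): fit separate linear trends on either side of the implementation date and read the savings off the jump at the cutoff.

```python
import numpy as np

# Simulated 15-min interval load data around an upgrade at t = 0.
rng = np.random.default_rng(4)
t = np.arange(-500, 500).astype(float)
after = (t >= 0).astype(float)
true_savings = -5.0                            # kWh drop at implementation
load = 100 + 0.01 * t + true_savings * after + rng.normal(0, 2, t.size)

# Local linear model with a treatment jump: load ~ 1 + t + after + t*after.
# The coefficient on `after` is the discontinuity at the cutoff, since the
# interaction term t*after vanishes at t = 0.
Xd = np.column_stack([np.ones_like(t), t, after, t * after])
coef, *_ = np.linalg.lstsq(Xd, load, rcond=None)
savings_hat = float(coef[2])
```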

  15. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
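A generic illustration of why regularization helps with ill-conditioning (a toy linear stand-in for the nonlinear dynamic-model setting the abstract discusses, not the authors' algorithm): with nearly collinear inputs, ordinary least squares has enormous variance along the poorly identified direction, while a small Tikhonov (ridge) penalty trades a little bias for a large variance reduction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 2
t = rng.normal(size=n)
X = np.column_stack([t, t + 1e-3 * rng.normal(size=n)])  # nearly collinear columns
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# unregularized least squares vs. ridge (Tikhonov) regularization
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
lam = 1e-2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

err_ols = float(np.linalg.norm(beta_ols - beta_true))
err_ridge = float(np.linalg.norm(beta_ridge - beta_true))
```

The penalty shrinks the near-null direction of `X.T @ X`, which is where the overfitting lives; this is the bias-variance trade-off the abstract refers to.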

  16. Efficient estimation for high similarities using odd sketches

    DEFF Research Database (Denmark)

    Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang

    2014-01-01

    . This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector...... and web duplicate detection tasks....

  17. Operator Bias in the Estimation of Arc Efficiency in Gas Tungsten Arc Welding

    Directory of Open Access Journals (Sweden)

    Fredrik Sikström

    2015-03-01

Full Text Available In this paper the operator bias in the measurement of arc efficiency in stationary direct-current, electrode-negative gas tungsten arc welding is discussed. An experimental study involving 15 operators (enough to reach statistical significance) has been carried out with the purpose of estimating the arc efficiency from a specific calorimetric experimental procedure. The measurement procedure consists of three manual operations, which introduce operator bias into the measurement process. An additional experiment highlights the consequences of estimating the arc voltage by measuring the potential between the terminals of the welding power source instead of between the electrode contact tube and the workpiece. The result of the study is a statistical evaluation of the influence of operator bias on the estimate, showing that it is negligible here. By contrast, neglecting the voltage drop across the welding leads results in a significant underestimation of the arc efficiency.
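The underlying energy balance is simple: arc efficiency is the ratio of heat absorbed by the calorimeter to the electrical energy input. The sketch below uses made-up numbers (the paper's actual procedure is more involved), but it shows why measuring voltage at the power-source terminals, rather than between the contact tube and the workpiece, biases the estimate low.

```python
# Illustrative arc-efficiency calculation from a water-calorimeter energy balance.
def arc_efficiency(mass_kg, c_J_per_kgK, delta_T_K, volts, amps, time_s):
    """Ratio of heat absorbed by the calorimeter to electrical energy input."""
    q_absorbed = mass_kg * c_J_per_kgK * delta_T_K   # J
    q_electrical = volts * amps * time_s             # J
    return q_absorbed / q_electrical

# Same calorimeter reading, two voltage measurement points (hypothetical values):
# 11 V at the arc vs. 13 V at the terminals (the extra 2 V is the lead drop).
eta_arc = arc_efficiency(2.0, 4186, 10.0, volts=11.0, amps=150, time_s=60)
eta_terminals = arc_efficiency(2.0, 4186, 10.0, volts=13.0, amps=150, time_s=60)
```

The inflated denominator at the terminals makes `eta_terminals` smaller, i.e. the arc efficiency is underestimated.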

  18. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  19. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2014-01-01

    For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n3) operations and O(n2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations are evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  20. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying

    2014-11-07

    For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n3) operations and O(n2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations are evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  1. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of
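The efficient frontier referred to here is the standard mean-variance construction. A toy version with three illustrative assets (not the NIH institute data used in the paper) shows the computation: for each target expected return, find the allocation minimizing variance, via the closed-form Markowitz solution.

```python
import numpy as np

mu = np.array([0.04, 0.06, 0.08])          # expected "returns" (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])     # covariance of outcomes (illustrative)

def min_variance_for_return(mu, Sigma, target):
    """Closed-form Markowitz weights: minimize w'Sw s.t. w'mu = target, w'1 = 1."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B ** 2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return inv @ (lam * ones + gam * mu)

w = min_variance_for_return(mu, Sigma, 0.06)
risk = float(np.sqrt(w @ Sigma @ w))
```

Sweeping `target` traces out the frontier; comparing `risk` against an equal-allocation portfolio with the same expected return mirrors the paper's volatility comparison.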

  2. The efficiency of different estimation methods of hydro-physical limits

    Directory of Open Access Journals (Sweden)

    Emma María Martínez

    2012-12-01

    Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP and field capacity (FC, is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM and indirect estimation methods (pedotransfer functions - PTFs. The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.

  3. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau; Chatterjee, Nilanjan; Carroll, Raymond J.

    2009-01-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.

  4. Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies

    KAUST Repository

    Chen, Yi-Hau

    2009-03-01

    Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
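The shrinkage idea can be sketched generically (this is not the paper's haplotype estimator, just the principle): combine a robust estimate with a model-dependent efficient estimate using a data-adaptive weight that trusts the efficient one only when the two agree relative to sampling noise.

```python
# Empirical-Bayes-style combination of two estimators (illustrative sketch).
def shrink(theta_robust, theta_efficient, var_diff):
    """var_diff is an estimate of Var(theta_robust - theta_efficient)."""
    d2 = (theta_robust - theta_efficient) ** 2
    w = d2 / (d2 + var_diff)      # -> 0 when the estimators agree, -> 1 when not
    return theta_efficient + w * (theta_robust - theta_efficient)

# Model assumptions hold: estimators agree, result stays near the efficient one.
a = shrink(1.05, 1.00, var_diff=0.10)
# Model misspecified: estimators disagree, result is pulled toward the robust one.
b = shrink(2.00, 1.00, var_diff=0.10)
```

This adaptivity is what lets such estimators relax Hardy-Weinberg-equilibrium or gene-environment-independence assumptions without giving up efficiency when those assumptions do hold.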

  5. Efficient estimation of diffusion during dendritic solidification

    Science.gov (United States)

    Yeum, K. S.; Poirier, D. R.; Laxmanan, V.

    1989-01-01

    A very efficient finite difference method has been developed to estimate the solute redistribution during solidification with diffusion in the solid. This method is validated by comparing the computed results with the results of an analytical solution derived by Kobayashi (1988) for the assumptions of a constant diffusion coefficient, a constant equilibrium partition ratio, and a parabolic rate of the advancement of the solid/liquid interface. The flexibility of the method is demonstrated by applying it to the dendritic solidification of a Pb-15 wt pct Sn alloy, for which the equilibrium partition ratio and diffusion coefficient vary substantially during solidification. The fraction eutectic at the end of solidification is also obtained by estimating the fraction solid, in greater resolution, where the concentration of solute in the interdendritic liquid reaches the eutectic composition of the alloy.
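The zero-diffusion limit of such a calculation is the closed-form Scheil equation, which is a useful cross-check for any microsegregation solver. The sketch below uses illustrative values of the partition ratio and compositions (the paper's finite-difference method additionally handles solid-state diffusion and composition-dependent properties).

```python
# Scheil limit: complete mixing in the liquid, no diffusion in the solid.
def scheil_liquid_conc(C0, k, fs):
    """Liquid composition after solid fraction fs (Scheil equation)."""
    return C0 * (1.0 - fs) ** (k - 1.0)

def eutectic_fraction(C0, k, Ce):
    """Liquid fraction remaining when the liquid reaches the eutectic
    composition Ce; that liquid then freezes as eutectic."""
    return (Ce / C0) ** (1.0 / (k - 1.0))

# Illustrative values: nominal composition, partition ratio, eutectic composition.
C0, k, Ce = 15.0, 0.3, 61.9   # wt pct solute
f_eut = eutectic_fraction(C0, k, Ce)
```

Because back-diffusion in the solid is neglected, this `f_eut` is an upper bound on the eutectic fraction a diffusion-resolving model would predict.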

  6. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic, and 2) sparse channels is considered. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.

  7. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by a sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower......

  8. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    Full Text Available BACKGROUND: The National Institutes of Health (NIH is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL. The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current or reduction in risk (22% to 35% vs. current are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  9. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  10. Environmental efficiency with multiple environmentally detrimental variables : estimated with SFA and DEA

    NARCIS (Netherlands)

    Reinhard, S.; Lovell, C.A.K.; Thijssen, G.J.

    2000-01-01

    The objective of this paper is to estimate comprehensive environmental efficiency measures for Dutch dairy farms. The environmental efficiency scores are based on the nitrogen surplus, phosphate surplus and the total (direct and indirect) energy use of an unbalanced panel of dairy farms. We define

  11. LocExpress: a web server for efficiently estimating expression of novel transcripts.

    Science.gov (United States)

    Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge

    2016-12-22

    The temporally and spatially specific expression pattern of a transcript across multiple tissues and cell types can provide key clues about its function. While several gene atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e. novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper to an RNA-Seq quantification algorithm, LocExpress efficiently reduces the time cost by making abundance estimation calls only within the minimum spanning bundle region of the input transcripts. For a given novel gene model, such a local, context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn

  12. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  13. Public-Private Investment Partnerships: Efficiency Estimation Methods

    Directory of Open Access Journals (Sweden)

    Aleksandr Valeryevich Trynov

    2016-06-01

Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). The article puts forward the hypothesis that including multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based upon a synthesis of approaches to evaluating projects implemented in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effects arising during project implementation. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed, based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and traces the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now. It is shown that SAM is widely used worldwide for a broad range of applications, primarily to assess the impact of various exogenous factors on a regional economy. In order to refine the estimates of multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step allows the particular characteristics of the industry of the estimated investment project to be considered. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in

  14. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  15. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  16. The role of efficiency estimates in regulatory price reviews: Ofgem's approach to benchmarking electricity networks

    International Nuclear Information System (INIS)

    Pollitt, Michael

    2005-01-01

    Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting. (author)

  17. A harmonized calculation model for transforming EU bottom-up energy efficiency indicators into empirical estimates of policy impacts

    International Nuclear Information System (INIS)

    Horowitz, Marvin J.; Bertoldi, Paolo

    2015-01-01

    This study is an impact analysis of European Union (EU) energy efficiency policy that employs both top-down energy consumption data and bottom-up energy efficiency statistics or indicators. As such, it may be considered a contribution to the effort called for in the EU's 2006 Energy Services Directive (ESD) to develop a harmonized calculation model. Although this study does not estimate the realized savings from individual policy measures, it does provide estimates of realized energy savings for energy efficiency policy measures in aggregate. Using fixed effects panel models, the annual cumulative savings in 2011 of combined household and manufacturing sector electricity and natural gas usage attributed to EU energy efficiency policies since 2000 is estimated to be 1136 PJ; the savings attributed to energy efficiency policies since 2006 is estimated to be 807 PJ, or the equivalent of 5.6% of 2011 EU energy consumption. As well as its contribution to energy efficiency policy analysis, this study adds to the development of methods that can improve the quality of information provided by standardized energy efficiency and sustainable resource indexes. - Highlights: • Impact analysis of European Union energy efficiency policy. • Harmonization of top-down energy consumption and bottom-up energy efficiency indicators. • Fixed effects models for Member States for household and manufacturing sectors and combined electricity and natural gas usage. • EU energy efficiency policies since 2000 are estimated to have saved 1136 Petajoules. • Energy savings attributed to energy efficiency policies since 2006 are 5.6 percent of 2011 combined electricity and natural gas usage.
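The fixed-effects panel approach the study describes can be sketched with the standard within estimator: demeaning each unit's series removes time-invariant unit effects, and the slope is then estimated from the deviations. The data below are simulated with made-up numbers, not the EU Member-State panel.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_years = 10, 12
alpha = rng.normal(0, 5, n_units)            # unit fixed effects
x = rng.normal(0, 1, (n_units, n_years))     # e.g. an efficiency indicator
beta_true = -0.8                             # indicator reduces consumption
y = alpha[:, None] + beta_true * x + 0.1 * rng.normal(0, 1, (n_units, n_years))

# Within transformation: demeaning within each unit removes the fixed effects,
# so beta is identified from within-unit variation only.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = float((xd * yd).sum() / (xd ** 2).sum())
```

Even though the simulated fixed effects are large relative to the signal, the demeaned regression recovers the slope.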

  18. ESTIMATION OF THE EFFICIENCY OF PARTNERSHIP BETWEEN LARGE AND SMALL BUSINESS

    Directory of Open Access Journals (Sweden)

    Олег Васильевич Чабанюк

    2014-05-01

    Full Text Available In this article, based on the identification of key factors and their components, we develop an algorithm of consecutive, logically connected stages for the transition from a traditional enterprise to an innovation-type enterprise built on intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: experts determine the importance of the model parameters that ensure the effectiveness of intrapreneurship, and a composite "intrapreneurship efficiency" score is then calculated using qualimetric modeling of the expert estimates. According to the author's projections, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically reachable only in the second or third year of an intrapreneurial structure's existence. The proposed method has been tested in practice and can be used to establish intrapreneurship in large and medium-sized enterprises as one way of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50

  19. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensional problems. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties of the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules that achieve speed by eliminating irrelevant features early.
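
    The joint optimization over the regression vector and the noise level can be illustrated with a minimal alternating scheme: a closed-form update of the noise level, floored at a small constant sigma0 (the smoothing idea), interleaved with proximal-gradient (ISTA) steps for the Lasso part. This is a rough sketch of the formulation only, not the authors' coordinate-descent solver with safe screening; `lam` and `sigma0` are illustrative parameters:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def smoothed_concomitant_lasso(X, y, lam, sigma0=1e-3, n_iter=200):
    """Alternate a closed-form sigma update (floored at sigma0) with
    ISTA steps on b for the objective
    ||y - Xb||^2 / (2 n sigma) + sigma / 2 + lam ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant up to 1/sigma
    for _ in range(n_iter):
        r = y - X @ b
        sigma = max(sigma0, np.linalg.norm(r) / np.sqrt(n))  # smoothed noise update
        grad = -X.T @ r / (n * sigma)
        step = sigma / L
        b = soft(b - step * grad, lam * step)
    return b, sigma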

  20. Commercial Discount Rate Estimation for Efficiency Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-04-13

    Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).

  1. RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION

    Directory of Open Access Journals (Sweden)

    Archana V

    2011-04-01

    Full Text Available The co-efficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real life dataset. The ratio estimator using information on the population C.V. of the auxiliary variable emerges as the best estimator.
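
    One simple ratio-type variant (an illustration of the idea only, not one of the paper's six estimators) scales the sample C.V. of the study variable by the ratio of the known population C.V. of the auxiliary variable to its sample C.V.:

```python
import numpy as np

def cv(a):
    """Sample coefficient of variation."""
    return np.std(a, ddof=1) / np.mean(a)

def ratio_cv_estimator(y_sample, x_sample, cv_x_pop):
    """Ratio-type estimator of the population C.V. of y: correct the
    sample C.V. of y by how far the sample C.V. of the auxiliary
    variable x falls from its known population value."""
    return cv(y_sample) * cv_x_pop / cv(x_sample)
```

When the sample C.V. of x happens to equal its population value, the correction factor is 1 and the estimator reduces to the plain sample C.V. of y.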

  2. Efficient AM Algorithms for Stochastic ML Estimation of DOA

    Directory of Open Access Journals (Sweden)

    Haihua Chen

    2016-01-01

    Full Text Available The estimation of direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. Many algorithms have been proposed to solve it, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single scalar parameter while the other does not. Second, when the array is a uniform linear array, we obtain an irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the most efficient. Another advantage of IAM is that it avoids the numerical instability which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
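
    The alternating-minimization principle can be sketched on the simpler concentrated deterministic ML cost tr[(I − P_A) R̂] for a half-wavelength uniform linear array, where each sweep grid-searches one DOA while holding the others fixed. This illustrates the AM idea only; it is not the paper's EAM/IAM algorithms, and the grid and array sizes are arbitrary choices:

```python
import numpy as np

def steer(m, theta):
    """Steering vector of an m-sensor half-wavelength uniform linear array."""
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(theta))

def ml_cost(R, thetas, m):
    """Concentrated deterministic ML cost tr[(I - P_A) R]."""
    A = np.column_stack([steer(m, t) for t in thetas])
    P = A @ np.linalg.pinv(A)                  # projection onto signal subspace
    return np.real(np.trace((np.eye(m) - P) @ R))

def am_doa(R, m, n_src=2, n_sweeps=6):
    """Alternating minimization: repeatedly 1-D grid-search each DOA
    with the other DOAs held fixed."""
    grid = np.linspace(-1.2, 1.2, 481)         # candidate DOAs (radians)
    thetas = [0.0] * n_src
    for _ in range(n_sweeps):
        for i in range(n_src):
            costs = [ml_cost(R, thetas[:i] + [g] + thetas[i + 1:], m) for g in grid]
            thetas[i] = grid[int(np.argmin(costs))]
    return sorted(thetas)
```

Each sweep replaces one multidimensional search by a sequence of cheap one-dimensional searches, which is the complexity reduction the AM family exploits.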

  3. A possible approach to estimating the operational efficiency of multiprocessor systems

    International Nuclear Information System (INIS)

    Kuznetsov, N.Y.; Gorlach, S.P.; Sumskaya, A.A.

    1984-01-01

    This article presents a mathematical model that constructs upper and lower estimates of the efficiency with which a large class of problems can be solved on a multiprocessor system with a specific architecture. Efficiency depends on the system's architecture (e.g., the number of processors, memory volume, the number of communication links, commutation speed) and on the types of problems it is intended to solve. The behavior of the model is considered in a stationary mode. The model is used to evaluate the efficiency of a particular algorithm implemented in a multiprocessor system. It is concluded that the model is flexible and enables the investigation of a broad class of problems in computational mathematics, including linear algebra and boundary-value problems of mathematical physics.

  4. Asymptotics with positive cosmological constant

    Science.gov (United States)

    Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna

    2014-03-01

    Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. Current definitions mimic the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out a non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.

  5. A virtually blind spectrum efficient channel estimation technique for mimo-ofdm system

    International Nuclear Information System (INIS)

    Ullah, M.O.

    2015-01-01

    Multiple-Input Multiple-Output antennas in conjunction with Orthogonal Frequency-Division Multiplexing constitute a dominant air interface for 4G and 5G cellular communication systems. Additionally, the MIMO-OFDM based air interface is the foundation for the latest wireless Local Area Networks, wireless Personal Area Networks, and digital multimedia broadcasting. Whether in a single-antenna or a multi-antenna OFDM system, accurate channel estimation is required for coherent reception. Training-based channel estimation methods require multiple pilot symbols and therefore waste a significant portion of channel bandwidth. This paper describes a virtually blind spectrum efficient channel estimation scheme for MIMO-OFDM systems which operates well below the Nyquist criterion. (author)

  6. Estimation of energy efficiency of residential buildings

    Directory of Open Access Journals (Sweden)

    Glushkov Sergey

    2017-01-01

    Full Text Available Increasing the energy performance of residential buildings by reducing heat consumption for heating and ventilation is the last segment in the system of energy resource saving. The first segments in the energy saving process are heat production and transportation over the main lines and outside distribution networks. In the period from 2006 to 2013, by means of heat-supply scheme optimization and modernization of the heating systems, using expensive (200–300 $US per 1 m) though highly effective preliminarily coated pipes, the savings reached 2.7 mln tons of fuel equivalent. Considering the multi-stage and multifactorial nature (electricity, heat and water supply) of residential-sector energy saving, a reasonable estimate of the energy-saving efficiency of residential buildings should be expressed in tons of fuel equivalent per unit of time.

  7. Energy efficiency estimation of a steam powered LNG tanker using normal operating data

    Directory of Open Access Journals (Sweden)

    Sinha Rajendra Prasad

    2016-01-01

    Full Text Available A ship’s energy efficiency performance is generally estimated by conducting special sea trials of a few hours under very controlled environmental conditions of calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship’s Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, so comparing energy performance with the benchmark performance indicator is not truly valid. In such situations a higher fuel consumption reading from the ship's fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low load operations or incorrect trim [1]. Thus a better and more accurate approach to determining the energy efficiency of the ship attributable only to main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship’s fuel consumption [2]. The author in this paper identifies parameters of a suitable filter to be used on the daily report data of a typical LNG tanker of 33000 kW shaft power to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship, which is compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini-trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is

  8. Estimating shadow prices and efficiency analysis of productive inputs and pesticide use of vegetable production

    NARCIS (Netherlands)

    Singbo, Alphonse G.; Lansink, Alfons Oude; Emvalomatis, Grigorios

    2015-01-01

    This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-a-vis pesticide use to measure allocative efficiency of pesticide use along productive inputs. We employ the data envelopment analysis framework and marginal cost techniques to estimate

  9. On the number of spanning trees in random regular graphs

    DEFF Research Database (Denmark)

    Greenhill, Catherine; Kwan, Matthew; Wind, David Kofoed

    2014-01-01

    Let d >= 3 be a fixed integer. We give an asymptotic formula for the expected number of spanning trees in a uniformly random d-regular graph with n vertices. (The asymptotics are as n -> infinity, restricted to even n if d is odd.) We also obtain the asymptotic distribution of the number of spanning...

  10. Efficient optimal joint channel estimation and data detection for massive MIMO systems

    KAUST Repository

    Alshamary, Haider Ali Jasim

    2016-08-15

    In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm compared with suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm which efficiently achieves GLRT-optimal non-coherent detections for massive MIMO systems with general constellations.

  11. Program Potential: Estimates of Federal Energy Cost Savings from Energy Efficient Procurement

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-09-17

    In 2011, energy used by federal buildings cost approximately $7 billion. Reducing federal energy use could help address several important national policy goals, including: (1) increased energy security; (2) lowered emissions of greenhouse gases and other air pollutants; (3) increased return on taxpayer dollars; and (4) increased private sector innovation in energy efficient technologies. This report estimates the impact of efficient product procurement on reducing the amount of wasted energy (and, therefore, wasted money) associated with federal buildings, as well as on reducing the needless greenhouse gas emissions associated with these buildings.

  12. Efficient estimation of dynamic density functions with an application to outlier detection

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali; Zhang, Xiangliang; Wang, Suojin

    2012-01-01

    In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, on estimation accuracy of complex density structures in data streams, computing time and memory usage. KDE-Track is also shown to track the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
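
    The grid-plus-interpolation idea can be sketched as a streaming KDE whose density values are maintained on a fixed grid and queried by linear interpolation. This is a simplified illustration, not the published KDE-Track algorithm; the class name, grid size and bandwidth below are invented:

```python
import numpy as np

class GridKDE:
    """Streaming KDE on a fixed grid: each arriving sample updates the
    grid values incrementally (a running mean of kernel evaluations),
    and queries are answered by linear interpolation."""
    def __init__(self, lo, hi, n_points=256, bandwidth=0.2):
        self.grid = np.linspace(lo, hi, n_points)
        self.dens = np.zeros(n_points)
        self.h = bandwidth
        self.n = 0

    def update(self, x):
        self.n += 1
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        self.dens += (k - self.dens) / self.n   # incremental running mean

    def pdf(self, x):
        return np.interp(x, self.grid, self.dens)
```

Each update costs O(grid size) and each query O(log grid size), independent of the number of samples seen, which is the point of interpolating on a fixed model rather than storing the stream.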

  13. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  14. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.

    Science.gov (United States)

    Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro

    2015-08-21

    Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
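
    The cross-correlation stage of such a symbol timing estimator can be sketched with a Zadoff-Chu preamble; the sequence length, root index, offset and noise level below are arbitrary illustrative choices, and none of the FPGA aspects are modeled:

```python
import numpy as np

def zadoff_chu(u, n_zc):
    """Zadoff-Chu sequence of odd length n_zc with root index u."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

def estimate_symbol_timing(rx, preamble):
    """Return the sample offset maximizing the cross-correlation
    magnitude (np.correlate conjugates its second argument)."""
    corr = np.abs(np.correlate(rx, preamble, mode='valid'))
    return int(np.argmax(corr))
```

The constant-amplitude, low-autocorrelation-sidelobe property of Zadoff-Chu sequences is what makes the correlation peak sharp enough to locate the symbol start reliably in noise.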

  15. Efficient Implementation of a Symbol Timing Estimator for Broadband PLC

    Directory of Open Access Journals (Sweden)

    Francisco Nombela

    2015-08-01

    Full Text Available Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.

  16. Efficiency Optimization Control of IPM Synchronous Motor Drives with Online Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Sadegh Vaez-Zadeh

    2011-04-01

    Full Text Available This paper describes an efficiency optimization control method for high performance interior permanent magnet synchronous motor drives with online estimation of motor parameters. The control system is based on an input-output feedback linearization method which provides high performance control and simultaneously ensures the minimization of motor losses. The controllable electrical loss can be minimized by optimal control of the armature current vector. It is shown that parameter variations, except near nominal conditions, have an undesirable effect on controller performance. Therefore, a parameter estimation method based on the second method of Lyapunov is presented which guarantees the stability and convergence of the estimation. Extensive simulation results show the feasibility and desirable performance of the proposed controller and observer.

  17. Trapped surfaces due to concentration of gravitational radiation

    International Nuclear Information System (INIS)

    Beig, R.; O Murchadha, N.

    1991-01-01

    Sequences of global, asymptotically flat solutions to the time-symmetric initial value constraints of general relativity in vacuo are constructed which develop outer trapped surfaces for large values of the argument. Thus all such configurations must gravitationally collapse. A new proof of the positivity of mass in the strong-field regime is also found. (Authors) 22 refs

  18. A vanishing diffusion limit in a nonstandard system of phase field equations

    Czech Academy of Sciences Publication Activity Database

    Colli, P.; Gilardi, G.; Krejčí, Pavel; Sprekels, J.

    2014-01-01

    Roč. 3, č. 2 (2014), s. 257-275 ISSN 2163-2480 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords: nonstandard phase field system * nonlinear partial differential equations * asymptotic limit Subject RIV: BA - General Mathematics Impact factor: 0.373, year: 2014 http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=9918

  19. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  20. Estimating returns to scale and scale efficiency for energy consuming appliances

    Energy Technology Data Exchange (ETDEWEB)

    Blum, Helcio [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group; Okwelum, Edson O. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group

    2018-01-18

    Energy consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforward to determine. The efficient allocation of capital and energy to provide an energy service has been approached previously, and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economically efficient models of an appliance available in the market when constant returns to scale does not hold. We expand on that approach to estimate returns to scale for energy-using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions on returns to scale. We then apply this approach to evaluate dishwashers available in the market in the U.S. Our results show that (a) for the case of dishwashers, scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increasing consumers’ choice of appliances.

  1. Estimation of absorbed photosynthetically active radiation and vegetation net production efficiency using satellite data

    International Nuclear Information System (INIS)

    Hanan, N.P.; Prince, S.D.; Begue, A.

    1995-01-01

    The amount of photosynthetically active radiation (PAR) absorbed by green vegetation is an important determinant of photosynthesis and growth. Methods for the estimation of the fractional absorption of PAR (fPAR) for areas greater than 1 km² using satellite data are discussed, and are applied to sites in the Sahel that have a sparse herb layer and tree cover of less than 5%. Using harvest measurements of seasonal net production, net production efficiencies are calculated. Variation in estimates of seasonal PAR absorption (APAR) caused by the atmospheric correction method and by the assumed relationship between surface reflectances and fPAR is considered. The use of maximum value composites of satellite NDVI to reduce the effect of the atmosphere is shown to produce inaccurate APAR estimates. In this data set, however, atmospheric correction using average optical depths was found to give good approximations of the fully corrected data. A simulation of canopy radiative transfer using the SAIL model was used to derive a relationship between canopy NDVI and fPAR. Seasonal APAR estimates assuming a 1:1 relationship between fPAR and NDVI overestimated the SAIL modeled results by up to 260%. The use of a modified 1:1 relationship, where fPAR was assumed to be linearly related to NDVI scaled between minimum (soil) and maximum (infinite canopy) values, underestimated the SAIL modeled results by up to 35%. Estimated net production efficiencies (εn, dry matter per unit APAR) fell in the range 0.12–1.61 g MJ⁻¹ for above-ground production, and in the range 0.16–1.88 g MJ⁻¹ for total production. Sites with lower rainfall had reduced efficiencies, probably caused by physiological constraints on photosynthesis during dry conditions. (author)
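
    The scaled-NDVI relationship described above, together with the definition of net production efficiency (dry matter per unit APAR), can be sketched as follows; the soil and infinite-canopy NDVI values are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def fpar_from_ndvi(ndvi, ndvi_soil=0.15, ndvi_inf=0.90):
    """Scaled-NDVI approximation of fractional PAR absorption:
    fPAR rises linearly from 0 at bare-soil NDVI to 1 at the
    infinite-canopy NDVI."""
    f = (np.asarray(ndvi, float) - ndvi_soil) / (ndvi_inf - ndvi_soil)
    return np.clip(f, 0.0, 1.0)

def net_production_efficiency(dry_matter_g_m2, ndvi_series, par_series_mj_m2):
    """Seasonal APAR = sum of fPAR * incident PAR; efficiency is
    g dry matter produced per MJ of APAR."""
    apar = np.sum(fpar_from_ndvi(ndvi_series) * np.asarray(par_series_mj_m2, float))
    return dry_matter_g_m2 / apar
```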

  2. Statistical estimate of mercury removal efficiencies for air pollution control devices of municipal solid waste incinerators.

    Science.gov (United States)

    Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki

    2010-10-15

    Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they are still uncertain because they depend sensitively on many factors such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. 534 measurements of mercury removal efficiency for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. The data series of all APCD types showed Gaussian log-normality. The average removal efficiency with a 95% confidence interval was estimated for each APCD. The FA, WS, and FF with carbon and/or dry sorbent injection systems had average removal efficiencies of 75% to 82%. On the other hand, the ESP with/without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, made less than 1% difference to the removal efficiency. The injection of activated carbon and carbon-containing fly ash in the FF system made less than 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003. This resulted in an additional reduction of about 0.86 Mg of emissions in 2003. Further application of the methodology in this study to other important emission sources such as coal-fired power plants will contribute to better emission inventories.
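
    Two of the simple computations described above can be sketched directly: a capacity-weighted national average removal efficiency, and a geometric mean with an approximate confidence interval for a log-normally distributed efficiency series. The function names are invented for illustration:

```python
import numpy as np

def capacity_weighted_efficiency(removal_pct, capacity):
    """National-average removal efficiency weighted by incineration capacity."""
    e = np.asarray(removal_pct, float)
    c = np.asarray(capacity, float)
    return float(np.sum(e * c) / np.sum(c))

def lognormal_mean_ci(samples, z=1.96):
    """Geometric mean of a log-normally distributed series with an
    approximate 95% confidence interval computed on the log scale."""
    logs = np.log(np.asarray(samples, float))
    mu = logs.mean()
    se = logs.std(ddof=1) / np.sqrt(len(logs))
    return np.exp(mu - z * se), np.exp(mu), np.exp(mu + z * se)
```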

  3. Efficient Spectral Power Estimation on an Arbitrary Frequency Scale

    Directory of Open Access Journals (Sweden)

    F. Zaplata

    2015-04-01

    Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not be suitable for every system. For example, audio and speech analysis needs a logarithmic frequency scale due to the characteristics of human hearing. Fast Fourier Transform algorithms cannot efficiently provide the desired results in this case, and modified techniques have to be used. In the following text, a simple technique based on the Goertzel algorithm is introduced that allows the power spectrum to be evaluated on an arbitrary frequency scale. Due to its simplicity the algorithm suffers from imperfections, which are discussed and partially resolved in this paper. Implementation in real systems and the impact of quantization errors proved to be critical and have to be dealt with in special cases. A simple method for dealing with the quantization error is also introduced. Finally, the proposed method is compared to other methods in terms of its computational demands and potential speed.
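
    The core of the technique, evaluating signal power at an arbitrary normalized frequency with the Goertzel recursion rather than a full FFT, can be sketched as follows (the standard textbook recursion, without the quantization-error handling discussed in the paper):

```python
import numpy as np

def goertzel_power(x, f):
    """Power of x at normalized frequency f (cycles per sample),
    computed with the Goertzel recursion; f need not coincide with
    a DFT bin, which is what enables an arbitrary frequency scale."""
    coeff = 2.0 * np.cos(2.0 * np.pi * f)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2   # second-order resonator update
        s2, s1 = s1, s0
    return s1 ** 2 + s2 ** 2 - coeff * s1 * s2   # |X(f)|^2
```

When f = k/N for an integer k, the result matches the squared magnitude of DFT bin k, but the recursion costs only O(N) per frequency, so a handful of logarithmically spaced frequencies is far cheaper than a full FFT of comparable resolution.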

  4. Efficient Semiparametric Marginal Estimation for the Partially Linear Additive Model for Longitudinal/Clustered Data

    KAUST Repository

    Carroll, Raymond; Maity, Arnab; Mammen, Enno; Yu, Kyusang

    2009-01-01

    We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

  5. Efficient Semiparametric Marginal Estimation for the Partially Linear Additive Model for Longitudinal/Clustered Data

    KAUST Repository

    Carroll, Raymond

    2009-04-23

    We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

  6. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)

    2009-02-23

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as Barabasi-Albert trees grow.

  7. Analytical estimates and proof of the scale-free character of efficiency and improvement in Barabasi-Albert trees

    International Nuclear Information System (INIS)

    Hernandez-Bermejo, B.; Marco-Blanco, J.; Romance, M.

    2009-01-01

    Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as Barabasi-Albert trees grow.

  8. Impact of energy policy instruments on the estimated level of underlying energy efficiency in the EU residential sector

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.; Zorić, Jelena

    2014-01-01

    The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis to econometrically estimate the level of energy efficiency of the residential sector in the EU-27 member states for the period 1996 to 2009. The estimates confirm that the EU residential sector indeed holds a relatively high potential for energy savings from reduced inefficiency; despite the common objective to decrease ‘wasteful’ energy consumption, considerable variation in energy efficiency between the EU member states is found. Furthermore, an attempt is made to evaluate the impact of energy-efficiency measures undertaken in the EU residential sector by introducing an additional set of variables into the model. The results suggest that financial incentives and energy performance standards play an important role in promoting energy efficiency improvements, whereas informative measures do not have a significant impact. - Highlights: • The level of energy efficiency of the EU residential sector is estimated. • Considerable potential for energy savings from reduced inefficiency is established. • The impact of introduced energy-efficiency policy measures is also evaluated. • Financial incentives are found to promote energy efficiency improvements. • Energy performance standards also play an important role.

  9. Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation

    KAUST Repository

    Amin, Osama

    2015-06-08

    Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Different from the existing research, which assumes the availability of perfect channel state information (CSI) at the cooperative communication nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are considered, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) so that the destination can estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and by accounting for the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
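
    The abstract does not detail the power allocation algorithm itself. A standard route for maximizing a ratio-form EE metric is Dinkelbach's method; below is a hedged single-link sketch (the rate model, circuit power and all parameter values are our assumptions, not the paper's two-hop system model):

```python
import math

def maximize_ee(gain=4.0, p_circuit=0.5, p_max=10.0, tol=1e-9):
    """Dinkelbach-style maximization of EE(p) = rate(p) / (p + p_circuit)
    for a toy single-link rate model rate(p) = log2(1 + gain * p).
    Returns (power, achieved EE). All parameters are illustrative."""
    def rate(p):
        return math.log2(1.0 + gain * p)

    lam, p = 0.0, p_max
    for _ in range(200):
        # Inner step: argmax_p rate(p) - lam * (p + p_circuit) on [0, p_max];
        # stationarity gives a closed-form, water-filling-like power level.
        if lam > 0.0:
            p = (gain / (lam * math.log(2.0)) - 1.0) / gain
            p = min(max(p, 0.0), p_max)
        else:
            p = p_max
        new_lam = rate(p) / (p + p_circuit)
        converged = abs(new_lam - lam) < tol
        lam = new_lam
        if converged:
            break
    return p, lam
```

    Dinkelbach's iteration converges to the global optimum of the ratio because the parametric subproblem is concave in p for this rate model.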

  10. Towards physiologically meaningful water-use efficiency estimates from eddy covariance data.

    Science.gov (United States)

    Knauer, Jürgen; Zaehle, Sönke; Medlyn, Belinda E; Reichstein, Markus; Williams, Christopher A; Migliavacca, Mirco; De Kauwe, Martin G; Werner, Christiane; Keitel, Claudia; Kolari, Pasi; Limousin, Jean-Marc; Linderson, Maj-Lena

    2018-02-01

    Intrinsic water-use efficiency (iWUE) characterizes the physiological control on the simultaneous exchange of water and carbon dioxide in terrestrial ecosystems. Knowledge of iWUE is commonly gained from leaf-level gas exchange measurements, which are inevitably restricted in their spatial and temporal coverage. Flux measurements based on the eddy covariance (EC) technique can overcome these limitations, as they provide continuous and long-term records of carbon and water fluxes at the ecosystem scale. However, vegetation gas exchange parameters derived from EC data are subject to scale-dependent and method-specific uncertainties that compromise their ecophysiological interpretation as well as their comparability among ecosystems and across spatial scales. Here, we use estimates of canopy conductance and gross primary productivity (GPP) derived from EC data to calculate a measure of iWUE (G1, "stomatal slope") at the ecosystem level at six sites comprising tropical, Mediterranean, temperate, and boreal forests. We assess the following six mechanisms potentially causing discrepancies between leaf and ecosystem-level estimates of G1: (i) non-transpirational water fluxes; (ii) aerodynamic conductance; (iii) meteorological deviations between measurement height and canopy surface; (iv) energy balance non-closure; (v) uncertainties in net ecosystem exchange partitioning; and (vi) physiological within-canopy gradients. Our results demonstrate that an unclosed energy balance caused the largest uncertainties, in particular if it was associated with erroneous latent heat flux estimates. The effect of aerodynamic conductance on G1 was sufficiently captured with a simple representation. G1 was found to be less sensitive to meteorological deviations between canopy surface and measurement height and, given that data are appropriately filtered, to non-transpirational water fluxes. Uncertainties in the derived GPP and physiological within-canopy gradients and their

  11. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

  12. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacuci, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical.
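
    The economy of the adjoint approach can be seen in a toy steady linear model (the matrices, dimensions and perturbation sizes below are invented for illustration, not the climate model of the paper): after one forward and one adjoint solve, the first-order effect of any number of feedback perturbations is obtained from cheap matrix-vector products instead of full recalculations.

```python
import numpy as np

# Toy steady model: A y = b, scalar response R = c @ y.
# A "feedback" perturbs the operator: A -> A + dA.
rng = np.random.default_rng(0)
n = 6
A = np.eye(n) * 4.0 + 0.3 * rng.normal(size=(n, n))  # well-conditioned
b = rng.normal(size=n)
c = rng.normal(size=n)

y = np.linalg.solve(A, b)     # one forward solve
a = np.linalg.solve(A.T, c)   # one adjoint solve

def adjoint_dR(dA):
    """First-order response change -a @ dA @ y for a feedback dA,
    costing only a matrix-vector product (no new model solve)."""
    return -a @ (dA @ y)

def direct_dR(dA):
    """Reference: recompute the model with the feedback included."""
    return c @ np.linalg.solve(A + dA, b) - c @ y
```

    The adjoint estimate matches the direct recalculation to first order, which mirrors the abstract's point that many feedbacks can be priced for roughly the cost of one recalculation.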

  13. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  14. Efficiency of the estimators of multivariate distribution parameters from the one-dimensional observed frequencies

    International Nuclear Information System (INIS)

    Chernov, N.I.; Kurbatov, V.S.; Ososkov, G.A.

    1988-01-01

    Parameter estimation for multivariate probability distributions is studied in experiments where the data are presented as one-dimensional histograms. For this model a statistic is proposed, defined as a quadratic form of the observed frequencies, which has a limiting χ²-distribution. The estimator minimizing the value of this statistic is proved to be efficient within the class of all unbiased estimators obtained by minimizing quadratic forms of the observed frequencies. The elaborated method was applied to the physical problem of analysing the secondary pion energy distribution in the isobar model of pion-nucleon interactions with the production of an additional pion. Numerical experiments showed that the accuracy of estimation is twice that of the conventional methods.
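
    As a generic illustration of minimum chi-square estimation from binned frequencies (a simple Gaussian location fit of our own devising, not the authors' multivariate statistic), one minimizes a quadratic form of the observed counts over the parameter:

```python
import numpy as np
from math import erf, sqrt

def bin_probs(mu, edges, sigma=1.0):
    """Model probabilities of the histogram bins under N(mu, sigma^2)."""
    cdf = [0.5 * (1.0 + erf((e - mu) / (sigma * sqrt(2.0)))) for e in edges]
    return np.diff(cdf)

def min_chi2_fit(counts, edges, grid):
    """Grid search for the mu minimizing the Pearson chi-square, a
    quadratic form in the observed bin frequencies."""
    n = counts.sum()
    best_mu, best_q = grid[0], np.inf
    for mu in grid:
        expected = n * bin_probs(mu, edges)
        q = np.sum((counts - expected) ** 2 / expected)
        if q < best_q:
            best_mu, best_q = mu, q
    return best_mu
```

    In practice a proper optimizer would replace the grid search; the sketch only shows the structure of the objective.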

  15. Estimation efficiency of usage satellite derived and modelled biophysical products for yield forecasting

    Science.gov (United States)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the data available. In our current work the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, analysis of vegetation indexes and products) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for landcover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A, "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235 - 3238.

  16. Experimental study on source efficiencies for estimating surface contamination level

    International Nuclear Information System (INIS)

    Ichiji, Takeshi; Ogino, Haruyuki

    2008-01-01

    Source efficiency was measured experimentally for various materials, such as metals, nonmetals, flooring materials, sheet materials and other materials, contaminated by alpha- and beta-emitting radionuclides. Five nuclides, 147Pm, 60Co, 137Cs, 204Tl and 90Sr-90Y, were used as beta emitters, and one nuclide, 241Am, was used as the alpha emitter. The test samples were prepared by placing drops of the radioactive standardized solutions uniformly on the various materials using an automatic quantitative dispenser system from Musashi Engineering, Inc. After placing drops of the radioactive standardized solutions, the test materials were allowed to dry for more than 12 hours in a draft chamber with a hood. The radioactivity of each test material was about 30 Bq. Beta rays or alpha rays from the test materials were measured with a 2-pi gas flow proportional counter from Aloka Co., Ltd. The source efficiencies of the metals, nonmetals and sheet materials were higher than 0.5 in the case of contamination by the 137Cs, 204Tl and 90Sr-90Y radioactive standardized solutions, higher than 0.4 in the case of contamination by the 60Co radioactive standardized solution, and higher than 0.25 in the case of contamination by the alpha-emitting 241Am radioactive standardized solution. These values were higher than those given in Japanese Industrial Standards (JIS) documents. In contrast, the source efficiencies of some permeable materials were lower than those given in JIS documents, because source efficiency varies depending on whether the materials or radioactive sources are wet or dry. This study provides basic data on source efficiency, which is useful for estimating the surface contamination level of materials. (author)

  17. A novel method for coil efficiency estimation: Validation with a 13C birdcage

    DEFF Research Database (Denmark)

    Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina

    2012-01-01

    Coil efficiency, defined as the B1 magnetic field induced at a given point divided by the square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...

  18. Estimating the Efficiency and Impacts of Petroleum Product Pricing Reforms in China

    Directory of Open Access Journals (Sweden)

    Chuxiong Deng

    2018-04-01

    The efficiency and effects analysis of a new pricing mechanism has significant policy implications for the further design of pricing mechanisms in an emerging market. Unlike most of the existing literature, which focuses on the impacts on the macro-economy, this paper first uses an econometric model to discuss the efficiency of the new pricing mechanism, and then establishes an augmented Phillips curve to estimate the impact of pricing reform on inflation in China. The results show that: (1) the new pricing mechanism strengthens the linkage between Chinese oil prices and international oil prices; (2) oil price adjustments are still inadequate in China; (3) the lag in inflation is the most important factor that affects inflation, while the impact of the Chinese government’s price adjustments on inflation is limited and insignificant. In order to improve the efficiency of the petroleum product pricing mechanism and shorten lags, the government should shorten the adjustment period and diminish the fluctuation threshold.

  19. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon-41, an air activation product, and the fission product noble gas xenon-133. For estimating the concentration, an experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of the gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made, using a theoretical approach, to validate the experimental method of efficiency estimation. Two analytical models, named the relative flux model and the absolute activity model, were developed independently of each other. Attention is focussed on the efficiencies for 41Ar and 133Xe. Results show that the present method of sampling and analysis using a 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  20. SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI

    Energy Technology Data Exchange (ETDEWEB)

    Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainty in the tagging efficiency of pCASL remains an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare the resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required for obtaining a stable deltaM value. Taking the deltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.

  1. Estimation of Maize photosynthesis Efficiency Under Deficit Irrigation and Mulch

    International Nuclear Information System (INIS)

    Al-Hadithi, S.

    2004-01-01

    This research aims at estimating maize photosynthesis efficiency under deficit irrigation and soil mulching. A split-split plot design experiment was conducted with three replicates during the fall season of 2000 and the spring season of 2001 at the Experimental Station of the Soil Dept./Iraq Atomic Energy Commission. The main plots were assigned to the full irrigation (C, control) and deficit irrigation treatments. The deficit irrigation treatments consisted of the omission of one irrigation at the establishment (S1, 15 days), vegetation (S2, 35 days), flowering (S3, 40 days) or yield formation (S4, 30 days) stage. The sub-plots were allocated to the two varieties, Synthetic 5012 (V1) and Hybrid 2052 (V2). The sub-sub-plots were assigned to mulch (M1) with wheat straw and no mulch (M0). Results showed that deficit irrigation did not affect photosynthesis efficiency in either season; it ranged between 1.90 and 2.15% in the fall season and between 1.18 and 1.45% in the spring season. The hybrid variety was superior to the synthetic variety by 9.39 and 9.15% in the fall and spring seasons, respectively. Deficit irrigation, variety and mulch had no significant effects on the harvest index in either season. This indicates that the two varieties were stable in their partitioning of nutrient matter between plant organs and grains under the conditions of this experiment. (Author) 21 refs., 3 figs., 6 tabs

  2. Fourier Spot Volatility Estimator: Asymptotic Normality and Efficiency with Liquid and Illiquid High-Frequency Data

    Science.gov (United States)

    2015-01-01

    The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by microstructure noise. We address this issue by using the Fourier estimator of instantaneous volatility introduced in Malliavin and Mancino (2002). We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis on high frequency data (U.S. S&P500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617
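
    The Fourier method builds on Fourier coefficients of the log-price increments. As a rough numerical illustration only (our own normalization and a simulated constant-volatility path; the paper's spot-volatility reconstruction and microstructure-noise corrections are not reproduced), the integrated variance can be recovered by averaging squared Fourier coefficients:

```python
import numpy as np

def fourier_integrated_variance(log_prices, n_freq):
    """Fourier (Malliavin-Mancino-style) estimator of the integrated
    variance of a log-price path, with time rescaled to [0, 2*pi]."""
    m = len(log_prices)
    t = np.linspace(0.0, 2.0 * np.pi, m)  # rescaled observation times
    dp = np.diff(log_prices)
    ks = np.arange(-n_freq, n_freq + 1)
    # Fourier coefficients of dp: c_k = (1/2pi) sum_j exp(-i k t_j) dp_j
    c = np.exp(-1j * np.outer(ks, t[:-1])) @ dp / (2.0 * np.pi)
    # Averaging |c_k|^2 over 2*n_freq+1 frequencies recovers the
    # integrated variance (quadratic variation) of the path.
    return (2.0 * np.pi) ** 2 / (2 * n_freq + 1) * float(np.sum(np.abs(c) ** 2))
```

    For a simulated Brownian log-price with volatility 0.2 the estimator should return a value close to 0.04, the true quadratic variation.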

  3. SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop

    Science.gov (United States)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2013-04-01

    The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). In the case of high efficiency of irrigation water use, the volume of water applied is less than the potential evapotranspiration (PET); then - assuming no significant change of soil moisture storage from the beginning of the growing season to its end - the volume of water may be roughly equal to ET. In the other case of low efficiency of irrigation water use, the volume of water applied exceeds PET, and the excess of the volume of water applied over PET must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff or/and deep percolation beyond the root zone. In the presented contribution some results of a case study of the estimation of biomass and leaf area index (LAI) for irrigated alfalfa by the SEBAL algorithm will be discussed. The field study was conducted with the aim of comparing the ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All the operations from importing the data to calculation of the output data were carried out by the eLEAF company and uploaded to the Fieldlook web

  4. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, the inverse of a 3n×3n matrix is required for the gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for the gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by the inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce the pointing accuracy. Therefore, a calibration algorithm is utilized for the estimation of the main gyro parameters.
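
    The gain from sequential processing can be illustrated in a few lines of linear algebra. The sketch below (generic numpy, not the authors' flight software) shows a Murrell-style update that handles 3-element measurement blocks one at a time; for uncorrelated blocks it reproduces the batch update while only ever inverting 3×3 matrices:

```python
import numpy as np

def batch_update(x, P, H, R, z):
    """Standard Kalman measurement update processing all measurements at
    once: requires inverting the full (3n x 3n) innovation covariance."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def sequential_update(x, P, H, R, z, block=3):
    """Murrell-style update: process each 3-row measurement block
    separately, so only 3x3 inverses are needed. Exact when the
    measurement noise is uncorrelated across blocks."""
    for i in range(0, len(z), block):
        Hi = H[i:i + block]
        Ri = R[i:i + block, i:i + block]
        Si = Hi @ P @ Hi.T + Ri
        Ki = P @ Hi.T @ np.linalg.inv(Si)
        x = x + Ki @ (z[i:i + block] - Hi @ x)
        P = (np.eye(len(x)) - Ki @ Hi) @ P
    return x, P
```

    With a block-diagonal measurement covariance the two routines give identical posteriors, which is why Murrell's version trades one 3n×3n inverse for n cheap 3×3 inverses.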

  5. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    Science.gov (United States)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data, and may therefore not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method is developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It is emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.

  6. Estimating the cost of saving electricity through U.S. utility customer-funded energy efficiency programs

    International Nuclear Information System (INIS)

    Hoffman, Ian M.; Goldman, Charles A.; Rybka, Gregory; Leventis, Greg; Schwartz, Lisa; Sanstad, Alan H.; Schiller, Steven

    2017-01-01

    The program administrator and total cost of saved energy allow comparison of the cost of efficiency across utilities, states, and program types, and can identify potential performance improvements. Comparing program administrator cost with the total cost of saved energy can indicate the degree to which programs leverage investment by participants. Based on reported total costs and savings information for U.S. utility efficiency programs from 2009 to 2013, we estimate the savings-weighted average total cost of saved electricity across 20 states at $0.046 per kilowatt-hour (kWh), comparing favorably with energy supply costs and retail rates. Programs targeted at the residential market averaged $0.030 per kWh compared to $0.053 per kWh for non-residential programs. Lighting programs, with an average total cost of $0.018 per kWh, drove lower savings costs in the residential market. We provide estimates for the most common program types and find that program administrators and participants on average are splitting the costs of efficiency in half. More consistent, standardized and complete reporting on efficiency programs is needed. Differing definitions and quantification of costs, savings and savings lifetimes pose challenges for comparing program results. Reducing these uncertainties could increase confidence in efficiency as a resource among planners and policymakers. - Highlights: • The cost of saved energy allows comparisons among energy resource investments. • Findings from the most expansive collection yet of total energy efficiency program costs. • The weighted average total cost of saved electricity was $0.046 per kWh for 20 states in 2009–2013. • Averages in the residential and non-residential sectors were $0.030 and $0.053 per kWh, respectively. • Results strongly indicate the need for more consistent, reliable and complete reporting on efficiency programs.

  7. Estimation of hospital efficiency--do different definitions and casemix measures for hospital output affect the results?

    Science.gov (United States)

    Vitikainen, Kirsi; Street, Andrew; Linna, Miika

    2009-02-01

    Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results due to these factors. Comparison is made between activities and episode output measures, and two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as an analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but more so to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because these better capture the patient pathway, while FullDRGs are preferred to Classic DRGs particularly because of the better description of outpatient output in the former grouping system. Attention should be paid to reducing the extent of scale inefficiency in Finland.
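Data envelopment analysis of the kind used above can be written as a small linear program per decision-making unit. The following is an illustrative input-oriented CCR model with made-up hospital data, assuming SciPy's `linprog` is available; it is a sketch, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR (constant returns) efficiency of DMU j0.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    Solves: min theta  s.t.  sum_j lam_j x_j <= theta * x0,
                             sum_j lam_j y_j >= y0,  lam >= 0,
    with decision vector z = [theta, lam_1..lam_n].
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta
    # Input constraints: X^T lam - theta * x0 <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -Y^T lam <= -y0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Hypothetical hospitals: one input (beds), one output (treated episodes)
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
for j in range(3):
    print(j, round(dea_ccr_input(X, Y, j), 3))
```

Efficient units score 1.0; the middle hospital, producing the same output from twice the input of the first, scores 0.5. Variable-returns (BCC) variants add a convexity constraint on the weights.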

  8. Efficient bootstrap estimates for tail statistics

    Science.gov (United States)

    Breivik, Øyvind; Aarnes, Ole Johan

    2017-03-01

    Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
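The subset idea can be illustrated directly: for a high quantile, only the largest entries of the sample can ever occupy the relevant rank in a resample, so the number of resample points landing in the top m values can be drawn from a binomial and the rest of the sample ignored. A sketch with synthetic Gumbel data (all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n, B = 20000, 1000
sample = rng.gumbel(size=n)
q = 0.999                        # high quantile of interest
k = int(np.ceil((1 - q) * n))    # rank from the top defining the quantile

# Full bootstrap: resample all n values each time (expensive for large n)
full = np.array([np.sort(rng.choice(sample, n))[-k] for _ in range(B)])

# Subset bootstrap: only the top m values can ever occupy rank k, so draw
# how many resample points land in the top m, then resample just those.
m = 20 * k                       # comfortable margin above rank k
top = np.sort(sample)[-m:]
sub = np.empty(B)
for b in range(B):
    K = rng.binomial(n, m / n)   # resample points falling in the top m
    sub[b] = np.sort(rng.choice(top, K))[-k]

for arr in (full, sub):
    lo, hi = np.percentile(arr, [2.5, 97.5])
    print(f"95% CI for the {q:.1%} quantile: [{lo:.3f}, {hi:.3f}]")
```

The two bootstrap distributions agree closely while the subset version touches only m = 20k values per replicate, which is the computational saving the abstract points to.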

  9. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  10. Estimation of Gasoline Price Elasticities of Demand for Automobile Fuel Efficiency in Korea: A Hedonic Approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)

    2001-03-01

This paper estimates the gasoline price elasticities of demand for automobile fuel efficiency in Korea to examine indirectly whether the government policy of raising fuel prices is effective in inducing less consumption of fuel, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One of the advantages of this technique is that data for a single year, without involving variation in the price of gasoline, are sufficient for implementing the study. Moreover, this technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.

  11. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; mean RMSE …). These results indicate that plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite based LUE models.
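A MOD17-style LUE calculation can be sketched as follows; the ramp breakpoints and parameter values below are illustrative placeholders, not the MOD17 biome constants or the authors' fitted LUEopt values:

```python
def ramp(x, x0, x1):
    """Linear ramp scalar: 0 at or below x0, 1 at or above x1."""
    if x <= x0:
        return 0.0
    if x >= x1:
        return 1.0
    return (x - x0) / (x1 - x0)

def gpp(par, fpar, lue_opt, tmin_c, vpd_pa):
    """Daily GPP (g C m-2 d-1), MOD17-style LUE formulation.

    par: incident PAR (MJ m-2 d-1); fpar: fraction absorbed (0-1);
    lue_opt: optimal light use efficiency (g C per MJ APAR);
    tmin_c, vpd_pa: daily minimum temperature and vapour pressure deficit.
    The breakpoints below are illustrative, not calibrated constants.
    """
    f_t = ramp(tmin_c, -8.0, 10.0)               # freezing-temperature constraint
    f_vpd = 1.0 - ramp(vpd_pa, 650.0, 4000.0)    # stomatal closure at high VPD
    return lue_opt * fpar * par * f_t * f_vpd

print(gpp(par=10.0, fpar=0.8, lue_opt=1.2, tmin_c=10.0, vpd_pa=650.0))
```

Making `lue_opt` a map rather than a biome constant, as in the study above, changes only the input to this product, not its structure.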

  12. Estimation on separation efficiency of aluminum from base-cap of spent fluorescent lamp in hammer crusher unit.

    Science.gov (United States)

    Rhee, Seung-Whee

    2017-09-01

In order to separate aluminum from the base-cap of spent fluorescent lamps (SFL), the separation efficiency of a hammer crusher unit is estimated by introducing a binary separation theory. The base-cap of an SFL is composed of glass fragments, binder, ferrous metal, copper and aluminum. The hammer crusher unit used to recover aluminum from the base-cap consists of three stages: a hammer crusher, a magnetic separator and a vibrating screen. The optimal rotating speed and operating time in the hammer crusher unit are decided at each stage. At the optimal conditions, the aluminum yield and the separation efficiency of the hammer crusher unit are estimated by applying a sequential binary separation theory at each stage. The separation efficiencies of the hammer crusher unit and a roll crusher system are then compared to show the performance of aluminum recovery from the base-cap of SFL. Since the separation efficiency can be increased to 99% at stage 3, the experimental results show that aluminum can be sufficiently recovered from the base-cap by the hammer crusher unit. Copyright © 2017. Published by Elsevier Ltd.
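One common binary separation index is the recovery of the target component in the product minus the carry-over of the other component (a Newton-type efficiency). A toy mass-balance sketch with hypothetical numbers, not the paper's measurements:

```python
def separation_efficiency(target_in, target_out, other_in, other_out):
    """Newton-type binary separation efficiency:
    recovery of the target in the product minus carry-over of the other."""
    r_target = target_out / target_in
    r_other = other_out / other_in
    return r_target - r_other

# Hypothetical mass balance (g) after the final screening stage:
# aluminum recovered vs glass/binder carried into the aluminum product.
eff = separation_efficiency(target_in=100.0, target_out=99.5,
                            other_in=900.0, other_out=4.5)
print(f"separation efficiency: {eff:.3f}")   # 0.995 - 0.005 = 0.990
```

Applying the index after each stage, with the previous stage's product as the next stage's feed, gives the sequential per-stage figures described in the abstract.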

  13. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, either of the proposed VRTs can be used to increase the relative efficiency.
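The second VRT described above, following only a fraction of the photons while rescaling their statistical weight, can be sketched in a few lines. The detection probability and fraction below are arbitrary toy values, not fastdetect2 behavior:

```python
import random

random.seed(1)

DETECT_P = 0.3   # toy per-photon detection probability (geometry-dependent in reality)

def analog(n):
    """Follow every photon; score 1 per detection."""
    return sum(1.0 for _ in range(n) if random.random() < DETECT_P) / n

def vrt_fraction(n, f):
    """Follow only a fraction f of photons, carrying weight 1/f each,
    so the expected score (the signal mean) is unchanged."""
    total = 0.0
    for _ in range(n):
        if random.random() < f:          # photon selected for transport
            if random.random() < DETECT_P:
                total += 1.0 / f         # statistical weight preserves the mean
    return total / n

n = 200_000
print(analog(n), vrt_fraction(n, f=0.2))  # both estimate 0.3
```

The weighting keeps the estimator unbiased while the transported population shrinks by 1/f; the price, as the paper analyzes, is an increase in the per-history variance that must be traded against the reduced computing time.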

  14. Avoided cost estimation and post-reform funding allocation for California's energy efficiency programs

    International Nuclear Information System (INIS)

    Baskette, C.; Horii, B.; Price, S.; Kollman, E.

    2006-01-01

    This paper summarizes the first comprehensive estimation of California's electricity avoided costs since the state reformed its electricity market. It describes avoided cost estimates that vary by time and location, thus facilitating targeted design, funding, and marketing of demand-side management (DSM) and energy efficiency (EE) programs that could not have occurred under the previous methodology of system average cost estimation. The approach, data, and results reflect two important market structure changes: (a) wholesale spot and forward markets now supply electricity commodities to load serving entities; and (b) the evolution of an emissions market that internalizes and prices some of the externalities of electricity generation. The paper also introduces the multiplier effect of a price reduction due to DSM/EE implementation on electricity bills of all consumers. It affirms that area- and time-specific avoided cost estimates can improve the allocation of the state's public funding for DSM/EE programs, a finding that could benefit other parts of North America (e.g. Ontario and New York), which have undergone electricity deregulation. (author)

  15. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
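The core observability test behind such methods reduces to a rank condition on the measurement Jacobian. A toy numeric sketch (the matrices are illustrative, not the paper's algebraic row/column-transfer scheme):

```python
import numpy as np

def observable(H, n_states):
    """A measurement set renders the network observable when the Jacobian
    H (one row per measurement) has full column rank over the states."""
    return np.linalg.matrix_rank(H) == n_states

# Toy 3-state system: rows are measurement sensitivities to the states.
H = np.array([[1.0, -1.0, 0.0],    # flow-type measurement between states 1-2
              [0.0, 1.0, -1.0]])   # flow-type measurement between states 2-3
print(observable(H, 3))            # False: only 2 independent rows

# A pseudo-measurement that raises the rank restores observability.
H2 = np.vstack([H, [1.0, 0.0, 0.0]])   # hypothetical injection pseudo-measurement
print(observable(H2, 3))           # True
```

Candidate pseudo-measurements that do not raise the rank are redundant; ones that do, as in the second matrix, are exactly the restoration choices the proposed methodology enumerates algebraically.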

  16. Estimating the energy and exergy utilization efficiencies for the residential-commercial sector: an application

    International Nuclear Information System (INIS)

    Utlu, Zafer; Hepbasli, Arif

    2006-01-01

    The main objectives in carrying out the present study are twofold, namely to estimate the energy and exergy utilization efficiencies for the residential-commercial sector and to compare those of various countries with each other. In this regard, Turkey is given as an illustrative example with its latest figures in 2002 since the data related to the following years are still being processed. Total energy and exergy inputs in this year are calculated to be 3257.20 and 3212.42 PJ, respectively. Annual fuel consumptions in space heating, water heating and cooking activities as well as electrical energy uses by appliances are also determined. The energy and exergy utilization efficiency values for the Turkish residential-commercial sector are obtained to be 55.58% and 9.33%, respectively. Besides this, Turkey's overall energy and exergy utilization efficiencies are found to be 46.02% and 24.99%, respectively. The present study clearly indicates the necessity of the planned studies toward increasing exergy utilization efficiencies in the sector studied
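The gap between energy and exergy efficiency reported above comes from weighting low-temperature heat by its Carnot factor. A minimal sketch with hypothetical boiler numbers, not the study's sectoral data:

```python
def energy_efficiency(useful_out, energy_in):
    """First-law efficiency: useful energy delivered per unit input."""
    return useful_out / energy_in

def exergy_efficiency(useful_out, energy_in, t_use_k, t0_k):
    """For low-temperature heat, only the Carnot fraction of the delivered
    heat is exergy; the fuel input is treated as nearly pure exergy here."""
    carnot = 1.0 - t0_k / t_use_k
    return useful_out * carnot / energy_in

# Hypothetical space heating: 90% first-law efficient boiler,
# delivering heat at 55 C with surroundings at 0 C.
q_in, q_out = 100.0, 90.0
eta = energy_efficiency(q_out, q_in)
psi = exergy_efficiency(q_out, q_in, t_use_k=328.15, t0_k=273.15)
print(f"energy efficiency {eta:.2%}, exergy efficiency {psi:.2%}")
```

A high first-law efficiency collapses to roughly 15% on an exergy basis, which mirrors the 55.58% versus 9.33% contrast reported for the residential-commercial sector.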

  17. Estimation of technical efficiency and it's determinants in the hybrid maize production in district chiniot: a cobb douglas model approach

    International Nuclear Information System (INIS)

    Naqvi, S.A.A.; Ashfaq, M.

    2014-01-01

A high-yielding crop like maize is very important for a country like Pakistan, where it is the third cereal crop after wheat and rice. Maize accounts for 4.8 percent of the total cropped area and 4.82 percent of the value of agricultural production. It is grown all over the country, but the major areas are Sahiwal, Okara and Faisalabad. Chiniot is one of the distinct agroecological domains of central Punjab for maize cultivation, which is why this district was selected for the study and the technical efficiency of hybrid maize farmers was estimated. Primary data from 120 farmers, 40 from each of the three tehsils of Chiniot, were collected in the year 2011. The causes of lower yields for some farmers than others, while using the same input bundle, were estimated. The managerial factors causing production inefficiency were also measured. The average technical efficiency was estimated to be 91 percent, while it was found to be 94.8, 92.7 and 90.8 percent for large, medium and small farmers, respectively. A stochastic frontier production model was used to measure technical efficiency. The statistical software Frontier 4.1 was used to analyse the data and generate inferences, because the efficiency estimates are produced as a direct output of the package. It was concluded that efficiency can be enhanced by addressing the inefficiency arising from environmental variables, farmers' personal characteristics and farming conditions. (author)

  18. Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Yang-Yang Dong

    2018-01-01

Although an L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) estimation performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for L-shaped arrays with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for the L-shaped array and compensate for the mutual coupling blindly by sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong source signal correlation. Finally, the azimuth and elevation angles are estimated simultaneously, without pair matching, via the complex eigenvalue technique. Compared with existing methods, the proposed method is computationally efficient, requiring no spectrum search or polynomial rooting, and also achieves fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.

  19. ESTIMATION OF EFFICIENCY OF OPERATING SYSTEM OF TAX PLANNING IN THE COMMERCIAL ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    Evgeniy A. Samsonov

    2014-01-01

This article is devoted to the assessment of the efficiency of the stimulating mechanisms (tools) of tax planning systems in commercial organizations, which make it possible to evaluate the multidirectional influence of taxes on an organization's final financial result and to predict changes in its business activity depending on the tax burden. Considerable attention is given to the complicated questions of tax management and of how the facts of economic activity arising between the state, on the one hand, and the managing subjects, the commercial organizations, on the other, are reflected in tax accounting.

  20. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric… a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online.

  1. The estimation of energy efficiency for hybrid refrigeration system

    International Nuclear Information System (INIS)

    Gazda, Wiesław; Kozioł, Joachim

    2013-01-01

    Highlights: ► We present the experimental setup and the model of the hybrid cooling system. ► We examine impact of the operating parameters of the hybrid cooling system on the energy efficiency indicators. ► A comparison of the final and the primary energy use for a combination of the cooling systems is carried out. ► We explain the relationship between the COP and PER values for the analysed cooling systems. -- Abstract: The concept of the air blast-cryogenic freezing method (ABCF) is based on an innovative hybrid refrigeration system with one common cooling space. The hybrid cooling system consists of a vapor compression refrigeration system and a cryogenic refrigeration system. The prototype experimental setup for this method on the laboratory scale is discussed. The application of the results of experimental investigations and the theoretical–empirical model makes it possible to calculate the cooling capacity as well as the final and primary energy use in the hybrid system. The energetic analysis has been carried out for the operating modes of the refrigerating systems for the required temperatures inside the cooling chamber of −5 °C, −10 °C and −15 °C. For the estimation of the energy efficiency the coefficient of performance COP and the primary energy ratio PER for the hybrid refrigeration system are proposed. A comparison of these coefficients for the vapor compression refrigeration and the cryogenic refrigeration system has also been presented.
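The two indicators proposed above can be sketched as follows; the operating point and the primary-to-electricity conversion efficiency are hypothetical, not the paper's measured values:

```python
def cop(cooling_kw, electric_kw):
    """Coefficient of performance: useful cooling per unit final energy."""
    return cooling_kw / electric_kw

def per(cooling_kw, electric_kw, grid_efficiency):
    """Primary energy ratio: useful cooling per unit primary energy,
    i.e. the COP discounted by the primary-to-electricity conversion chain."""
    return cop(cooling_kw, electric_kw) * grid_efficiency

# Hypothetical operating point of the vapor compression stage at -10 C
cooling, power = 12.0, 4.0
print(f"COP = {cop(cooling, power):.2f}")                       # 3.00
print(f"PER = {per(cooling, power, grid_efficiency=0.4):.2f}")  # 1.20
```

The distinction matters when comparing the vapor compression and cryogenic stages: a system can look favorable on COP (final energy) yet unfavorable on PER once the primary energy embodied in its input is counted.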

  2. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  3. Estimation of combustion flue gas acid dew point during heat recovery and efficiency gain

    Energy Technology Data Exchange (ETDEWEB)

    Bahadori, A. [Curtin University of Technology, Perth, WA (Australia)

    2011-06-15

When cooling combustion flue gas for heat recovery and efficiency gain, the temperature must not be allowed to drop below the sulfur trioxide dew point. Below the SO{sub 3} dew point, very corrosive sulfuric acid forms and leads to operational hazards on metal surfaces. In the present work, a simple-to-use predictive tool, easier to apply and less computationally demanding than existing approaches, is formulated to provide an appropriate estimate of the acid dew point during combustion flue gas cooling as a function of fuel type, sulfur content in the fuel, and excess air level. The resulting information can then be applied to estimate the acid dew point for sulfur in various fuels up to a 0.10 volume fraction in gas (0.10 mass fraction in liquid), excess air fractions up to 0.25, and elemental concentrations of carbon up to 3. The proposed predictive tool shows very good agreement with the reported data, with an average absolute deviation of around 3.18%. This approach can be of immense practical value for engineers and scientists for a quick estimation of the acid dew point during combustion flue gas cooling for heat recovery and efficiency gain over a wide range of operating conditions, without the necessity of any pilot plant setup and tedious experimental trials. In particular, process and combustion engineers would find the tool user friendly, involving transparent calculations with no complex expressions.

  4. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    Directory of Open Access Journals (Sweden)

    Junpeng Shi

    2017-02-01

In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional (2D) direction of arrival (DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where we only perform differencing on the auto-correlation matrices and keep the cross-correlation matrices intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods under white or colored noise.

  5. Efficient Bayesian estimates for discrimination among topologically different systems biology models.

    Science.gov (United States)

    Hagen, David R; Tidor, Bruce

    2015-02-01

    A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology-the set of interactions-of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.

  6. OPTIMIZATION OF THE CRITERION FOR ESTIMATING THE TECHNOLOGY EFFICIENCY OF PACKING-CASE-PIECE LOADS DELIVERY

    OpenAIRE

    O. Severyn; O. Shulika

    2017-01-01

The article presents the results of optimizing the weighting coefficients for the indexes included in the integral criterion for estimating the efficiency of transport-technological schemes of cargo delivery. The values of the weighting coefficients are determined on the basis of two experimental research methods: a survey of specialists in motor transport operations and simulation modelling.

  7. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
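The special case discussed above has a closed form: for a Poisson working model with only an intercept and a main treatment term, the MLE of the treatment coefficient is the log ratio of arm means. A simulation sketch under a deliberately non-Poisson outcome, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized trial with a count outcome whose true model is NOT
# Poisson (gamma-mixed, i.e. negative-binomial-like), so the working model
# is misspecified, as in the result described above.
n = 50_000
a = rng.integers(0, 2, n)                        # randomized treatment arm
lam = np.where(a == 1, 3.0, 2.0) * rng.gamma(2.0, 0.5, n)  # mean 3 vs 2
y = rng.poisson(lam)

# For the Poisson working model with an intercept and a main treatment term,
# the MLE of the treatment coefficient has the closed form
# log(mean(y | a=1) / mean(y | a=0)) -- the marginal log rate ratio.
beta_hat = np.log(y[a == 1].mean() / y[a == 0].mean())
print(beta_hat, np.log(3.0 / 2.0))               # estimate vs true value
```

Despite the misspecification, the estimate concentrates around the true marginal log rate ratio log(3/2), illustrating the asymptotic unbiasedness claim; the paper's contribution is proving this and the local efficiency property in general.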

  8. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  9. Scaling gross ecosystem production at Harvard Forest with remote sensing: a comparison of estimates from a constrained quantum-use efficiency model and eddy correlation

    International Nuclear Information System (INIS)

    Waring, R.H.; Law, B.E.; Goulden, M.L.; Bassow, S.L.; McCreight, R.W.; Wofsy, S.C.; Bazzaz, F.A.

    1995-01-01

Two independent methods of estimating gross ecosystem production (GEP) were compared over a period of 2 years at monthly integrals for a mixed forest of conifers and deciduous hardwoods at Harvard Forest in central Massachusetts. Continuous eddy flux measurements of net ecosystem exchange (NEE) provided one estimate of GEP by taking day to night temperature differences into account to estimate autotrophic and heterotrophic respiration. GEP was also estimated with a quantum efficiency model based on measurements of maximum quantum efficiency (Qmax), seasonal variation in canopy phenology and chlorophyll content, incident PAR, and the constraints of freezing temperatures and vapour pressure deficits on stomatal conductance. Quantum efficiency model estimates of GEP and those derived from eddy flux measurements compared well at monthly integrals over two consecutive years (R2 = 0.98). Remotely sensed data were acquired seasonally with an ultralight aircraft to provide a means of scaling the leaf area and leaf pigmentation changes that affected the light absorption of photosynthetically active radiation to larger areas. A linear correlation between chlorophyll concentrations in the upper canopy leaves of four hardwood species and their quantum efficiencies (R2 = 0.99) suggested that seasonal changes in quantum efficiency for the entire canopy can be quantified with remotely sensed indices of chlorophyll. Analysis of video data collected from the ultralight aircraft indicated that the fraction of conifer cover varied from < 7% near the instrument tower to about 25% for a larger sized area. At 25% conifer cover, the quantum efficiency model predicted an increase in the estimate of annual GEP of < 5% because unfavourable environmental conditions limited conifer photosynthesis in much of the non-growing season when hardwoods lacked leaves.

  10. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...

  11. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions...... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...

  12. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  13. Effect of LET on the efficiency of dose re-estimation in LiF using uv photo-transfer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, J A; Baker, D M; Marshall, M; Budd, T [UKAEA Atomic Energy Research Establishment, Harwell. Environmental and Medical Sciences Div.

    1980-09-01

    Glow curves from TLD600 and TLD700 extruded rods exposed to γ-, X- and neutron radiations have been compared before and after uv photo-transfer. Re-estimation efficiency increases with LET by an amount which varies from batch to batch.

  14. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will

  15. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    Science.gov (United States)

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (ΦCO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of ΦCO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of ΦCO2LL. A model method accounting for this error was developed, and was used to estimate ΦCO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method underestimated ΦCO2LL by ca. 10-15%. Differences in the estimated ΦCO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.
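The conventional slope-regression estimate criticised in this record is easy to reproduce. The sketch below uses synthetic data in which photochemical efficiency declines mildly with irradiance (the assumed saturation term stands in for the effect the paper describes), and shows that the fitted slope falls below the true quantum yield.

```python
import numpy as np

# Synthetic low-irradiance light response: A = phi*I - Rd, with a mild
# efficiency decline at higher I (the 0.001 factor is an assumed shape).
phi_true, rd = 0.06, 1.0
irradiance = np.linspace(10, 150, 15)
a_net = phi_true * irradiance * (1 - 0.001 * irradiance) - rd

# Conventional method: slope of the linear regression of A on irradiance
slope, intercept = np.polyfit(irradiance, a_net, 1)
print(slope)   # noticeably below phi_true = 0.06
```

Even though every point sits in the "low light" range, the fitted slope systematically underestimates the true yield, which is the error source the model method in the record corrects for.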

  16. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    Science.gov (United States)

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000x, thereby greatly improving the applicability of the method.
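For contrast with the optimized movement-based estimator in this record, a plain fixed-bandwidth 3D kernel density estimate over GPS fixes can be sketched with SciPy. This is only the naive baseline (no movement model, no parallelization); the coordinates are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical GPS fixes: easting, northing, altitude (metres)
pts = rng.normal(loc=[500.0, 300.0, 50.0], scale=[40.0, 40.0, 10.0], size=(200, 3))

kde = gaussian_kde(pts.T)                       # gaussian_kde wants rows = dimensions
density_at_centre = kde([[500.0], [300.0], [50.0]])[0]
density_far_away = kde([[900.0], [900.0], [200.0]])[0]
print(density_at_centre, density_far_away)
```

The vertical dimension enters the kernel exactly like the horizontal ones, which is why species with large vertical excursions need the full 3D (rather than 2D) treatment the paper advocates.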

  17. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  18. Estimating the changes in the distribution of energy efficiency in the U.S. automobile assembly industry

    International Nuclear Information System (INIS)

    Boyd, Gale A.

    2014-01-01

    This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in “best practice” for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some “catching up” of poor performing plants over time. - Highlights: • A non-public dataset of U.S. auto manufacturing plants is compiled. • A stochastic frontier with a gamma distribution is applied to plant level data. • Electricity and fuel use are modeled separately. • Comparison to prior analysis reveals a shift in the frontier and “catching up”. • Results are used by ENERGY STAR to award energy efficiency plant certifications

  19. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
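The record's global update (smoothing followed by thresholding) can be illustrated on per-pixel label costs: blur each label's cost map, then take the per-pixel argmin. This is an illustrative stand-in for the paper's update, not its exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_assign(costs, sigma=2.0):
    """One global multi-label update in the spirit of smoothing +
    thresholding: blur each label's cost map, take the argmin per pixel."""
    smoothed = np.stack([gaussian_filter(c, sigma) for c in costs])
    return smoothed.argmin(axis=0)

# Two-label toy image: left half favours label 0, right half label 1
h, w = 32, 32
cost0 = np.zeros((h, w)); cost0[:, w // 2:] = 1.0
cost1 = 1.0 - cost0
labels = smooth_and_assign([cost0, cost1])
print(labels[0, 0], labels[0, -1])
```

The smoothing step is what makes the assignment robust to fine image detail: isolated noisy pixels cannot flip a label on their own because their cost is averaged with their neighbourhood before thresholding.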

  20. Energy-Efficient Channel Estimation in MIMO Systems

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection as well as to do optimization such as water filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase which includes transmission of training symbols, storage of those symbols at the receiver, and also channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.

  1. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
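The Pareto efficient set this record estimates can be illustrated with a brute-force dominance filter over candidate solutions' objective vectors (minimizing both criteria). The candidate values below are made up for illustration.

```python
def pareto_front(points):
    """Return the non-dominated (Pareto efficient) points when minimizing
    both objectives. Brute force; fine for small candidate sets."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (objective1, objective2) values for candidate permutations
candidates = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(pareto_front(candidates))   # → [(1, 5), (2, 2), (5, 1)]
```

A weighted-sum scan can only ever return points on the convex hull of this front, which is exactly why the record argues that unsupported efficient solutions are missed without a dedicated heuristic.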

  2. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.

  3. Oracle Efficient Estimation and Forecasting with the Adaptive LASSO and the Adaptive Group LASSO in Vector Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Callot, Laurent

    We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...

  4. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    Science.gov (United States)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired from the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.
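The K-means baseline and the sum-of-intra-cluster-distances criterion used to compare the optimizers can be sketched directly. The stop coordinates are synthetic; the farthest-pair initialisation is an assumption made here so the toy example converges deterministically.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stop coordinates forming two travel zones
stops = np.vstack([rng.normal((0.0, 0.0), 0.1, (20, 2)),
                   rng.normal((5.0, 5.0), 0.1, (20, 2))])

# Initialise the two centres at the farthest-apart pair of stops,
# then run Lloyd's k-means (the K-means baseline in the comparison).
pair = np.linalg.norm(stops[:, None] - stops[None], axis=2)
i, j = np.unravel_index(pair.argmax(), pair.shape)
centres = stops[[i, j]].copy()
for _ in range(20):
    labels = np.linalg.norm(stops[:, None] - centres[None], axis=2).argmin(axis=1)
    centres = np.array([stops[labels == c].mean(axis=0) for c in range(2)])

# Evaluation criterion from the record: sum of intra-cluster distances
intra = sum(np.linalg.norm(stops[labels == c] - centres[c], axis=1).sum()
            for c in range(2))
print(intra)
```

A metaheuristic such as MFO searches over cluster assignments (or centres) to minimise the same `intra` objective, which is how the record compares its convergence and optimality against K-means, GWO and GA.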

  5. ESTIMATING ORIGIN-DESTINATION MATRICES USING AN EFFICIENT MOTH FLAME-BASED SPATIAL CLUSTERING APPROACH

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

    Full Text Available Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired from the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.

  6. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    Science.gov (United States)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.

  7. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    Full Text Available The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge of the sharp increase of the point cloud, which is mainly attributed to its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for the normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. On the premise of calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in the cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbours and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results of several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.
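For reference, the slow per-point baseline that such methods accelerate is local PCA: the normal at a point is the eigenvector of the neighbourhood covariance with the smallest eigenvalue. The sketch below implements that baseline (not the paper's cube-partitioned bi-linear interpolation) on a synthetic planar patch.

```python
import numpy as np

def estimate_normal(cloud, idx, k=12):
    """Baseline normal estimation by PCA over the k nearest neighbours.
    The normal is the smallest-eigenvalue eigenvector of the local
    covariance. The paper's method avoids doing this for every point."""
    d = np.linalg.norm(cloud - cloud[idx], axis=1)
    nbrs = cloud[np.argsort(d)[:k]]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    return eigvecs[:, 0]

# Points on the z = 0 plane: the estimated normal should be ±z
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 100),
                         rng.uniform(-1, 1, 100),
                         np.zeros(100)])
n = estimate_normal(plane, 0)
print(n)
```

Because this costs a neighbour search plus an eigendecomposition per point, computing normals only at sparse interpolation nodes and interpolating between them, as the record describes, is where the speed-up comes from.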

  8. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  9. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  10. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field...... of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis...... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas.  The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling....

  11. Ultrasound elastography: efficient estimation of tissue displacement using an affine transformation model

    Science.gov (United States)

    Hashemi, Hoda Sadat; Boily, Mathieu; Martineau, Paul A.; Rivaz, Hassan

    2017-03-01

    Ultrasound elastography entails imaging mechanical properties of tissue and is therefore of significant clinical importance. In elastography, two frames of radio-frequency (RF) ultrasound data are obtained while the tissue is undergoing deformation, and the time-delay estimate (TDE) between the two frames is used to infer mechanical properties of tissue. TDE is a critical step in elastography, and is challenging due to noise and signal decorrelation. This paper presents a novel and robust technique for TDE using all samples of RF data simultaneously. We assume tissue deformation can be approximated by an affine transformation, and hence call our method ATME (Affine Transformation Model Elastography). The affine transformation model is utilized to obtain initial estimates of axial and lateral displacement fields. The affine transformation only has six degrees of freedom (DOF), and as such, can be efficiently estimated. A nonlinear cost function that incorporates similarity of RF data intensity and prior information of displacement continuity is formulated to fine-tune the initial affine deformation field. Optimization of this function involves searching for TDE of all samples of the RF data. The optimization problem is converted to a sparse linear system of equations, which can be solved in real-time. Results on simulation are presented for validation. We further collect RF data from in-vivo patellar tendon and medial collateral ligament (MCL), and show that ATME can be used to accurately track tissue displacement.
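The six-DOF affine model of the record (a 2x2 linear part plus a 2-vector translation) can be fitted by ordinary least squares from point correspondences. The sketch below uses a synthetic compression-plus-shift deformation; it illustrates the affine initial estimate only, not the nonlinear fine-tuning stage.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine map dst ≈ A @ src + t.
    Six unknowns (A: 2x2, t: 2), matching the 6 DOF in the record."""
    m = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)
    return params[:2].T, params[2]                  # A (2x2), t (2,)

# Synthetic deformation: slight axial compression plus a shift
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
a_true = np.array([[1.0, 0.02], [0.0, 0.95]])
t_true = np.array([0.1, -0.05])
dst = src @ a_true.T + t_true

a, t = fit_affine(src, dst)
print(a, t)
```

With only six parameters, the fit is cheap and well conditioned even from a handful of correspondences, which is why it makes a good initializer for the dense displacement search.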

  12. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential.

    Science.gov (United States)

    Betowski, Don; Bevington, Charles; Allison, Thomas C

    2016-01-19

    Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.

  13. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim

    2016-05-11

    Recent advances in computing technology allow for collecting vast amount of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over the time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of the data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling where more/less resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide up-to-date model of the data stream. Comparing with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
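The core idea of the thesis abstract above (kernel density values maintained at fixed resampling points, updated per arriving sample, interpolated elsewhere) can be sketched in a few lines. This toy uses an equally spaced grid; the adaptive resampling that KDE-Track actually introduces is omitted.

```python
import numpy as np

class GridKDE:
    """Toy online density estimator in the spirit of KDE-Track: keep
    KDE values at fixed resampling points, update them per arriving
    sample, and interpolate linearly at query points."""
    def __init__(self, lo, hi, m=51, bw=0.3):
        self.grid = np.linspace(lo, hi, m)
        self.dens = np.zeros(m)
        self.n, self.bw = 0, bw

    def update(self, x):
        # Gaussian kernel centred at the new sample, evaluated on the grid
        k = (np.exp(-0.5 * ((self.grid - x) / self.bw) ** 2)
             / (self.bw * np.sqrt(2 * np.pi)))
        self.n += 1
        self.dens += (k - self.dens) / self.n   # running mean of kernels

    def pdf(self, x):
        return np.interp(x, self.grid, self.dens)

est = GridKDE(-4, 4)
rng = np.random.default_rng(0)
for x in rng.normal(0.0, 1.0, 5000):
    est.update(x)
print(est.pdf(0.0))   # close to the N(0,1) density at 0
```

Each update costs O(m) in the number of resampling points rather than O(n) in the number of samples seen, which is what makes the estimator viable on streams.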

  14. Improving Global Gross Primary Productivity Estimates by Computing Optimum Light Use Efficiencies Using Flux Tower Data

    Science.gov (United States)

    Madani, Nima; Kimball, John S.; Running, Steven W.

    2017-11-01

    In the light use efficiency (LUE) approach of estimating the gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation assuming that plants absorb and convert solar energy into biomass within a maximum LUE (LUEmax) rate, which is assumed to vary conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified according to model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed effect regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.
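The LUE framework this record builds on is a one-line product model: GPP equals a light use efficiency times absorbed PAR, optionally down-regulated by environmental scalars. The sketch below contrasts a biome-wide constant with a tower-derived optimum; all numbers are illustrative assumptions.

```python
def gpp_lue(apar, lue, tmin_scalar=1.0, vpd_scalar=1.0):
    """LUE model skeleton: GPP = LUE * APAR, with optional temperature
    and VPD down-regulation scalars (MODIS-style structure; values
    here are illustrative, not the paper's calibration)."""
    return lue * apar * tmin_scalar * vpd_scalar

apar = 8.0                          # MJ m-2 d-1, assumed
print(gpp_lue(apar, lue=1.1))       # biome-wide LUEmax constant (g C MJ-1, assumed)
print(gpp_lue(apar, lue=1.6))       # locally calibrated LUEopt (assumed)
```

The gap between the two outputs for the same canopy is the kind of bias the record's spatially varying LUEopt map is designed to remove.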

  15. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to filling this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
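A generic sample-based estimate of the first-order index, Var(E[Y|Xi]) / Var(Y), can be obtained by binning the input and taking the variance of per-bin output means; this is a standard given-data approach, not necessarily the paper's exact estimator. On the test function below the analytic index for x1 is 16/17 ≈ 0.94.

```python
import numpy as np

def first_order_sobol(x, y, bins=20):
    """First-order Sobol' index from input-output samples only:
    bin x by quantiles, estimate Var(E[Y|X]) / Var(Y) from the
    weighted spread of the per-bin means of y."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    bin_means = np.array([y[idx == b].mean() for b in range(bins)])
    bin_counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((bin_means - y.mean()) ** 2, weights=bin_counts)
    return var_cond / y.var()

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 1, 100_000), rng.uniform(0, 1, 100_000)
y = 4.0 * x1 + x2                 # Var(Y) = 17/12; S1(x1) = 16/17
print(first_order_sobol(x1, y), first_order_sobol(x2, y))
```

Note the two indices are computed from the same set of samples with no extra model runs, which mirrors the modularization the record emphasises.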

  16. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss the exchangeable structure, which is commonly used in CRTs, and derive simpler formulas of RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
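
    For a continuous outcome with an exchangeable working correlation, a standard closed form (assumed here as an illustration; the paper derives its own expressions for continuous, binary, and count outcomes) has Var(treatment effect) proportional to 1 / Σ_i m_i/(1+(m_i−1)ρ), so the RE of unequal versus equal cluster sizes can be sketched as:

    ```python
    def relative_efficiency(sizes, rho):
        """RE of unequal vs equal cluster sizes for a continuous outcome
        with an exchangeable working correlation, holding the number of
        clusters and total sample size fixed. Since f(m) = m/(1+(m-1)rho)
        is concave in m, RE = Var_equal / Var_unequal <= 1."""
        f = lambda m: m / (1.0 + (m - 1.0) * rho)
        k = len(sizes)
        mbar = sum(sizes) / k          # equal design: k clusters of size mbar
        return sum(f(m) for m in sizes) / (k * f(mbar))

    print(relative_efficiency([10, 10, 10, 10], 0.05))  # ~1: no loss
    print(relative_efficiency([2, 6, 10, 22], 0.05))    # < 1: efficiency loss
    ```

    The second call illustrates the efficiency loss that motivates the paper's adjusted sample size.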

  17. The Estimation Of The Regions’ Efficiency Of The Russian Federation Including The Intellectual Capital, The Characteristics Of Readiness For Innovation, Level Of Well-Being, And Quality Of Life

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2014-12-01

    Full Text Available On the basis of the authors’ methodology, models of the productive potential of the Russian Federation regions, including estimates of intellectual capital, were constructed. It is shown that characteristics of the level of well-being and quality of life have a significant impact on regional production efficiency. Characteristics of the regions’ readiness to innovate are identified and can be regarded as a factor of production efficiency. It is shown that including different efficiency factors in the production potential model can significantly increase the differentiation of technical efficiency estimates, and that these estimates and their rankings depend on the chosen set of efficiency factors. On the basis of a comparison of real GRP and boundary GRP ratings, locally effective regions (those with relatively high efficiency estimates among regions with similar GRP) and locally ineffective regions are identified. Marginal effects of the efficiency factors on the result of industrial activity in the region are calculated. These estimates can be used constructively in analyzing the prospects for regions’ development, which rests on the possibility of targeted impact on controllable efficiency factors. The article also offers a methodology for estimating the efficiency of public policy on knowledge economy formation: an agent-based model for Russia that studies the “knowledge economy” sector and its relationship with the rest of the macroeconomic system.

  18. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  19. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part II. Uncertainty estimation

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. Uncertainty of the heating value measured by the gas chromatography method on a mole base was estimated as ±0.12% at the 95% level of confidence. Micro-gas chromatography with/without CH4 quantification may be able to reduce uncertainty of measurement. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard litres per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that electrical efficiency for non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.

  20. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-08-12

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.

  1. Efficient collaborative sparse channel estimation in massive MIMO

    KAUST Repository

    Masood, Mudassir; Afify, Laila H.; Al-Naffouri, Tareq Y.

    2015-01-01

    We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
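
    Sparse channels of this kind are typically recovered with greedy sparse solvers. As a minimal, generic illustration, the sketch below runs orthogonal matching pursuit on one antenna's (real-valued, noiseless) measurements; it is not the paper's collaborative algorithm, which additionally shares support information across neighboring antennas, and all dimensions and tap values are assumed for the example.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: recover a k-sparse vector h from
        measurements y = A @ h by greedily growing the support."""
        residual, support = y.copy(), []
        for _ in range(k):
            # pick the column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            # least-squares fit on the current support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        h = np.zeros(A.shape[1])
        h[support] = coef
        return h, sorted(support)

    rng = np.random.default_rng(1)
    n, m, k = 128, 256, 4                  # measurements, taps, sparsity
    A = rng.standard_normal((n, m)) / np.sqrt(n)
    h_true = np.zeros(m)
    h_true[[3, 40, 100, 200]] = [1.0, -0.8, 0.5, 1.2]
    y = A @ h_true
    h_hat, supp = omp(A, y, k)
    print(supp)                            # recovered channel support
    ```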

  2. Econometric estimation of investment utilization, adjustment costs, and technical efficiency in Danish pig farms using hyperbolic distance functions

    DEFF Research Database (Denmark)

    Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund

    2014-01-01

    Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...... of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment...

  3. Towards the Estimation of an Efficient Benchmark Portfolio: The Case of Croatian Emerging Market

    Directory of Open Access Journals (Sweden)

    Dolinar Denis

    2017-04-01

    Full Text Available The fact that cap-weighted indices provide an inefficient risk-return trade-off is well known today. Various research approaches have evolved suggesting alternatives to cap-weighting in an effort to come up with a more efficient market index benchmark. In this paper we aim to use such an approach and focus on the Croatian capital market. We apply the statistical shrinkage method suggested by Ledoit and Wolf (2004) to estimate the covariance matrix and follow the work of Amenc et al. (2011) to obtain estimates of expected returns that rely on a risk-return trade-off. Empirical findings for the proposed portfolio optimization include out-of-sample and robustness testing. This way we compare the performance of the cap-weighted benchmark to the alternative and ensure that consistency is achieved in different volatility environments. Research findings do not seem to support relevant research results for the developed markets but rather complement earlier research (Zoričić et al., 2014).
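
    A minimal sketch of the covariance-estimation step described above, using a fixed shrinkage intensity toward a scaled identity target (the Ledoit-Wolf method chooses this intensity optimally) and, for illustration, a global minimum-variance weighting rather than the paper's full benchmark construction. All data below are simulated.

    ```python
    import numpy as np

    def shrink_covariance(returns, delta):
        """Shrink the sample covariance toward a scaled identity target:
        S* = (1 - delta) * S + delta * mu * I, where mu is the average
        variance. delta is user-supplied here, not the optimal intensity."""
        S = np.cov(returns, rowvar=False)
        mu = np.trace(S) / S.shape[0]
        return (1 - delta) * S + delta * mu * np.eye(S.shape[0])

    def min_variance_weights(cov):
        """Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    rng = np.random.default_rng(2)
    returns = rng.standard_normal((60, 10)) * 0.02  # 60 periods, 10 assets
    w = min_variance_weights(shrink_covariance(returns, delta=0.3))
    print(w.sum())                                   # fully invested: sums to 1
    ```

    Shrinkage keeps the estimated covariance well conditioned when the number of assets is large relative to the sample length, which is the practical motivation for using it on a small emerging market.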

  4. Investigating time-efficiency of forward masking paradigms for estimating basilar membrane input-output characteristics

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2017-01-01

    -output (I/O) function have been proposed. However, such measures are very time consuming. The present study investigated possible modifications of the temporal masking curve (TMC) paradigm to improve time and measurement efficiency. In experiment 1, estimates of knee point (KP) and compression ratio (CR......”, was tested. In contrast to the standard TMC paradigm, the masker level was kept fixed and the “gap threshold” was obtained, such that the masker just masks a low-level (12 dB sensation level) signal. It is argued that this modification allows for better control of the tested stimulus level range, which......

  5. Metamodel for Efficient Estimation of Capacity-Fade Uncertainty in Li-Ion Batteries for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jaewook Lee

    2015-06-01

    Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
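
    The metamodel idea can be sketched end to end: replace an expensive model with a polynomial surrogate fitted to a few runs, then run Metropolis-Hastings against the cheap surrogate likelihood. The "expensive" model below is a stand-in quadratic, not the P2D model, and all parameter values, noise levels and priors are illustrative assumptions.

    ```python
    import numpy as np

    def expensive_model(theta):
        # stand-in for a costly PDE solve: capacity fade vs. a degradation
        # parameter theta (purely illustrative functional form)
        return 1.0 - 0.5 * theta + 0.1 * theta**2

    # 1) build a cheap polynomial metamodel from a few expensive runs
    design = np.linspace(0.0, 1.0, 9)
    coef = np.polyfit(design, [expensive_model(t) for t in design], deg=2)
    surrogate = np.poly1d(coef)

    # 2) Metropolis-Hastings on theta using the surrogate likelihood
    rng = np.random.default_rng(3)
    data = expensive_model(0.4) + rng.normal(0, 0.01, size=20)  # noisy obs

    def log_post(theta):
        if not 0.0 <= theta <= 1.0:        # uniform prior on [0, 1]
            return -np.inf
        return -0.5 * np.sum((data - surrogate(theta))**2) / 0.01**2

    theta, chain = 0.5, []
    lp = log_post(theta)
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.05)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print(np.mean(chain[1000:]))           # posterior mean, near the true 0.4
    ```

    Every MCMC iteration calls only the polynomial, which is what turns hours of PDE solves into seconds.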

  6. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.

  7. Efficiency and abatement costs of energy-related CO2 emissions in China: A slacks-based efficiency measure

    International Nuclear Information System (INIS)

    Choi, Yongrok; Zhang, Ning; Zhou, P.

    2012-01-01

    Highlights: ► We employ a slacks-based DEA model to estimate the energy efficiency and shadow prices of CO2 emissions in China. ► The empirical study shows that China was not performing CO2-efficiently. ► The average of the estimated shadow prices of CO2 emissions is about $7.2. -- Abstract: This paper uses a nonparametric efficiency analysis technique to estimate the energy efficiency, potential emission reductions and marginal abatement costs of energy-related CO2 emissions in China. We employ a non-radial slacks-based data envelopment analysis (DEA) model for estimating the potential reductions and efficiency of CO2 emissions for China. The dual model of the slacks-based DEA model is then used to estimate the marginal abatement costs of CO2 emissions. An empirical study based on China’s panel data (2001–2010) is carried out and some policy implications are also discussed.

  8. An estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS)

    Science.gov (United States)

    Chatterjee, Sharmista; Seagrave, Richard C.

    1993-01-01

    The objective of this paper is to present an estimate of the second-law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of work. In a previous study, Brandhorst estimated the power consumption for partially closed and completely closed regenerable life support systems as 3.5 kW/individual and 10-12 kW/individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first-law efficiency of a system is usually greater than 50 percent, while from the literature the second-law efficiency is usually about 10 percent. The estimation of the second-law efficiency of the system indicates the percentage of energy degraded as irreversibilities within the process, and this estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the

  9. Estimation of the potential efficiency of a multijunction solar cell at a limit balance of photogenerated currents

    Energy Technology Data Exchange (ETDEWEB)

    Mintairov, M. A., E-mail: mamint@mail.ioffe.ru; Evstropov, V. V.; Mintairov, S. A.; Shvarts, M. Z.; Timoshina, N. Kh.; Kalyuzhnyy, N. A. [Russian Academy of Sciences, Ioffe Physical-Technical Institute (Russian Federation)

    2015-05-15

    A method is proposed for estimating the potential efficiency which can be achieved in an initially unbalanced multijunction solar cell by the mutual convergence of photogenerated currents: to extract this current from a relatively narrow-gap cell and to add it to a relatively wide-gap cell. It is already known that the properties facilitating relative convergence are inherent to such objects as bound excitons, quantum dots, donor-acceptor pairs, and others located in relatively wide-gap cells. In fact, the proposed method is reduced to the problem of obtaining such a required light current-voltage (I–V) characteristic which corresponds to the equality of all photogenerated short-circuit currents. Two methods for obtaining the required light I–V characteristic are used. The first one is selection of the spectral composition of the radiation incident on the multijunction solar cell from an illuminator. The second method is a double shift of the dark I–V characteristic: a current shift J_g (common set photogenerated current) and a voltage shift (−J_g·R_s), where R_s is the series resistance. For the light and dark I–V characteristics, a general analytical expression is derived, which considers the effect of so-called luminescence coupling in multijunction solar cells. The experimental I–V characteristics are compared with the calculated ones for a three-junction InGaP/GaAs/Ge solar cell with R_s = 0.019 Ω cm² and a maximum factual efficiency of 36.9%. Its maximum potential efficiency is estimated as 41.2%.
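
    The double-shift construction can be sketched for a single junction: generate a dark I–V curve, shift it by the photogenerated current J_g in current and by −J_g·R_s in voltage, and read the maximum power point off the resulting light curve. R_s is taken from the abstract; the diode parameters J0, VT and the value of J_g are illustrative assumptions, and luminescence coupling is ignored.

    ```python
    import numpy as np

    # Dark I-V of a single-diode cell: J_dark(V) = J0 * (exp(V/VT) - 1)
    J0, VT = 1e-12, 0.0257      # A/cm^2 and thermal voltage at ~300 K (assumed)
    Rs = 0.019                  # ohm*cm^2, series resistance (from the paper)
    Jg = 0.014                  # A/cm^2, assumed photogenerated current

    V_dark = np.linspace(0.0, 0.75, 2000)
    J_dark = J0 * (np.exp(V_dark / VT) - 1.0)

    # Double shift of the dark curve: current shift +Jg, voltage shift -Jg*Rs
    V_light = V_dark - Jg * Rs
    J_light = Jg - J_dark

    # Maximum power point of the resulting light I-V curve
    P = V_light * J_light
    i = int(np.argmax(P))
    print(P[i])                 # W/cm^2 at the maximum power point
    ```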

  10. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency-gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
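
    The bootstrap idea can be sketched in a few lines: resample the per-history scores of the two estimators and form a confidence interval for their variance ratio. This is a simplified percentile-interval stand-in for the paper's shortest-interval bootstrap on full Monte Carlo runs, and the synthetic "per-history scores" below are assumptions for illustration.

    ```python
    import numpy as np

    def bootstrap_ci(conv, corr, n_boot=2000, level=0.95, seed=0):
        """Percentile bootstrap CI for an efficiency-gain surrogate, taken
        here as the ratio of per-history score variances (conventional vs
        correlated sampling)."""
        rng = np.random.default_rng(seed)
        gains = np.empty(n_boot)
        for b in range(n_boot):
            v1 = rng.choice(conv, size=conv.size, replace=True).var()
            v2 = rng.choice(corr, size=corr.size, replace=True).var()
            gains[b] = v1 / v2
        lo, hi = np.quantile(gains, [(1 - level) / 2, (1 + level) / 2])
        return lo, hi

    rng = np.random.default_rng(4)
    conv = rng.normal(0, 2.0, 500)   # per-history scores, conventional MC
    corr = rng.normal(0, 1.0, 500)   # correlated sampling: 4x lower variance
    print(bootstrap_ci(conv, corr))  # interval around the true ratio of 4
    ```

    With heavy-tailed scores (the few high-weight photons described above), this interval widens sharply, which is exactly the effect the study quantifies.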

  11. Estimating and understanding the efficiency of nanoparticles in enhancing the conductivity of carbon nanotube/polymer composites

    KAUST Repository

    Mora Cordova, Angel

    2018-05-22

    Carbon nanotubes (CNTs) have been widely used to improve the electrical conductivity of polymers. However, not all CNTs actively participate in the conduction of electricity since they have to be close to each other to form a conductive network. The amount of active CNTs is rarely discussed as it is not captured by percolation theory. However, this amount is a very important information that could be used in a definition of loading efficiency for CNTs (and, in general, for any nanofiller). Thus, we develop a computational tool to quantify the amount of CNTs that actively participates in the conductive network. We then use this quantity to propose a definition of loading efficiency. We compare our results with an expression presented in the literature for the fraction of percolated CNTs (although not presented as a definition of efficiency). We found that this expression underestimates the fraction of percolated CNTs. We thus propose an improved estimation. We also study how efficiency changes with CNT loading and the CNT aspect ratio. We use this concept to study the size of the representative volume element (RVE) for polymers loaded with CNTs, which has received little attention in the past. Here, we find the size of RVE based on both loading efficiency and electrical conductivity such that the scales of “morphological” and “functional” RVEs can be compared. Additionally, we study the relations between particle and network properties (such as efficiency, CNT conductivity and junction resistance) and the conductivity of CNT/polymer composites. We present a series of recommendations to improve the conductivity of a composite based on our simulation results.

  12. Estimating and understanding the efficiency of nanoparticles in enhancing the conductivity of carbon nanotube/polymer composites

    KAUST Repository

    Mora Cordova, Angel; Han, Fei; Lubineau, Gilles

    2018-01-01

    Carbon nanotubes (CNTs) have been widely used to improve the electrical conductivity of polymers. However, not all CNTs actively participate in the conduction of electricity since they have to be close to each other to form a conductive network. The amount of active CNTs is rarely discussed as it is not captured by percolation theory. However, this amount is a very important information that could be used in a definition of loading efficiency for CNTs (and, in general, for any nanofiller). Thus, we develop a computational tool to quantify the amount of CNTs that actively participates in the conductive network. We then use this quantity to propose a definition of loading efficiency. We compare our results with an expression presented in the literature for the fraction of percolated CNTs (although not presented as a definition of efficiency). We found that this expression underestimates the fraction of percolated CNTs. We thus propose an improved estimation. We also study how efficiency changes with CNT loading and the CNT aspect ratio. We use this concept to study the size of the representative volume element (RVE) for polymers loaded with CNTs, which has received little attention in the past. Here, we find the size of RVE based on both loading efficiency and electrical conductivity such that the scales of “morphological” and “functional” RVEs can be compared. Additionally, we study the relations between particle and network properties (such as efficiency, CNT conductivity and junction resistance) and the conductivity of CNT/polymer composites. We present a series of recommendations to improve the conductivity of a composite based on our simulation results.

  13. Drainage estimation to aquifer and water use irrigation efficiency in semi-arid zone for a long period of time

    Science.gov (United States)

    Jiménez-Martínez, J.; Molinero-Huguet, J.; Candela, L.

    2009-04-01

    Water requirements for different crop types according to soil type and climate conditions play an important role not only in agricultural production efficiency but also in water resources management and the control of pollutants in drainage water. The key issue in attaining these objectives is irrigation efficiency. Application of computer codes for irrigation simulation constitutes a fast and inexpensive approach to studying optimal agricultural management practices. To simulate the daily water balance in the soil, vadose zone and aquifer, the VisualBALAN V. 2.0 code was applied to an experimental area under irrigation characterized by its aridity. The test was carried out in three experimental plots for annual row crops (lettuce and melon), perennial vegetables (artichoke), and fruit trees (citrus) under common open-air agricultural practices from October 1999 to September 2008. Drip irrigation was applied to the crops due to the scarcity of water resources and the need for water conservation. The water level change was monitored in the top unconfined aquifer for each experimental plot. Results of the water balance modelling show a good agreement between observed and estimated water level values. For the study period, mean drainage values were 343 mm, 261 mm and 205 mm for lettuce and melon, artichoke and citrus, respectively. Assessment of water use efficiency was based on the IE indicator proposed by the ASCE Task Committee. For the modelled period, water use efficiency was estimated as 73, 71 and 78% of the applied dose (irrigation + precipitation) for lettuce and melon, artichoke and citrus, respectively.
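
    The daily balance can be sketched as a single-bucket model: water above field capacity drains to the aquifer, and efficiency is the fraction of the applied dose (irrigation + precipitation) that is beneficially used. This is a schematic stand-in for the VisualBALAN code, with made-up daily values; the IE definition here is a simplified reading of the ASCE-style indicator.

    ```python
    def daily_balance(precip, irrigation, et, capacity, storage=0.0):
        """Single-bucket daily soil water balance (all depths in mm).
        Returns total drainage below the root zone and the efficiency
        IE = water beneficially used (ET) / water applied."""
        drainage = used = applied = 0.0
        for p, irr, e in zip(precip, irrigation, et):
            storage += p + irr
            applied += p + irr
            take = min(e, storage)       # ET limited by available water
            storage -= take
            used += take
            if storage > capacity:       # excess percolates to the aquifer
                drainage += storage - capacity
                storage = capacity
        return drainage, used / applied

    precip     = [0.0, 5.0, 0.0, 12.0, 0.0]
    irrigation = [4.0, 0.0, 4.0, 0.0, 4.0]
    et         = [3.5, 3.0, 3.5, 3.0, 3.5]
    print(daily_balance(precip, irrigation, et, capacity=6.0))
    ```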

  14. An estimation of the energy and exergy efficiencies for the energy resources consumption in the transportation sector in Malaysia

    International Nuclear Information System (INIS)

    Saidur, R.; Sattar, M.A.; Masjuki, H.H.; Ahmed, S.; Hashim, U.

    2007-01-01

    The purpose of this work is to apply useful energy and exergy analysis models to different modes of transport in Malaysia and to compare the results with those of a few other countries. In this paper, energy and exergy efficiencies of the various sub-sectors are presented by considering the energy and exergy flows from 1995 to 2003. Flow diagrams used to find the overall energy and exergy efficiencies of the Malaysian transportation sector are also presented. The estimated overall energy efficiency ranges from 22.74% (1999) to 22.98% (1998) with a mean of 22.82±0.06%, and the overall exergy efficiency ranges from 22.44% (2000) to 22.82% (1998) with a mean of 22.55±0.12%. The results are compared with respect to present energy and exergy efficiencies in each sub-sector. The transportation sector used about 40% of the total energy consumed in 2002; therefore, it is important to identify the energy and exergy flows and the pertinent losses. The road sub-sector appears to be the most efficient one compared to the air and marine sub-sectors. It was also found that the energy and exergy efficiencies of the Malaysian transportation sector are lower than those of Turkey but higher than those of Norway.

  15. Estimation of gas turbine blades cooling efficiency

    NARCIS (Netherlands)

    Moskalenko, A.B.; Kozhevnikov, A.

    2016-01-01

    This paper outlines the results of evaluating the cooling efficiency of the most thermally stressed gas turbine elements, the first-stage power turbine blades. The calculations were implemented using a numerical simulation based on the Finite Element Method. The volume average temperature of the blade

  16. The Use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm

    International Nuclear Information System (INIS)

    Sisworo, Elsje L; Sisworo, Widjang H; Havid-Rasjid; Haryanto; Syamsul-Rizal; Poeloengan, Z; Kusnu-Martoyo

    2004-01-01

    Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to major nutrients such as N and P. This study was conducted to locate the most active root zone of oil palm and to apply urea fertilizer at such soils to obtain high N efficiency. A carrier-free KH2 32PO4 solution was used to determine the active root zone of oil palm by applying 32P around the plant in twenty holes. After the most active root zone had been determined, urea was applied at this zone in one, two and three splits, respectively. To estimate the N-fertilizer efficiency of urea, 15N-labelled ammonium sulphate was used, added at the same amount of 16 g 15N plant-1. This study showed that the most active root zone was at a 1.5 m distance from the plant stem and at a 5 cm soil depth. For urea, the highest N efficiency was obtained by applying it in two splits. The use of 32P made it possible to distinguish several root zones: 1.5 m - 2.5 m from the plant stem at 5 cm and 15 cm soil depths. Urea placed at the most active root zone, at a 1.5 m distance from the plant stem and at a 5 cm depth, in one, two, and three splits, respectively, showed different N efficiencies. The highest N efficiency of urea was obtained when applying it in two splits at the most active root zone. (author)

  17. Spectral and Energy Efficient Low-Overhead Uplink and Downlink Channel Estimation for 5G Massive MIMO Systems

    Directory of Open Access Journals (Sweden)

    Imran Khan

    2018-01-01

    Full Text Available Uplink and downlink channel estimation in massive Multiple-Input Multiple-Output (MIMO) systems is an intricate issue because of the increasing channel matrix dimensions. The channel feedback overhead of traditional codebook schemes is very large, which consumes more bandwidth and decreases overall system efficiency. The purpose of this paper is to decrease the channel estimation overhead by taking advantage of sparse attributes and also to optimize the Energy Efficiency (EE) of the system. To cope with this issue, we propose a novel approach using Compressed Sensing (CS), Block Iterative-Support-Detection (Block-ISD), Angle-of-Departure (AoD), and Structured Compressive Sampling Matching Pursuit (S-CoSaMP) algorithms to reduce the channel estimation overhead, and compare them with the traditional algorithms. The CS uses the temporal correlation of time-varying channels to produce a Differential Channel Impulse Response (DCIR) between two CIRs that are adjacent in time slots; the DCIR has greater sparsity than the conventional CIRs and can be easily compressed. Block-ISD uses the spatial correlation of the channels to obtain block sparsity, which results in lower pilot overhead. AoD quantizes the channels whose path-AoD variation is slower than that of the path gains, and such information is utilized to reduce the overhead. S-CoSaMP deploys structured sparsity to obtain reliable Channel State Information (CSI). MATLAB simulation results show that the proposed CS-based algorithms reduce the feedback and pilot overhead by a significant percentage and also improve the system capacity compared with the traditional algorithms. Moreover, the EE level increases with increasing Base Station (BS) density and UE density and with lower hardware impairment levels.

  18. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    Energy Technology Data Exchange (ETDEWEB)

    Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
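    BUENAS itself is a detailed bottom-up model, but the consumer NPV criterion behind the cost-effective potential can be illustrated with a minimal sketch: discounted bill savings over the appliance lifetime minus the incremental purchase cost of the more efficient unit. All numbers below (cost, savings, tariff, discount rate, lifetime) are made-up assumptions.

```python
def npv_of_standard(incr_cost, annual_kwh_saved, price_per_kwh,
                    lifetime_years, discount_rate):
    """Consumer NPV of an efficiency standard: discounted bill savings
    minus the incremental purchase cost (all inputs illustrative)."""
    savings = sum(annual_kwh_saved * price_per_kwh / (1 + discount_rate) ** t
                  for t in range(1, lifetime_years + 1))
    return savings - incr_cost

# Hypothetical appliance: $40 extra cost, 120 kWh/yr saved at $0.12/kWh,
# 12-year lifetime, 5% discount rate.
npv = npv_of_standard(40.0, 120.0, 0.12, 12, 0.05)
print(round(npv, 2))
```

    A positive NPV means the efficiency bar can be raised to that level without penalizing consumers financially, which is the question the CEP scenario poses.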

  19. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    Science.gov (United States)

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
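    As a rough sketch of the idea, assume the simple log-transformation case: transformed usual intakes are N(mu, sigma²), and the mean usual intake on the original scale is E[exp(X)]. Gauss-Hermite quadrature computes this back-transformation numerically, and here it can be checked against the closed-form log-normal mean; the paper's GQ handles the more general Box-Cox case, and the parameter values below are arbitrary.

```python
import numpy as np

# Assumed simple case: transformed usual intakes X ~ N(mu, sigma^2) after a
# log-transformation; mean usual intake on the original scale is E[exp(X)].
mu, sigma = 1.2, 0.4

# Gauss-Hermite: E[f(X)] ~ (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i)
nodes, weights = np.polynomial.hermite.hermgauss(20)
mean_intake = np.sum(weights * np.exp(mu + np.sqrt(2) * sigma * nodes)) / np.sqrt(np.pi)

exact = np.exp(mu + sigma**2 / 2)  # closed-form log-normal mean, for checking
print(mean_intake, exact)
```

    Twenty quadrature nodes already reproduce the closed-form value to machine-level accuracy, which is why GQ is an efficient back-transformation when no closed form exists.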

  20. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    Science.gov (United States)

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.

  1. Non-invasive estimation of myocardial efficiency using positron emission tomography and carbon-11 acetate - comparison between the normal and failing human heart

    International Nuclear Information System (INIS)

    Bengel, F.M.; Nekolla, S.; Schwaiger, M.; Ungerer, M.

    2000-01-01

    We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with ¹¹C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for ¹¹C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m² (vs 64%±7% and 55±8 ml/m² in normals). The SWI and the WMI (mmHg x ml/m²; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume. (orig.)
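    The two index formulas quoted in the abstract are straightforward to compute; the input values below are illustrative placeholders, not patient data from the study.

```python
def work_metabolic_index(systolic_bp, stroke_volume, bsa, heart_rate, k_mono):
    """SWI = systolic BP x stroke volume / body surface area;
    WMI = SWI x heart rate / k(mono), per the formulas in the abstract."""
    swi = systolic_bp * stroke_volume / bsa
    return swi * heart_rate / k_mono

# Illustrative values only: BP 120 mmHg, SV 70 ml, BSA 1.9 m^2,
# HR 70 bpm, mono-exponential washout constant 0.06.
wmi = work_metabolic_index(120.0, 70.0, 1.9, 70, 0.06)
print(round(wmi, 1))
```

    Because k(mono) tracks oxidative metabolism, a slower washout (smaller k) at the same external work yields a larger WMI, i.e. higher estimated efficiency.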

  2. Influence of Plot Size on Efficiency of Biomass Estimates in Inventories of Dry Tropical Forests Assisted by Photogrammetric Data from an Unmanned Aircraft System

    Directory of Open Access Journals (Sweden)

    Daud Jones Kachamba

    2017-06-01

    Full Text Available Applications of unmanned aircraft systems (UASs) to assist in forest inventories have provided promising results in biomass estimation for different forest types. Recent studies demonstrating use of different types of remotely sensed data to assist in biomass estimation have shown that accuracy and precision of estimates are influenced by the size of field sample plots used to obtain reference values for biomass. The objective of this case study was to assess the influence of sample plot size on efficiency of UAS-assisted biomass estimates in the dry tropical miombo woodlands of Malawi. The results of a design-based field sample inventory assisted by three-dimensional point clouds obtained from aerial imagery acquired with a UAS showed that the root mean square errors as well as the standard error estimates of mean biomass decreased as sample plot sizes increased. Furthermore, relative efficiency values over different sample plot sizes were above 1.0 in a design-based and model-assisted inferential framework, indicating that UAS-assisted inventories were more efficient than purely field-based inventories. The results on relative costs for UAS-assisted and pure field-based sample plot inventories revealed that there is a trade-off between inventory costs and required precision. For example, in our study if a standard error of less than approximately 3 Mg ha−1 was targeted, then a UAS-assisted forest inventory should be applied to ensure more cost effective and precise estimates. Future studies should therefore focus on finding optimum plot sizes for particular applications, for example in projects under the Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation, sustainable management of forest and enhancement of carbon stocks (REDD+) mechanism with different geographical scales.
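    The relative-efficiency criterion used in the abstract is simply a ratio of estimator variances, with values above 1.0 favoring the UAS-assisted design; a minimal sketch with hypothetical variances:

```python
def relative_efficiency(var_field_only, var_model_assisted):
    """RE = variance of the purely field-based estimator divided by the
    variance of the model-assisted (UAS) estimator; RE > 1 means the
    UAS-assisted design is more efficient at equal sample size."""
    return var_field_only / var_model_assisted

# Hypothetical variances of the mean-biomass estimator, in (Mg/ha)^2:
re = relative_efficiency(9.0, 4.0)
print(re)
```

    An RE of 2.25 would mean the field-only design needs roughly 2.25 times as many plots to match the precision of the UAS-assisted one, which is where the cost trade-off enters.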

  3. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    Science.gov (United States)

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images, hence to date, require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  4. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).
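    A minimal sketch of the conventional ratio estimator in double (two-phase) sampling, the baseline these proposals are compared against: the cheap auxiliary variable x is measured on a large first-phase sample, and (x, y) pairs on a small second-phase subsample. All data below are made up.

```python
def double_sampling_ratio_estimate(x_first_phase, pairs):
    """Conventional ratio estimator under two-phase sampling:
    R-hat = sum(y)/sum(x) from the second-phase subsample, then
    Y-bar estimated as R-hat * x-bar from the first phase."""
    xs, ys = zip(*pairs)
    r_hat = sum(ys) / sum(xs)
    xbar1 = sum(x_first_phase) / len(x_first_phase)
    return r_hat * xbar1

x1 = [4, 5, 6, 5, 4, 6, 5, 5]        # first-phase sample: x only
sub = [(4, 8), (6, 13), (5, 10)]     # second-phase subsample: (x, y) pairs
est = double_sampling_ratio_estimate(x1, sub)
print(est)
```

    The estimator gains precision over the plain subsample mean of y whenever y is roughly proportional to x, since the large first-phase sample pins down x-bar cheaply.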

  5. Energy efficiency in Swedish industry

    International Nuclear Information System (INIS)

    Zhang, Shanshan; Lundgren, Tommy; Zhou, Wenchao

    2016-01-01

    This paper assesses energy efficiency in Swedish industry. Using unique firm-level panel data covering the years 2001–2008, efficiency estimates are obtained for firms in 14 industrial sectors by using data envelopment analysis (DEA). The analysis accounts for multi-output technologies where undesirable outputs are produced alongside the desirable output. The results show that there was potential to improve energy efficiency in all the sectors and that relatively large energy inefficiencies existed in small energy-use industries in the sample period. Also, we assess how the EU ETS, the carbon dioxide (CO₂) tax and the energy tax affect energy efficiency by conducting a second-stage regression analysis. To obtain consistent estimates for the regression model, we apply a modified, input-oriented version of the double bootstrap procedure of Simar and Wilson (2007). The results of the regression analysis reveal that the EU ETS and the CO₂ tax did not have significant influences on energy efficiency in the sample period. However, the energy tax had a positive relation with energy efficiency. - Highlights: • We use DEA to estimate firm-level energy efficiency in Swedish industry. • We examine impacts of climate and energy policies on energy efficiency. • The analyzed policies are the Swedish carbon and energy taxes and the EU ETS. • The carbon tax and EU ETS did not have significant influences on energy efficiency. • The energy tax had a positive relation with energy efficiency.
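    The paper's analysis uses multi-input, multi-output DEA with a bootstrap second stage; in the degenerate single-input, single-output case, input-oriented constant-returns (CCR) efficiency reduces to each unit's productivity relative to the best-practice unit, which makes the idea easy to sketch with toy data:

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR (constant-returns) DEA efficiency for the special
    case of one input and one output: each unit's productivity y/x is
    scored relative to the best unit, which defines the frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical firm-level data: energy use (input) and production (output).
energy_use = [2.0, 4.0, 6.0]
production = [2.0, 4.0, 3.0]
scores = ccr_efficiency(energy_use, production)
print(scores)
```

    A score of 0.5 means the firm could in principle produce its current output with half its current energy input if it operated on the efficient frontier.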

  6. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. 
Based on partial correlation, we find that the most significant
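    The step from an estimated precision matrix to partial correlations is standard; the sketch below inverts a sample covariance for a low-dimensional toy example rather than running CLIME, which the paper uses for the high-dimensional case:

```python
import numpy as np

def partial_correlation(precision):
    """Partial correlations from a precision matrix Omega:
    rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj), diagonal set to 1."""
    d = np.sqrt(np.diag(precision))
    p = -precision / np.outer(d, d)
    np.fill_diagonal(p, 1.0)
    return p

# Toy example with 4 "nodes": invert a sample covariance directly
# (feasible here because n >> p; CLIME is needed when p is large).
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 4))
omega = np.linalg.inv(np.cov(data, rowvar=False))
P = partial_correlation(omega)
print(P.shape)
```

    Unlike marginal (full) correlation, each off-diagonal entry of P conditions on all remaining nodes, which is what removes the indirect between-module connections the abstract describes.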

  7. An Efficient Estimation Method for Reducing the Axial Intensity Drop in Circular Cone-Beam CT

    Directory of Open Access Journals (Sweden)

    Lei Zhu

    2008-01-01

    Full Text Available Reconstruction algorithms for circular cone-beam (CB) scans have been extensively studied in the literature. Since insufficient data are measured, an exact reconstruction is impossible for such a geometry. If the reconstruction algorithm assumes zeros for the missing data, as the standard FDK algorithm does, a major type of resulting CB artifact is the intensity drop along the axial direction. Many algorithms have been proposed to improve image quality in the face of this missing-data problem; however, the development of an effective and computationally efficient algorithm remains a major challenge. In this work, we propose a novel method for estimating the unmeasured data and reducing the intensity drop artifacts. Each CB projection is analyzed in the Radon space via Grangeat's first derivative. Assuming the CB projection is taken from a parallel beam geometry, we extract those data that reside in the unmeasured region of the Radon space. These data are then used, as in a parallel beam geometry, to calculate a correction term, which is added together with Hu’s correction term to the FDK result to form a final reconstruction. Further approximations are then made in the calculation of the additional term, and the final formula is implemented very efficiently. The algorithm performance is evaluated using computer simulations on analytical phantoms. Comparison with results from other existing algorithms shows that the proposed algorithm achieves superior reduction of axial intensity drop artifacts with high computational efficiency.

  8. Efficiency in the Worst Production Situation Using Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Md. Kamrul Hossain

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) measures relative efficiency among decision making units (DMUs) without considering noise in the data. The least efficient DMU is the one in the worst situation. In this paper, we measure the efficiency of an individual DMU when it loses the maximum output, while the efficiency of the other DMUs is measured in the observed situation. This efficiency is the minimum efficiency of a DMU. We propose a stochastic data envelopment analysis (SDEA) method, a DEA method that accounts for noise in the data. Using the bounded Pareto distribution, we estimate the DEA efficiency from an efficiency interval; a small value of the shape parameter estimates the efficiency more accurately. Rank correlations were estimated between observed efficiency and minimum efficiency, as well as between observed and estimated efficiency. The correlations indicate the effectiveness of this SDEA model.

  9. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample’s mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample’s coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
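    The "usual" maximum likelihood estimator the paper improves on can be sketched as follows: it combines the log-scale sample mean and variance into exp(mu-hat + s²/2). The distribution parameters and sample size below are arbitrary choices for illustration.

```python
import numpy as np

def lognormal_mean_mle(x):
    """Usual ML estimator of a log-normal mean: exp(mu_hat + s2_hat / 2),
    where mu_hat and s2_hat are the log-scale sample mean and (biased)
    sample variance."""
    logs = np.log(x)
    mu_hat = logs.mean()
    s2_hat = logs.var()          # ddof=0: the MLE of sigma^2
    return np.exp(mu_hat + s2_hat / 2)

rng = np.random.default_rng(7)
sample = rng.lognormal(mean=1.0, sigma=0.8, size=5000)
estimate = lognormal_mean_mle(sample)
true_mean = np.exp(1.0 + 0.8**2 / 2)
print(estimate, true_mean)
```

    The proposed estimator in the paper refines this by also exploiting the sample coefficient of variation, which matters most exactly where the abstract reports the largest gains: small samples and large sigma.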

  10. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    Science.gov (United States)

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.

  11. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
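    SCoPE's delayed rejection, pre-fetching and adaptive covariance are refinements of the basic random-walk Metropolis sampler, which can be sketched in a few lines. The target here is a 1-D toy posterior (a Gaussian centered at 2), not a cosmological likelihood.

```python
import numpy as np

def metropolis(logpost, x0, steps, scale, rng):
    """Plain serial random-walk Metropolis: the baseline whose acceptance
    rate and mixing SCoPE's delayed rejection and pre-fetching improve."""
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(steps):
        prop = x + scale * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(5)
# Toy log-posterior: N(2, 1) up to a constant.
chain = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, 0.0, 20000, 1.0, rng)
posterior_mean = chain[5000:].mean()   # discard burn-in
print(posterior_mean)
```

    Each proposal here must wait for the previous accept/reject decision; pre-fetching speculatively evaluates the likelihood at both possible next states on parallel CPUs, which is what breaks this serial bottleneck.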

  12. Efficient estimates of cochlear hearing loss parameters in individual listeners

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten

    2013-01-01

    It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence, having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001) … estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed …

  13. Review of Evaluation, Measurement and Verification Approaches Used to Estimate the Load Impacts and Effectiveness of Energy Efficiency Programs

    Energy Technology Data Exchange (ETDEWEB)

    Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.

    2010-04-14

    Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and 12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include ~$18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy

  14. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    Science.gov (United States)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
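    The ADMM pattern of alternating a likelihood step, a prior step, and a dual update can be illustrated with scaled-form ADMM on a lasso-type MAP problem (quadratic likelihood, L1 prior). This is a stand-in sketch, not the paper's consensus formulation with Kalman-smoother averaging; all data are synthetic.

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax - y||^2 + lam*||x||_1:
    x-update handles the (quadratic) likelihood, z-update handles the
    (L1) prior via soft-thresholding, u is the scaled dual variable."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for all iterations
    Aty = A.T @ y
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))                                  # likelihood step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # prior step
        u = u + x - z                                                  # dual update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_lasso(A, y, lam=0.1)
print(np.round(x_hat[:3], 1))
```

    The modularity the abstract emphasizes is visible here: swapping the prior only changes the z-update (the proximal step), leaving the likelihood step and dual update untouched.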

  15. Thermodynamic framework for estimating the efficiencies of alkaline batteries

    Energy Technology Data Exchange (ETDEWEB)

    Pound, B G; Singh, R P; MacDonald, D D

    1986-06-01

    A thermodynamic framework has been developed to evaluate the efficiencies of alkaline battery systems for electrolyte (MOH) concentrations from 1 to 8 mol kg⁻¹ and over the temperature range −10 to 120°C. An analysis of the thermodynamic properties of concentrated LiOH, NaOH, and KOH solutions was carried out to provide data for the activity of water, the activity coefficient of the hydroxide ion, and the pH of the electrolyte. Potential-pH relations were then derived for various equilibrium phenomena for the metals Li, Al, Fe, Ni, and Zn in aqueous solutions and, using the data for the alkali metal hydroxides, equilibrium potentials were computed as a function of composition and temperature. These data were then used to calculate reversible cell voltages for a number of battery systems, assuming a knowledge of the cell reactions. Finally, some of the calculated cell voltages were compared with observed cell voltages to compute voltage efficiencies for various alkaline batteries. The voltage efficiencies of H₂/Ni, Fe/Ni, and Zn/Ni test cells were found to be between 90 and 100%, implying that, at least at open circuit, there is little, if any, contribution from parasitic redox couples to the cell potentials for these systems. The efficiency of an Fe/air test cell was relatively low (72%). This is probably due to the less-than-theoretical voltage of the air electrode.
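    The final comparison step, observed versus calculated reversible cell voltage, is a simple ratio; the voltages below are illustrative placeholders, not values computed in the paper.

```python
def voltage_efficiency(observed_v, reversible_v):
    """Voltage efficiency in percent: the observed open-circuit cell voltage
    relative to the thermodynamically reversible (calculated) voltage."""
    return 100.0 * observed_v / reversible_v

# Illustrative numbers only: 1.25 V observed vs 1.37 V reversible.
eff = voltage_efficiency(1.25, 1.37)
print(round(eff, 1))
```

    Efficiencies near 100% at open circuit indicate the measured potential is set by the intended cell reaction alone, with little contribution from parasitic redox couples.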

  16. A Production Efficiency Model-Based Method for Satellite Estimates of Corn and Soybean Yields in the Midwestern US

    Directory of Open Access Journals (Sweden)

    Andrew E. Suyker

    2013-11-01

    Full Text Available Remote sensing techniques that provide synoptic and repetitive observations over large geographic areas have become increasingly important in studying the role of agriculture in global carbon cycles. However, it is still challenging to model crop yields based on remotely sensed data due to the variation in radiation use efficiency (RUE) across crop types and the effects of spatial heterogeneity. In this paper, we propose a production efficiency model-based method to estimate corn and soybean yields with MODerate Resolution Imaging Spectroradiometer (MODIS) data by explicitly handling the following two issues: (1) field-measured RUE values for corn and soybean are applied to relatively pure pixels instead of the biome-wide RUE value prescribed in the MODIS vegetation productivity product (MOD17); and (2) contributions to productivity from vegetation other than crops in mixed pixels are deducted at the level of MODIS resolution. Our estimated yields statistically correlate with the national survey data for rainfed counties in the Midwestern US with low errors for both corn (R² = 0.77; RMSE = 0.89 MT/ha) and soybeans (R² = 0.66; RMSE = 0.38 MT/ha). Because the proposed algorithm does not require any retrospective analysis that constructs empirical relationships between the reported yields and remotely sensed data, it could monitor crop yields over large areas.

  17. Efficiency in Microfinance Cooperatives

    Directory of Open Access Journals (Sweden)

    HARTARSKA, Valentina

    2012-12-01

    Full Text Available In recognition of cooperatives’ contribution to the socio-economic well-being of their participants, the United Nations has declared 2012 the International Year of Cooperatives. Microfinance cooperatives make up a large part of the microfinance industry. We study the efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ the classical efficiency analysis consisting of estimating a system of equations and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency), as well as dollar value of lending and deposits (sustainability). We find that microfinance cooperatives have increasing returns to scale, which means that the vast majority can lower cost if they become larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients, with a range from 40,000 to 180,000 borrowers.

  18. Efficiency profile method to study the hit efficiency of drift chambers

    International Nuclear Information System (INIS)

    Abyzov, A.; Bel'kov, A.; Lanev, A.; Spiridonov, A.; Walter, M.; Hulsbergen, W.

    2002-01-01

    A method based on the use of efficiency profiles is proposed to estimate the hit efficiency of drift chambers with a large number of channels. The performance of the method under real conditions of detector operation has been tested by analysing the experimental data from the HERA-B drift chambers.

  19. An Integrated Approach for Estimating the Energy Efficiency of Seventeen Countries

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2017-10-01

    Full Text Available Increased energy efficiency is one of the most effective ways to achieve climate change mitigation. This study aims to evaluate the energy efficiency of seventeen countries. The evaluation is based on an integrated method that combines the super slack-based model (super SBM) and the Malmquist productivity index (MPI) to investigate the energy efficiency of the seventeen countries during the period 2010–2015. The results show that the United States, Colombia, Japan, China, and Saudi Arabia perform the best in energy efficiency, whereas Brazil, Russia, Indonesia, and India perform the worst over the sample period. The energy efficiency of these countries derived mainly from technological improvement. The study provides suggestions for the governments of the seventeen countries in controlling energy consumption and contributing to environmental protection.

  20. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)

    2017-10-05

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  1. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well-known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. The GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated the standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
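The collinearity problem this record addresses is easy to demonstrate in a few lines of NumPy. The sketch below shows only the ordinary single-parameter ridge estimator (X'X + kI)^{-1}X'y, not the generalized or jackknifed variants proposed in the paper; the design, coefficients, and ridge constant k = 1 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a nearly collinear design: x2 is almost a copy of x1.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

def ridge(X, y, k):
    """Ordinary ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 1.0)
# Under near-collinearity the individual OLS coefficients are wildly
# unstable, while the ridge estimates are shrunk and better behaved;
# the identifiable sum of coefficients stays near the true value 2.
print(b_ols, b_ridge)
```

Shrinking with k > 0 always reduces the norm of the coefficient vector, which is the variance-reduction effect the GRR literature exploits.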

  2. Impact of reduced marker set estimation of genomic relationship matrices on genomic selection for feed efficiency in Angus cattle

    Directory of Open Access Journals (Sweden)

    Northcutt Sally L

    2010-04-01

    Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed, however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.

  3. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue in focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  4. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...

  5. Efficiency model of Russian banks

    OpenAIRE

    Pavlyuk, Dmitry

    2006-01-01

    The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.

  6. Multi-directional program efficiency

    DEFF Research Database (Denmark)

    Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth

    2016-01-01

    The present paper analyses both managerial and program efficiencies of Lithuanian family farms, in the tradition of Charnes et al. (Manag Sci 27(6):668–697, 1981) but with the important difference that multi-directional efficiency analysis rather than the traditional data envelopment analysis approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop farms have the highest program efficiency, but the lowest managerial efficiency, and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency).

  7. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
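The mixture form described in this record reduces to a convex combination of the two curve fits, with the weight driven by comparative model fit. The sketch below is a schematic illustration under an assumed inverse-RSS weighting; the paper's actual weighting scheme (and its adaptive local variant) is more involved.

```python
import numpy as np

def fit_weight(rss_par, rss_nonpar):
    """Illustrative weight: the parametric fit gets more weight when its
    residual sum of squares is smaller (better fit). An assumption for
    demonstration, not the paper's exact criterion."""
    return rss_nonpar / (rss_par + rss_nonpar)

def mixture_curve(f_par, f_nonpar, w):
    """Mixture-form estimator: weighted average of the two curve fits."""
    return w * np.asarray(f_par) + (1 - w) * np.asarray(f_nonpar)

# Toy check: when the parametric model fits much better (tiny RSS),
# the mixture leans almost entirely on the parametric curve.
w = fit_weight(rss_par=0.1, rss_nonpar=9.9)
est = mixture_curve([1.0, 2.0], [0.0, 0.0], w)
print(w, est)  # w = 0.99, est ≈ [0.99, 1.98]
```

When the parametric assumption holds, w tends to 1 and the estimator inherits parametric efficiency; under misspecification, w tends to 0 and consistency is retained, which is the compromise the abstract describes.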

  8. Overview of efficient algorithms for super-resolution DOA estimates

    Institute of Scientific and Technical Information of China (English)

    闫锋刚; 沈毅; 刘帅; 金铭; 乔晓林

    2015-01-01

    Computationally efficient methods for super-resolution direction of arrival (DOA) estimation aim to reduce the complexity of conventional techniques, to economize on system costs, and to enhance the robustness of DOA estimators against array geometries and other environmental restrictions, which has been an important topic in the field. According to the theory and elements of the multiple signal classification (MUSIC) algorithm and the primary derivations from MUSIC, state-of-the-art efficient super-resolution DOA estimators are classified into five types, which reduce complexity by real-valued computation, beam-space transformation, fast subspace estimation, rapid spectral search, and no spectral search, respectively. With this classification, a comprehensive overview of each kind of efficient method is given, and numerical comparisons among the estimators are conducted to illustrate their advantages. Future development trends of efficient algorithms for super-resolution DOA estimation are finally predicted from the basic requirements of real-world applications.
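For reference, the conventional MUSIC spectral search that these efficient variants accelerate can be sketched as follows. This is a minimal textbook implementation for a uniform linear array, not any of the five fast classes surveyed; the array size, source angle, and covariance below are illustrative.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, spacing=0.5):
    """Classical MUSIC pseudospectrum for a uniform linear array with
    sensor covariance R and element spacing in wavelengths. Efficient
    variants replace this exhaustive grid search with real-valued,
    beam-space, fast-subspace or rooting computations."""
    m = R.shape[0]
    _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : m - n_sources]          # noise-subspace eigenvectors
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * spacing * np.arange(m) * np.sin(th))
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# Toy check: a single source at 20 degrees, 8 sensors, near-noiseless R.
m, theta = 8, np.deg2rad(20.0)
a = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(theta))
R = np.outer(a, a.conj()) + 0.01 * np.eye(m)
grid = np.arange(-90, 91)
peak = grid[np.argmax(music_spectrum(R, 1, grid))]
print(peak)  # peak of the pseudospectrum, at/near 20
```

The per-grid-point steering-vector projection is exactly the cost that beam-space, rapid-search and search-free methods attack.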

  9. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    CERN Document Server

    Rao, D V; Brunetti, A; Gigante, G E; Takeda, T; Itai, Y; Akatsuka, T

    2002-01-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using an X-ray tube and a secondary target as the excitation source, in order to produce nearly monoenergetic K alpha radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. (authors)
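The on-axis special case of such a solid-angle calculation has a closed form, Ω = 2π(1 − d/√(d² + r²)) for a circular aperture of radius r at distance d, and the geometrical efficiency for an isotropic source is Ω/4π. The sketch below uses this standard formula as a simplified illustration of how the solid angle varies as a collimator is moved toward or away from the source; it is not the authors' full geometry.

```python
import math

def solid_angle_disc(radius, distance):
    """On-axis solid angle (sr) subtended by a circular aperture of the
    given radius at a point source a distance away:
    Omega = 2*pi*(1 - d / sqrt(d^2 + r^2))."""
    return 2.0 * math.pi * (1.0 - distance / math.hypot(distance, radius))

def geometrical_efficiency(radius, distance):
    """Fraction of an isotropic source's emission passing the aperture."""
    return solid_angle_disc(radius, distance) / (4.0 * math.pi)

# Moving the aperture away from the source shrinks the solid angle;
# at zero distance it subtends the full half space (2*pi sr).
print(solid_angle_disc(1.0, 10.0))        # ~0.0312 sr
print(geometrical_efficiency(1.0, 10.0))  # ~0.00248
```

The off-axis and finite-collimator-length corrections studied in the paper do not have such a simple closed form, which is why the authors deduce them numerically.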

  10. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Rao, D.V.; Cesareo, R.; Brunetti, A. [Sassari University, Istituto di Matematica e Fisica (Italy); Gigante, G.E. [Roma Universita, Dipt. di Fisica (Italy); Takeda, T.; Itai, Y. [Tsukuba Univ., Ibaraki (Japan). Inst. of Clinical Medicine; Akatsuka, T. [Yamagata Univ., Yonezawa (Japan). Faculty of Engineering

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic K{alpha} radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work. (authors)

  11. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    Science.gov (United States)

    Rao, D. V.; Cesareo, R.; Brunetti, A.; Gigante, G. E.; Takeda, T.; Itai, Y.; Akatsuka, T.

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation and geometrical efficiency for a system with experimental arrangements using X-ray tube and secondary target as an excitation source in order to produce the nearly monoenergetic Kα radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced and the optimum value is used for the experimental work.

  12. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  13. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    Science.gov (United States)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems involving latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with attitude-scale questionnaires yield data in the form of scores, which should be transformed to scale data before analysis. The path coefficients, which are the parameter estimators, are calculated from scale data obtained with the method of successive interval (MSI) or the summated rating scale (SRS). This research identifies which transformation method is better: the method whose scale data produce path coefficients (parameter estimators) with smaller variance is said to be more efficient. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. For simulated data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
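The relative-efficiency comparison used in this record boils down to a ratio of sampling variances of the two sets of parameter estimates. A minimal sketch, with hypothetical coefficient samples standing in for MSI- and SRS-based path coefficients:

```python
import numpy as np

def relative_efficiency(est_a, est_b):
    """Relative efficiency of estimator A versus estimator B, computed
    as the ratio of their sampling variances. ER = 1 means the two
    estimation routes are equally efficient; ER > 1 means A has the
    smaller variance (is more efficient)."""
    return np.var(est_b, ddof=1) / np.var(est_a, ddof=1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 10_000)  # hypothetical path coefficients, method A
b = rng.normal(0.0, 1.3, 10_000)  # method B, larger sampling spread
print(relative_efficiency(a, b))  # around (1.3/1.0)^2
```

An observed ER of about 1.3 in favor of MSI, as reported above for highly correlated items, would mean the SRS-based coefficients have roughly 30% more sampling variance.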

  14. 16 CFR 305.5 - Determinations of estimated annual energy consumption, estimated annual operating cost, and...

    Science.gov (United States)

    2010-01-01

    ... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND... § 305.5 Determinations of estimated annual energy consumption, estimated annual operating cost, and...

  15. Efficiencies of Internet-based digital and paper-based scientific surveys and the estimated costs and time for different-sized cohorts.

    Directory of Open Access Journals (Sweden)

    Constantin E Uhlig

    Full Text Available To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts. Invitations to participate in a survey were distributed via e-mail to employees of two university hospitals (E1 and E2) and to members of a medical association (E3), as a link placed in a special text on the municipal homepage regularly read by the administrative employees of two cities (H1 and H2), and paper-based to workers at an automobile enterprise (P1) and to college (P2) and senior (P3) students. The main parameters analyzed included the numbers of invited and actual participants, and the time and cost to complete the survey. Statistical analysis was descriptive, except for the Kruskal-Wallis H-test, which was used to compare the three recruitment methods. Cost efficiencies were compared and extrapolated to different-sized cohorts. The ratios of completely answered to distributed questionnaires were between 81.5% (E1) and 97.4% (P2). Between 6.4% (P1) and 57.0% (P2) of the invited participants completely answered the questionnaires. The costs per completely answered questionnaire were $0.57-$1.41 (E1-3), $1.70 and $0.80 for H1 and H2, respectively, and $3.36-$4.21 (P1-3). Based on our results, electronic surveys with 10, 20, 30, or 42 questions would be most cost- (and time-) efficient if more than 101.6-225.9 (128.2-391.7), 139.8-229.2 (93.8-193.6), 165.8-230.6 (68.7-115.7), or 188.2-231.5 (44.4-72.7) participants were required, respectively. The study efficiency depended on the technical modalities of the survey methods and the engagement of the participants. Given our study design, our results suggest that in similar projects requiring more than two to three hundred participants, the most efficient way of conducting a questionnaire-based survey is likely via the Internet with a digital questionnaire, specifically via a centralized e-mail.
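Break-even cohort sizes of the kind reported in this record come from comparing a method with higher setup cost but lower per-response cost against its opposite. A toy computation with illustrative dollar figures (the setup costs are assumptions; only the per-response magnitudes echo the ranges above):

```python
def cost(n_responses, fixed, per_response):
    """Total cost of fielding a survey: fixed setup cost plus a
    per-completed-response cost. Dollar figures are illustrative."""
    return fixed + per_response * n_responses

def break_even(fixed_a, per_a, fixed_b, per_b):
    """Smallest cohort size at which method A (higher setup, cheaper
    per response) becomes strictly cheaper than method B."""
    n = 1
    while cost(n, fixed_a, per_a) >= cost(n, fixed_b, per_b):
        n += 1
    return n

# Hypothetical digital survey: $150 setup, $0.80 per response (cf. H2);
# hypothetical paper survey: $20 setup, $3.50 per response (cf. P1-P3).
print(break_even(150, 0.80, 20, 3.50))  # → 49
```

With realistic setup costs the crossover lands in the low hundreds of participants, consistent with the study's conclusion that digital surveys win for cohorts beyond two to three hundred.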

  16. Relationship between competition and efficiency in the Czech banking industry

    Directory of Open Access Journals (Sweden)

    Iveta Řepková

    2013-01-01

    Full Text Available The aim of the paper is to estimate the relationship between competition and efficiency in the Czech banking industry in the period 2001–2010. A theoretical definition and literature review of the relationship between banking competition and efficiency is included. The Lerner index and Data Envelopment Analysis were used to estimate the degree of competition and efficiency in the Czech banking sector. The market structure of the Czech banking industry was estimated as monopolistic competition, and a slight increase in competition in the banking sector was found. The efficiency of the Czech banks increased in the analysed period. Using a Johansen cointegration test, the paper contributes to the empirical literature by testing not only the causality running from competition to efficiency, but also the reverse effect running from efficiency to competition. A positive relationship between competition and efficiency was estimated in the Czech banking sector. These findings are in line with the Quiet Life Hypothesis and suggest that an increase in competition will contribute to efficiency.

  17. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  18. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
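The complexity reduction described in this record, marginalizing a latent inheritable factor in linear rather than exponential time, can be illustrated on a toy chain-shaped "pedigree". Real pedigrees are trees and the paper's model is richer; all probabilities below are made-up assumptions for the demonstration.

```python
import numpy as np
from itertools import product

# Toy chain "pedigree": binary risk factor Z_i passed parent -> child.
# Prior P(Z_1), transition P(Z_i | Z_{i-1}), and per-member evidence
# phi_i(Z_i) (e.g. likelihood of an observed onset age). Illustrative.
prior = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],
                  [0.20, 0.80]])       # rows indexed by the parent state
evid = np.array([[0.7, 0.3],
                 [0.6, 0.4],
                 [0.2, 0.8]])          # one row per family member

def likelihood_brute(prior, trans, evid):
    """Marginal likelihood by summing over all 2^n latent configurations."""
    n = len(evid)
    total = 0.0
    for z in product((0, 1), repeat=n):
        p = prior[z[0]] * evid[0][z[0]]
        for i in range(1, n):
            p *= trans[z[i - 1]][z[i]] * evid[i][z[i]]
        total += p
    return total

def likelihood_sum_product(prior, trans, evid):
    """The same marginal in O(n) via message passing along the chain."""
    msg = prior * evid[0]
    for i in range(1, len(evid)):
        msg = (msg @ trans) * evid[i]
    return float(msg.sum())

print(likelihood_brute(prior, trans, evid))
print(likelihood_sum_product(prior, trans, evid))  # identical value
```

The forward message absorbs one family member per step, which is exactly why the paper's E-step scales linearly in the number of family members instead of exponentially.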

  19. On the Optimality of Multivariate S-Estimators

    NARCIS (Netherlands)

    Croux, C.; Dehon, C.; Yadine, A.

    2010-01-01

    In this paper we maximize the efficiency of a multivariate S-estimator under a constraint on the breakdown point. In the linear regression model, it is known that the highest possible efficiency of a maximum breakdown S-estimator is bounded above by 33% for Gaussian errors. We prove the surprising

  20. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  1. Economics of appliance efficiency

    International Nuclear Information System (INIS)

    Tiedemann, K.H.

    2009-01-01

    Several significant developments occurred in 2001 that affect the impact of market transformation programs. This paper presented and applied an econometric approach to the identification and estimation of market models for refrigerators, clothes washers, dishwashers and room air conditioners. The purpose of the paper was to understand the impact of energy conservation policy developments on sales of energy efficient appliances. The paper discussed the approach with particular reference to building a database of sales and drivers of sales using publicly available information; estimation of the determinants of sales using econometric models; and estimation of the individual impacts of prices, gross domestic product (GDP) and energy conservation policies on sales using regression results. Market and policy developments were also presented, such as the change a light, save the world promotion; the California energy crisis; and the Pacific Northwest drought-induced hydro power shortage. It was concluded that an increase in GDP increased the sales of both more efficient and less efficient refrigerators, clothes washers, dishwashers, and room air conditioners. An increase in electricity price increased sales of Energy Star refrigerators, clothes washers, dishwashers, and room air conditioners. 4 refs., 8 tabs.

  2. Typology of efficiency of functioning of enterprise

    Directory of Open Access Journals (Sweden)

    I.I. Svitlyshyn

    2015-03-01

    Full Text Available Measuring and estimating the efficiency of agrarian-sector enterprises has traditionally been performed for only some of its types, focusing mainly on operating activity. Investment and financial activity, as inalienable constituents of an enterprise's economic process, are thereby left out of consideration. In addition, the scientific literature and practice focus on efficiency at the stages of "production-exchange"; the stages of "distribution" and "consumption" at the enterprise level are not examined. This distorts the results of measuring and estimating efficiency and makes proposals for its growth ineffective. Accordingly, an approach is developed to determine and systematize the basic types of efficiency of agrarian-sector enterprises. The approach is based on a proposed model that systematically represents all stages and types of economic activity of the enterprise. The basic features of efficiency are interpreted at every stage and for each type of economic activity, which provides completeness and consistency in its measurement and estimation.

  3. Sparse DOA estimation with polynomial rooting

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren

    2015-01-01

    Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods.
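The final rooting step, recovering DOAs from the phases of polynomial roots near the unit circle, is common to root-MUSIC-style methods and can be sketched generically. This is not the paper's Root-CS formulation (which builds the polynomial from the CS dual variables); the polynomial below is constructed directly from two assumed source angles for illustration.

```python
import numpy as np

def roots_to_doas(coeffs, spacing=0.5, tol=0.05):
    """Map the roots of a DOA-encoding polynomial to arrival angles.
    For element spacing in wavelengths, a root z ~ exp(j*2*pi*spacing*
    sin(theta)) near the unit circle corresponds to direction theta."""
    doas = []
    for z in np.roots(coeffs):
        if abs(abs(z) - 1.0) < tol:  # keep only roots near the unit circle
            doas.append(float(np.degrees(
                np.arcsin(np.angle(z) / (2 * np.pi * spacing)))))
    return sorted(doas)

# Build a degree-2 polynomial whose roots encode sources at 30 and -10 deg.
z0 = np.exp(2j * np.pi * 0.5 * np.sin(np.radians(30.0)))
z1 = np.exp(2j * np.pi * 0.5 * np.sin(np.radians(-10.0)))
coeffs = np.array([1.0, -(z0 + z1), z0 * z1])   # (z - z0)(z - z1)
print(roots_to_doas(coeffs))  # ≈ [-10.0, 30.0]
```

Replacing a fine angular grid search with one `np.roots` call is the efficiency gain that rooting-based estimators share.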

  4. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are two-fold. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  5. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

    Full Text Available Background & Aims of the Study: A feed forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in an FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for model testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as the goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a three-hidden-layer structure of 4-3-1 and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were obtained to be 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration and pH can improve the TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN for the assessment of soil washing processes and the control of petroleum hydrocarbon emissions into the environment.
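The goodness-of-fit measures quoted in this record (RMSE and R²) are straightforward to reproduce. A small sketch with hypothetical TPH-removal percentages, not the study's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, the study's goodness-of-fit measure."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical observed vs. predicted removal efficiencies (%).
y_true = [60.0, 70.0, 80.0, 90.0]
y_pred = [62.0, 69.0, 78.0, 91.0]
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```

A large gap between training metrics (RMSE 2.596, R² 0.966 above) and testing metrics (10.70, 0.78) computed this way is the usual signal to check for overfitting.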

  6. Agent-based Security and Efficiency Estimation in Airport Terminals

    NARCIS (Netherlands)

    Janssen, S.A.M.

    We investigate the use of an Agent-based framework to identify and quantify the relationship between security and efficiency within airport terminals. In this framework, we define a novel Security Risk Assessment methodology that explicitly models attacker and defender behavior in a security

  7. Plant Friendly Input Design for Parameter Estimation in an Inertial System with Respect to D-Efficiency Constraints

    Directory of Open Access Journals (Sweden)

    Wiktor Jakowluk

    2014-11-01

    Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment, and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the Fisher information matrix determinant (D-optimality), expressed in conventional Bolza form. Since under such conditions of the identification experiment we can only speak of D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
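The D-efficiency measure used in this record compares the information content of a candidate design against the D-optimal one: D-eff = (det M / det M*)^(1/p), where M is the normalized Fisher information matrix and p the number of parameters. A minimal sketch, using a quadratic regression model on [-1, 1] (a textbook case, not the paper's inertial system) where the D-optimal design is known in closed form:

```python
import numpy as np

def info_matrix(points, weights):
    """Normalized Fisher information matrix M for the quadratic model
    y = b0 + b1*x + b2*x^2, with regressor vector f(x) = (1, x, x^2)."""
    M = np.zeros((3, 3))
    for x, w in zip(points, weights):
        f = np.array([1.0, x, x * x])
        M += w * np.outer(f, f)
    return M

# D-optimal design for the quadratic model on [-1, 1]: equal weight
# on {-1, 0, 1} (a classical result).
M_opt = info_matrix([-1.0, 0.0, 1.0], [1 / 3] * 3)

# A "plant-friendlier" candidate: uniform weight on 11 equispaced points,
# which perturbs the system less aggressively at the extremes.
xs = np.linspace(-1, 1, 11)
M_uni = info_matrix(xs, [1 / len(xs)] * len(xs))

p = 3  # number of model parameters
d_eff = (np.linalg.det(M_uni) / np.linalg.det(M_opt)) ** (1 / p)
print(d_eff)
```

A D-efficiency below 1 quantifies exactly the trade-off the paper studies: the gentler design buys friendliness at the price of estimation accuracy.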

  8. On the Use of Student Data in Efficiency Analysis--Technical Efficiency in Swedish Upper Secondary School

    Science.gov (United States)

    Waldo, Staffan

    2007-01-01

    While individual data form the base for much empirical analysis in education, this is not the case for analysis of technical efficiency. In this paper, efficiency is estimated using individual data which is then aggregated to larger groups of students. Using an individual approach to technical efficiency makes it possible to carry out studies on a…

  9. An improved routine for the fast estimate of ion cyclotron heating efficiency in tokamak plasmas

    International Nuclear Information System (INIS)

    Brambilla, M.

    1992-02-01

    The subroutine ICEVAL for the rapid simulation of Ion Cyclotron Heating in tokamak plasmas is based on analytic estimates of the wave behaviour near resonances, and on drastic but reasonable simplifications of the real geometry. The subroutine has been rewritten to improve the model and to facilitate its use as input in transport codes. In the new version the influence of quasilinear minority heating on the damping efficiency is taken into account using the well-known Stix analytic approximation. Among other improvements are: a) the possibility of considering plasmas with more than two ion species; b) inclusion of Landau, transit-time and collisional damping on the electrons not localised at resonances; c) better models for the antenna spectrum and for the construction of the power deposition profiles. The results of ICEVAL are compared in detail with those of the full-wave code FELICE for the case of Hydrogen minority heating in a Deuterium plasma; except for details which depend on the excitation of global eigenmodes, agreement is excellent. ICEVAL is also used to investigate the enhancement of the absorption efficiency due to quasilinear heating of the minority ions. The effect is a strongly non-linear function of the available power, and decreases rapidly with increasing concentration. For parameters typical of ASDEX Upgrade plasmas, about 4 MW are required to produce a significant increase of the single-pass absorption at concentrations between 10 and 20%. (orig.)

  10. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    Energy Technology Data Exchange (ETDEWEB)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Verhaegen, F. [Department of Radiation Oncology - MAASTRO, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada); Jaffray, D. A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2015-01-15

    suitable GOF metric with strong correlation with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed between 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
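The fitting step this record describes — representing a noisy Monte Carlo scatter estimate by a limited sum of sines and cosines via a low-pass filtered FFT — can be shown in one dimension. The signal shape and noise level below are invented for the illustration; the point is that truncating the Fourier series removes most of the statistical noise while keeping the smooth scatter shape:

```python
import numpy as np

rng = np.random.default_rng(2)

# A smooth "scatter distribution" along one detector row, plus heavy
# statistical noise as if from very few Monte Carlo photon histories.
n = 256
x = np.linspace(0, 1, n)
truth = 1.0 + 0.5 * np.sin(2 * np.pi * x) + 0.2 * np.cos(4 * np.pi * x)
noisy = truth + rng.normal(0, 0.3, n)

def lowpass_fft_fit(signal, keep):
    """Fit the signal with a limited sum of sines/cosines by zeroing
    all but the lowest `keep` Fourier frequencies."""
    F = np.fft.rfft(signal)
    F[keep:] = 0.0
    return np.fft.irfft(F, n=len(signal))

fitted = lowpass_fft_fit(noisy, keep=5)

rms_raw = np.sqrt(np.mean((noisy - truth) ** 2))
rms_fit = np.sqrt(np.mean((fitted - truth) ** 2))
print(rms_raw, rms_fit)
```

Suppressing all but a handful of frequencies is what lets the CMCF approach get away with far fewer photon histories: the residual noise in the fit scales with the number of retained Fourier terms rather than with the raw per-pixel variance.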

  11. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300 m transects, with clip harvest plots spaced every 50 m, and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m.
We conducted regression analyses

  12. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
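The core signal-processing idea in this record — estimating the delay of random fluctuations between two sensors with LMS-updated weights, where the delay maps to flow speed — can be sketched with a plain adaptive FIR filter. This is a conceptual stand-in, not the chip's multiplierless switched-current implementation; the signal, tap count and step size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random "temperature fluctuation" signal and a delayed copy of it,
# standing in for the upstream/downstream sensors in the flow channel.
true_delay = 7          # samples; in the chip, delay maps to flow speed
n_taps = 16             # adaptive FIR length (the actual chip uses 96 weights)
x = rng.normal(0, 1, 4000)
d = np.concatenate([np.zeros(true_delay), x[:-true_delay]])

w = np.zeros(n_taps)
mu = 0.01               # LMS step size
for k in range(n_taps, len(x)):
    u = x[k - n_taps + 1:k + 1][::-1]   # u[i] = x[k - i], newest first
    e = d[k] - w @ u                    # prediction error
    w += mu * e * u                     # LMS weight update

# The weights converge toward a unit impulse at the true lag,
# so the delay estimate is the index of the dominant tap.
estimated_delay = int(np.argmax(np.abs(w)))
print(estimated_delay)
```

After convergence the weight vector approximates a delta at the lag between the sensors, which is exactly the quantity the flow estimator needs.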

  13. Estimating the potential for electricity savings in households

    International Nuclear Information System (INIS)

    Boogen, Nina

    2017-01-01

    Improving efficiency in the use of energy is an important goal for many nations since end-use energy efficiency can help to reduce CO_2 emissions. Furthermore, since the residential sector in industrialised countries requires around one third of the end-use electricity, it is important for policy makers to estimate the scope for electricity saving in households to reduce electricity consumption by using appropriate steering mechanisms. We estimate the level of technical efficiency in the use of electricity using data from a Swiss household survey. We find an average inefficiency in electricity use by Swiss households of around 20 to 25%. Bottom-up economic-engineering models estimate the potential in Switzerland to be around 15%. In this paper we use a sub-vector input distance frontier function based on economic foundations. Our estimates lie at the upper end of the electricity saving potential estimated by the afore-mentioned economic-engineering approach. - Highlights: • We estimate the level of efficiency in the use of electricity by Swiss households. • We apply a non-radial input distance function and stochastic frontier methods. • We use data from two waves of a Swiss household survey conducted in 2005 and 2011. • We find an inefficiency in the use of electricity of around 20–25%.

  14. Studying the Efficiency of Industrial Dairy Farms of Saqqez and Divandarreh Cities: Using Super-Efficiency Approach

    Directory of Open Access Journals (Sweden)

    S.J. Mohammadi

    2016-03-01

    Full Text Available Introduction: In much of the world, and particularly in developing countries, livestock is the most important agricultural sub-sector. Livestock primary and secondary industries have a special place in the national economy because of the great value of their products, creating job opportunities, providing healthy products for consumers, increasing export income through access to global markets for livestock products, and finally their undeniable role in food security. The demand for milk in Iran increased due to an increase in population, and the amount of milk production also increased. The greater share of the increased milk production goes to industrial dairy farms. One of the major ways to increase milk production continually is to make its production efficient and improve economic conditions. The current study attempts to determine the efficiency and ranking of industrial dairy farms in the Saqqez and Divandarreh cities using a super-efficiency model. Materials and Methods: The statistical population of the study is all active industrial dairy farms of the Saqqez and Divandarreh cities, about 19 farms. The required data for calculating the efficiency were gathered by surveying and completing questionnaires for the year 2013. In this study, first, the Data Envelopment Analysis (DEA) method and the GAMS software package were used to estimate super-efficiency for each farm. Super-efficiency is a modified DEA model in which each farm can get an efficiency score greater than one. Then, in order to make sure the obtained super-efficiency scores are unbiased, the modified model of Banker and Gifford was re-estimated, and the conventional efficiency scores of farms were compared by normalizing and removing some of the scores of outlier farms based on pre-selected screens. The model suggests conditions under which some of the estimates for dairy farms might have been contaminated with error. As a result, it has been
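The super-efficiency idea in this record — each unit scored against a frontier built from the *other* units, so the best unit can score above one — is easiest to see in a deliberately simplified one-input, one-output setting, where DEA efficiency reduces to a productivity ratio. The numbers below are illustrative, not the farm data, and a full multi-input model would need the linear program solved in GAMS:

```python
import numpy as np

# One input (e.g. feed cost) and one output (milk) per farm.  With a
# single input and output, CCR efficiency reduces to a ratio of
# productivities, which makes the super-efficiency idea easy to see.
inputs = np.array([10.0, 12.0, 8.0, 15.0, 9.0])
outputs = np.array([20.0, 30.0, 24.0, 30.0, 18.0])

productivity = outputs / inputs

def ccr_efficiency(k):
    """Standard CCR score: own productivity relative to the best unit
    (including itself), so the score is capped at 1."""
    return productivity[k] / productivity.max()

def super_efficiency(k):
    """Andersen-Petersen style score: unit k is excluded from its own
    reference set, so the best unit can score above one."""
    others = np.delete(productivity, k)
    return productivity[k] / others.max()

scores = [super_efficiency(k) for k in range(len(inputs))]
print(scores)
```

Only the frontier farm's ranking changes between the two scores, which is precisely why super-efficiency is useful for ranking: it breaks the tie among units that all score 1 under standard DEA.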

  15. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion is not easy to satisfy in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting.
The more appropriate measure based on the maximum likelihood estimator (MLE
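The Poisson MLE objective this record advocates is usually written as the Poisson deviance, 2·Σ(m − n + n·ln(n/m)) for model m and counts n. The sketch below minimizes it for a fluorescence-decay-like histogram; for clarity it uses a coarse grid search rather than the paper's Levenberg-Marquardt extension, and all decay parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson-distributed event-counting histogram from an exponential
# decay, as in fluorescence lifetime imaging.
t = np.arange(64) * 0.1
A_true, tau_true = 50.0, 1.5
counts = rng.poisson(A_true * np.exp(-t / tau_true))

def poisson_deviance(counts, model):
    """Twice the negative Poisson log-likelihood ratio (the MLE
    objective); the n*log(n/m) term is taken as 0 where n = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(counts > 0, counts * np.log(counts / model), 0.0)
    return 2.0 * np.sum(model - counts + term)

# Coarse grid search over (A, tau) in place of Levenberg-Marquardt,
# just to show the objective being minimized.
A_grid = np.linspace(30, 70, 81)
tau_grid = np.linspace(0.5, 3.0, 101)
best = min(
    ((A, tau) for A in A_grid for tau in tau_grid),
    key=lambda p: poisson_deviance(counts, p[0] * np.exp(-t / p[1])),
)
print(best)
```

Minimizing this deviance instead of the least-squares sum is what removes the low-count bias the record describes; the paper's contribution is making the same minimization fast inside a standard L-M loop.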

  16. A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity

    Directory of Open Access Journals (Sweden)

    Marcelo E. Cassiolato

    2002-06-01

    Full Text Available Water-soluble plant organic compounds have been proposed as efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts in neutralizing soil acidity. Plant samples were dried at 65 °C for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity and the volume of NaOH solution spent to neutralize the H+ ions of the plant extracts were correlated with the effect of the plant extract in increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts in neutralizing soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.

  17. Online wave estimation using vessel motion measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2018-01-01

    In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel parameters and motion transfer functions are required as input. Apart from this the method is signal-based, with no assumptions on the wave spectrum shape, and as a result it is computationally efficient. The algorithm is implemented in a dynamic positioning (DP) control system, and tested through simulations...
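The linear relation underlying this kind of wave estimation is that a measured response spectrum equals the wave spectrum scaled by the squared magnitude of the motion transfer function, S_resp(ω) = |RAO(ω)|² · S_wave(ω). The sketch below inverts that relation for a single heave channel with a regularized division; the RAO shape, frequency grid and wave spectrum are all hypothetical, and the paper's method solves the multi-channel version iteratively:

```python
import numpy as np

# Frequency grid and a ground-truth wave spectrum (arbitrary smooth
# shape: no parametric spectrum is assumed, as the method is signal-based).
w = np.linspace(0.3, 1.5, 100)
S_wave = np.exp(-((w - 0.7) / 0.15) ** 2)

# Hypothetical heave motion transfer function (low-pass RAO).
rao_heave = 1.0 / (1.0 + (w / 0.9) ** 4)

# What the vessel "measures": the response spectrum of heave.
S_resp = np.abs(rao_heave) ** 2 * S_wave

# Linear inversion for the wave spectrum, regularized so that
# near-zero RAO values do not blow up the estimate.
eps = 1e-3
S_est = S_resp * np.abs(rao_heave) ** 2 / (np.abs(rao_heave) ** 4 + eps)

err = np.max(np.abs(S_est - S_wave))
print(err)
```

Combining heave with roll and pitch, as the record describes, over-determines the same linear system and makes the estimate robust where any single RAO is small.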

  18. High-Level Design Space and Flexibility Exploration for Adaptive, Energy-Efficient WCDMA Channel Estimation Architectures

    Directory of Open Access Journals (Sweden)

    Zoltán Endre Rákossy

    2012-01-01

    Full Text Available Due to the fast changing wireless communication standards coupled with strict performance constraints, the demand for flexible yet high-performance architectures is increasing. To tackle the flexibility requirement, software-defined radio (SDR is emerging as an obvious solution, where the underlying hardware implementation is tuned via software layers to the varied standards depending on power-performance and quality requirements leading to adaptable, cognitive radio. In this paper, we conduct a case study for representatives of two complexity classes of WCDMA channel estimation algorithms and explore the effect of flexibility on energy efficiency using different implementation options. Furthermore, we propose new design guidelines for both highly specialized architectures and highly flexible architectures using high-level synthesis, to enable the required performance and flexibility to support multiple applications. Our experiments with various design points show that the resulting architectures meet the performance constraints of WCDMA and a wide range of options are offered for tuning such architectures depending on power/performance/area constraints of SDR.

  19. Validation of an efficient visual method for estimating leaf area index ...

    African Journals Online (AJOL)

    This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...

  20. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because the correlation among pairs is ignored. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  1. Regional and global exergy and energy efficiencies

    Energy Technology Data Exchange (ETDEWEB)

    Nakicenovic, N; Kurz, R [International Inst. for Applied Systems Analysis, Laxenburg (Austria). Environmentally Compatible Energy Strategies (ECS) Project]; Gilli, P V [Graz Univ. of Technology (Austria)]

    1996-03-01

    We present estimates of global energy efficiency by applying second-law (exergy) analysis to regional and global energy balances. We use a uniform analysis of national and regional energy balances and aggregate these balances first for three main economic regions and subsequently into world totals. The procedure involves assessment of energy and exergy efficiencies at each step of energy conversion, from primary exergy to final and useful exergy. Ideally, the analysis should be extended to include actual delivered energy services; unfortunately, data are scarce and only rough estimates can be given for this last stage of energy conversion. The overall result is that the current global primary to useful exergy efficiency is about one-tenth of the theoretical maximum and the service efficiency is even lower. (Author)
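The aggregation the record describes — per-region efficiencies at each conversion step (primary to final, final to useful), rolled up into a global primary-to-useful figure — amounts to a primary-weighted average of per-region step products. A minimal sketch with invented numbers (not the paper's estimates):

```python
# Exergy flows and step efficiencies for two hypothetical regions.
regions = {
    # region: (primary exergy in EJ, primary->final eff., final->useful eff.)
    "north": (120.0, 0.72, 0.35),
    "south": (80.0, 0.65, 0.28),
}

total_primary = sum(p for p, _, _ in regions.values())
total_useful = sum(p * e1 * e2 for p, e1, e2 in regions.values())

# Aggregate primary-to-useful efficiency: useful exergy delivered divided
# by primary exergy mobilized, i.e. a primary-weighted average of the
# per-region products of step efficiencies.
overall = total_useful / total_primary
print(overall)
```

Extending the chain with a third, service-efficiency factor per region would reproduce the paper's observation that the overall figure drops further once energy services are accounted for.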

  2. Doubly Robust Estimation of Optimal Dynamic Treatment Regimes

    DEFF Research Database (Denmark)

    Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne

    2014-01-01

    We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...

  3. Measuring economy-wide energy efficiency performance: A parametric frontier approach

    International Nuclear Information System (INIS)

    Zhou, P.; Ang, B.W.; Zhou, D.Q.

    2012-01-01

    This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.
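A frontier-based efficiency index of the kind this record estimates can be sketched with corrected OLS (COLS), a deterministic simplification of the stochastic frontier analysis the paper actually uses: fit energy use on output by OLS, shift the line to bound the data from below, and read off each unit's distance to the frontier. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Log energy use of 30 hypothetical economies as a function of log GDP,
# plus a one-sided inefficiency term u >= 0 (extra energy burned above
# the frontier) and a little measurement noise.
log_gdp = rng.uniform(5, 10, 30)
u = rng.exponential(0.2, 30)                  # inefficiency
log_energy = 1.0 + 0.8 * log_gdp + u + rng.normal(0, 0.02, 30)

# Corrected OLS: fit by OLS, then shift the intercept down so the line
# bounds the data from below -- a deterministic stand-in for the
# stochastic frontier estimation used in the paper.
Z = np.column_stack([np.ones_like(log_gdp), log_gdp])
beta = np.linalg.lstsq(Z, log_energy, rcond=None)[0]
resid = log_energy - Z @ beta
beta[0] += resid.min()                        # frontier = minimum energy use

u_hat = log_energy - Z @ beta                 # estimated inefficiency >= 0
efficiency = np.exp(-u_hat)                   # 1 = on the frontier
print(efficiency.min(), efficiency.max())
```

The stochastic frontier version replaces the deterministic shift with a composed-error likelihood, which is what gives the parametric approach the discriminating power the record highlights.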

  4. Efficiency in the Community College Sector: Stochastic Frontier Analysis

    Science.gov (United States)

    Agasisti, Tommaso; Belfield, Clive

    2017-01-01

    This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…

  5. The Efficiency of OLS Estimators of Structural Parameters in a Simple Linear Regression Model in the Calibration of the Averages Scheme

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available The simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the many fundamental questions in the model concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. In the literature on the subject one can find approaches to this question and certain solutions in that regard. Methodically, they are borrowed from the multiple regression model or from a boundary partial model. Not everything here, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints, which, as a result of choosing the appropriate calibrator for each variable, directly leads to showing this property. The choice of such a calibrator is a separate matter. These deliberations, on account of the volume and kinds of calibration, were divided into a few parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, a preliminary scheme within the most basic framework of the proposed methodology. Within this framework the future outlines and general premises that constitute the basis of further generalizations are created.

  6. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...

  7. Methodical Approach to Estimation of Energy Efficiency Parameters of the Economy Under the Structural Changes in the Fuel And Energy Balance (on the Example of Baikal Region

    Directory of Open Access Journals (Sweden)

    Boris Grigorievich Saneev

    2013-12-01

    Full Text Available The authors consider a methodical approach which allows estimating energy efficiency parameters of the region's economy using a fuel and energy balance (FEB). This approach was tested on the specific case of the Baikal region. During the testing process the authors developed ex ante and ex post FEBs and estimated energy efficiency parameters such as the energy, electricity and heat capacity of GRP, coefficients of useful utilization of fuel and energy resources, and a monetary version of the FEB. Forecast estimations are based on the assumptions and limitations of a technologically intensive development scenario for the region. The authors show that the main factor in structural changes in the fuel and energy balance will be the large-scale development of hydrocarbon resources in the Baikal region. It will cause structural changes in the composition of both the debit and credit sides of the FEB (namely the structure of export and final consumption of fuel and energy resources). The authors assume that the forecast structural changes of the region's FEB will significantly improve the energy efficiency parameters of the economy: the energy capacity of GRP will decrease by 1.5 times in 2010-2030, electricity and heat capacity by 1.9 times; coefficients of useful utilization of fuel and energy resources will increase by 3-5 percentage points. This will save about 20 million tons of fuel equivalent (about 210 billion rubles in 2011 prices) until 2030.

  8. The exogenous factors affecting the cost efficiency of power generation

    International Nuclear Information System (INIS)

    Chang, D.-S.; Chen, Y.-T.; Chen, W.-D.

    2009-01-01

    This paper employs stochastic frontier analysis (SFA) to examine cost efficiency and scale economies in the Taiwan Power Company (TPC), using panel data covering the period 1995-2006. In most previous studies, efficiency estimated from panel data without testing for endogeneity may yield a biased estimator resulting from the correlation between inputs and the individual effect. A Hausman test is conducted in this paper to examine the endogeneity of the input variables, and an appropriate model is then selected based on the test result. This study finds that power generation exhibits increasing returns to scale across all the power plants based on the pooled data. We also use installed capacity, service years of the power plant, and type of fuel as explanatory variables to account for the estimated cost efficiency of each plant in a logistic regression model, examining the factors affecting the individual efficiency estimates. The results demonstrate that installed capacity has a positive relationship with cost efficiency, while years in service has a negative relationship.
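The second-stage regression this record describes — explaining plant-level efficiency scores, which live in (0, 1), by plant characteristics through a logistic model — can be sketched by regressing the logit of the score on the explanatory factors. The data below are synthetic, generated so that capacity helps and plant age hurts, matching the signs the paper reports:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical plant-level data: cost-efficiency scores in (0, 1),
# generated so that capacity raises efficiency and age lowers it.
n = 200
capacity = rng.uniform(100, 1000, n)      # installed capacity, MW
age = rng.uniform(1, 40, n)               # service years
logit_eff = 0.5 + 0.002 * capacity - 0.03 * age + rng.normal(0, 0.2, n)
eff = 1 / (1 + np.exp(-logit_eff))        # squash into (0, 1)

# Second-stage regression: logit(efficiency) on the explanatory factors,
# estimated by ordinary least squares on the transformed scores.
Z = np.column_stack([np.ones(n), capacity, age])
y = np.log(eff / (1 - eff))
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
print(beta)
```

The recovered coefficient signs (positive on capacity, negative on age) are the analogue of the paper's finding; in practice the first-stage SFA scores would replace the synthetic `eff` here.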

  9. Energy-efficient cooking methods

    Energy Technology Data Exchange (ETDEWEB)

    De, Dilip K. [Department of Physics, University of Jos, P.M.B. 2084, Jos, Plateau State (Nigeria); Muwa Shawhatsu, N. [Department of Physics, Federal University of Technology, Yola, P.M.B. 2076, Yola, Adamawa State (Nigeria); De, N.N. [Department of Mechanical and Aerospace Engineering, The University of Texas at Arlington, Arlington, TX 76019 (United States); Ikechukwu Ajaeroh, M. [Department of Physics, University of Abuja, Abuja (Nigeria)

    2013-02-15

    Energy-efficient new cooking techniques have been developed in this research. Using a stove with 649±20 W of power, the minimum heat, specific heat of transformation, and on-stove time required to completely cook 1 kg of dry beans (with water and other ingredients) and 1 kg of raw potato are found to be: 710 kJ, 613 kJ, and 1,144±10 s, respectively, for beans, and 287±12 kJ, 200±9 kJ, and 466±10 s for Irish potato. Extensive research shows that these figures are, to date, the lowest amounts of heat ever used to cook beans and potato, and less than half the energy used in conventional cooking with a pressure cooker. The efficiency of the stove was estimated to be 52.5±2%. We discuss how to further improve cooking efficiency with a normal stove and a solar cooker, and how to better preserve food nutrients. Our method of cooking, when applied globally, is expected to contribute to the Clean Development Mechanism (CDM) potential. The approximate minimum and maximum CDM potentials are estimated to be 7.5 x 10^11 and 2.2 x 10^13 kg of carbon credit annually. A precise estimation of the CDM potential of our cooking method will be reported later.

  10. A hydrogen production experiment by the thermo-chemical and electrolytic hybrid hydrogen production in lower temperature range. System viability and preliminary thermal efficiency estimation

    International Nuclear Information System (INIS)

    Takai, Toshihide; Nakagiri, Toshio; Inagaki, Yoshiyuki

    2008-10-01

    A new experimental apparatus for thermo-chemical and electrolytic hybrid hydrogen production in the lower temperature range (HHLT) was developed, and a hydrogen production experiment was performed to confirm system operability. Hydrogen production efficiency was estimated and technical problems were clarified through the experimental results. Stable operation of the SO3 electrolysis cell and the sulfur dioxide solution electrolysis cell was confirmed during the experiment, and post-operation inspection detected no damage that would affect stable operation. It was found that reducing sulfuric acid circulation and decreasing the cell voltage are the key issues for improving hydrogen production efficiency. (author)

  11. Improving efficiency in stereology

    DEFF Research Database (Denmark)

    Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm

    2013-01-01

    The aim of the study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides compared with traditional methods in a practical application, namely the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned, and a proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than the disector on tissue slides. We conclude that both the proportionator and the autodisector on virtual slides may improve the efficiency of cell counting in stereology.

  12. Distributed fusion estimation for sensor networks with communication constraints

    CERN Document Server

    Zhang, Wen-An; Song, Haiyu; Yu, Li

    2016-01-01

    This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties, such as delays, packet losses, and asynchronous information, are presented. All results are presented as algorithms, which makes them convenient for practical applications.

  13. Coordination of Energy Efficiency and Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison

    2010-01-29

    This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries - which consume more than 70 percent of the nation's natural gas and electricity - is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that 'the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW' by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.

  14. Oil pipeline energy consumption and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hooker, J.N.

    1981-01-01

    This report describes an investigation of the energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and energy-intensity estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.

  15. Efficiency of Finnish power transmission network companies

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The Finnish Energy Market Authority has investigated the efficiency of power transmission network companies. The results show that the efficiency-improvement potential of the branch is 402 million FIM, corresponding to about 15% of the branch's total costs and 7.3% of its turnover. The Energy Market Authority supervises the reasonableness of power transmission prices and will use the results of the research in that supervision. The research was carried out by the Quantitative Methods Research Group of the Helsinki School of Economics. The main objective of the research was to create an efficiency estimation method for the electric power distribution network business suited to Finnish conditions. Data from 1998 was used as the basic material. Twenty-one of the 102 power distribution network operators were estimated to be fully efficient. The highest possible efficiency rating was 100, and the average rating across all operators was 76.9, with a minimum of 42.6

  16. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  17. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation equivariant estimator can improve on it up to these orders.

  18. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de
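The recursive character of such sequential estimation can be illustrated with a plain recursive-least-squares update, which refines the parameter estimate and its covariance one observation at a time (a generic numpy sketch, not the authors' spline-based algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# fit y = a + b*t online; true values a=2, b=3, noisy observations
theta = np.zeros(2)           # running estimate [intercept, slope]
P = np.eye(2) * 1e3           # vague prior covariance

for t in np.linspace(0.0, 1.0, 200):
    x = np.array([1.0, t])                       # regressor for this sample
    y = 2.0 + 3.0 * t + rng.normal(scale=0.1)    # new noisy observation
    K = P @ x / (1.0 + x @ P @ x)                # gain vector
    theta = theta + K * (y - x @ theta)          # update estimate
    P = P - np.outer(K, x @ P)                   # update covariance
```

Each step costs O(d^2) regardless of how many points have been seen, which is what makes such schemes attractive for online experimentation.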

  19. Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    International Nuclear Information System (INIS)

    Klasson, Niklas; Olsson, Erik; Rudemo, Mats; Eckerström, Carl; Malmgren, Helge; Wallin, Anders

    2015-01-01

    Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to segment only a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, the orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm, and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), the orientation of the intracranial areas and the interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
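The scheme evaluated here — measure a few linearly spaced cross-sectional areas, interpolate them with a cubic spline, and integrate to get a volume — can be sketched on a synthetic area profile (illustrative numbers, not the study's data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# synthetic "intracranial area" profile along one axis, in mm^2, 1 mm slices
z = np.arange(0.0, 181.0, 1.0)
area = 14000.0 * np.maximum(0.0, 1.0 - ((z - 90.0) / 90.0) ** 2)

# reference volume from all slices (trapezoidal rule)
true_volume = float(np.sum(0.5 * (area[1:] + area[:-1]) * np.diff(z)))

# estimate from slices spaced 15 mm apart, interpolated by a cubic spline
step = 15
zs, As = z[::step], area[::step]
est_volume = float(CubicSpline(zs, As).integrate(zs[0], zs[-1]))

rel_err = abs(est_volume - true_volume) / true_volume
```

With a smooth profile, even the 15 mm subsample recovers the full-resolution volume to well under a percent, which mirrors the study's finding that validity degrades only slowly with spacing.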

  20. Assessment of the achieved savings from induction motors energy efficiency labeling in Brazil

    International Nuclear Information System (INIS)

    Bortoni, E.C.; Nogueira, L.A.H.; Cardoso, R.B.; Haddad, J.; Souza, E.P.; Dias, M.V.X.; Yamachita, R.A.

    2013-01-01

    Highlights: • We model the influence of increased motor efficiency. • The amount of saved energy is estimated. • The work deals with the “measurement” of non-consumed energy. • The influence of motor useful life is taken into account. • The decline of efficiency over the motor's life is also taken into account. - Abstract: Since 1995 Brazil has been applying its labeling program to increase the efficiency of many household appliances and equipment. From 2003 on, induction motors have also been receiving the PROCEL prize, which helped push motor efficiencies beyond the limits established by the labeling program. This work therefore presents the development of a model to estimate the savings obtained from the usage of PROCEL endorsement labels on standard and energy-efficient motors. The estimated peak demand reduction is also inferred. The developed model uses sales information and a discard function to estimate the Brazilian motor stock. Efficiency loading and aging factors are employed to estimate motor consumption

  1.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, for instance through simulation or discretization. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  2. Estimation of Economic Efficiency of Regional Touristic Complex

    Directory of Open Access Journals (Sweden)

    Kurchenkov Vladimir Viktorovich

    2015-09-01

    Full Text Available The article describes the features of the development of the regional touristic complex in modern conditions and determines directions for realizing the potential of the regional market of tourist services. The authors reveal the multiplicative interrelation for analyzing the interaction of the primary and secondary sectors of the regional market of tourist services. The key indicators of efficiency are outlined, and the extent of their relevance for assessing the potential of international tourism in the region is revealed. The authors calculate relative indicators reflecting the dynamics of incomes from inbound, outbound and domestic tourism in relation to the total income from tourism activities in the region during the reporting period, usually one calendar year. On the basis of these parameters, the regions of the Southern Federal District are classified in terms of tourist attractiveness. The authors determine the reasons for the low tourist attractiveness of the Volgograd region in comparison with other regions of the Southern Federal District. It is substantiated that the potential for expanding tourism activity is not fully realized today in the Volgograd region. A technique for analyzing and evaluating the effectiveness of the regional touristic complex on the basis of a cluster approach is suggested. For analyzing the effectiveness of the regional tourism cluster, the authors propose to use indicators that reflect the overall performance of the cluster, characterize the impact of cluster development on the area or the regional market, and evaluate the performance of each of the companies cooperating within the cluster. The article contains recommendations to the regional authorities on improving the efficiency of the regional touristic complex over the short and long term.

  3. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Motor-operated gearbox efficiency

    International Nuclear Information System (INIS)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D.; Weidenhamer, G.H.

    1996-01-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer

  5. Motor-operated gearbox efficiency

    Energy Technology Data Exchange (ETDEWEB)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.

    1996-12-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.

  6. Motor-operator gearbox efficiency

    International Nuclear Information System (INIS)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D.

    1996-01-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, we compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators we tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer

  7. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
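As a rough illustration of the surrogate idea (not the authors' PCE or GPR implementations), here is a minimal numpy Gaussian-process-regression surrogate trained on a cheap stand-in "forward model":

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def forward(x):
    """Cheap stand-in for an expensive simulator."""
    return np.sin(2.0 * np.pi * x)

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, 25)                      # training designs
y = forward(X) + rng.normal(scale=0.01, size=25)   # noisy simulator outputs

noise = 1e-4                                       # jitter / observation noise
K = rbf(X, X) + noise * np.eye(25)
alpha = np.linalg.solve(K, y)                      # precomputed weights

Xs = np.linspace(0.0, 1.0, 101)
mean = rbf(Xs, X) @ alpha                          # surrogate prediction
rmse = np.sqrt(np.mean((mean - forward(Xs)) ** 2))
```

Once `alpha` is computed, each surrogate evaluation is a dot product, which is the point: inside an importance-sampling or MCMC loop the surrogate replaces thousands of expensive forward runs, at the cost of accuracy only near the training designs — consistent with the authors' observation that ROMs work best near the MAP estimate.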

  8. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...

  9. EPA’s Travel Efficiency Method (TEAM) AMPO Presentation

    Science.gov (United States)

    Presentation describing EPA’s Travel Efficiency Assessment Method (TEAM), which assesses potential travel efficiency strategies for reducing travel activity and emissions; it includes estimates of reductions in Vehicle Miles Traveled in four different geographic areas.

  10. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad; Valstar, Johan R.; Hoteit, Ibrahim

    2014-01-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.

  11. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
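The EnKF building block underlying such hybrid schemes can be sketched as a single stochastic analysis step in numpy (toy dimensions and a deliberately biased forecast; this is the standard perturbed-observation EnKF update, not the authors' hybrid EnKF-OI):

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, N = 10, 3, 50                        # state dim, obs dim, ensemble size
truth = np.linspace(1.0, 2.0, n)
obs_idx = [0, 4, 9]                        # observe three state components
H = np.zeros((m, n))
H[np.arange(m), obs_idx] = 1.0
R = 1e-4 * np.eye(m)                       # observation-error covariance

# biased forecast ensemble (bias 1.0, spread 0.5) and one noisy observation
ens = truth[None, :] + 1.0 + rng.normal(scale=0.5, size=(N, n))
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)

# analysis step: gain from the sample covariance, perturbed observations
A = ens - ens.mean(axis=0)
Pf = A.T @ A / (N - 1)                     # forecast (background) covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
y_pert = y[None, :] + rng.multivariate_normal(np.zeros(m), R, size=N)
ens_a = ens + (y_pert - ens @ H.T) @ K.T

err_f = np.abs(ens.mean(0) - truth)[obs_idx].mean()   # forecast error
err_a = np.abs(ens_a.mean(0) - truth)[obs_idx].mean() # analysis error
```

The sample covariance `Pf` is exactly where small ensembles bite (under-sampling and spurious correlations), which is the limitation the hybrid EnKF-OI formulation is designed to mitigate.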

  12. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama

    2015-04-23

    In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.

  13. Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach

    KAUST Repository

    Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia

    2015-01-01

    In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
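The trade-off can be seen in a toy single-link model (all parameters hypothetical; a simple scalarized stand-in for the paper's multiobjective formulation):

```python
import numpy as np

gamma = 50.0                       # channel gain-to-noise ratio, 1/W (assumed)
Pc = 0.2                           # circuit power, W (assumed)
p = np.linspace(0.01, 2.0, 400)    # transmit-power grid, W

se = np.log2(1.0 + gamma * p)      # spectral efficiency, bit/s/Hz
ee = se / (p + Pc)                 # energy efficiency, bit/s/Hz per W

# scalarization: trade-off parameter w blends the two normalized objectives
w = 0.5
obj = w * se / se.max() + (1.0 - w) * ee / ee.max()

p_se = p[np.argmax(se)]            # SE alone always asks for full power
p_ee = p[np.argmax(ee)]            # EE alone peaks at an interior power
p_w = p[np.argmax(obj)]            # the compromise lies in between
```

Sweeping `w` from 0 to 1 moves the operating point from the EE-optimal power toward the SE-optimal (full) power, which is the flexible priority switching the paper advocates.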

  14. Efficiency gains, bounds, and risk in finance

    NARCIS (Netherlands)

    Sarisoy, Cisil

    2015-01-01

    This thesis consists of three chapters. The first chapter analyzes efficiency gains in the estimation of expected returns based on asset pricing models and examines the economic implications of such gains in portfolio allocation exercises. The second chapter provides nonparametric efficiency bounds

  15. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study, we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators as special cases of the generalized estimator, using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are derived to prove the efficiency of the proposed estimators. A numerical illustration using three populations is included to support the results.
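
    A minimal sketch of the basic ingredients above, assuming a 1-in-k systematic sample and the classical ratio estimator that exploits a known auxiliary mean (the generalized non-response estimator itself is not reproduced here; the toy population is made up):

```python
import random

def systematic_sample(N, n, seed=0):
    """1-in-k systematic sample of indices with a random start."""
    k = N // n
    start = random.Random(seed).randrange(k)
    return list(range(start, N, k))[:n]

def ratio_estimate(y, x, X_bar):
    """Classical ratio estimator of the population mean of y:
    ybar_R = ybar * (X_bar / xbar), using the known auxiliary mean X_bar."""
    ybar = sum(y) / len(y)
    xbar = sum(x) / len(x)
    return ybar * X_bar / xbar

# toy population where y is roughly proportional to the auxiliary variable x
N = 1000
X = [i + 1 for i in range(N)]
Y = [2.0 * xi + random.Random(xi).uniform(-5, 5) for xi in X]
idx = systematic_sample(N, 50)
est = ratio_estimate([Y[i] for i in idx], [X[i] for i in idx], sum(X) / N)
print(round(est, 2), round(sum(Y) / N, 2))
```

    Because y is nearly proportional to x, the ratio adjustment removes most of the sampling error relative to the plain sample mean.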

  16. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used to describe and analyse relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates via conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of the counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by specifying only the predictors involved, not any particular model structure. Consequently, they do not run the same risks of model mis-specification, and the resultant estimation biases, as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
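
    The idea of reading conditional probabilities, and counterfactual effect sizes, off a nonparametric learner can be sketched with a deliberately simple stand-in. A k-nearest-neighbour probability estimate replaces the random forest here purely for self-containedness; the logistic data-generating model and all coefficients are made up for illustration.

```python
import math, random

rng = random.Random(1)

def knn_prob(data, x, k=100):
    """Nonparametric estimate of P(Y=1 | X=x): the fraction of ones among
    the k nearest training points (a stand-in 'probability machine')."""
    nearest = sorted(data, key=lambda d: sum((a - b) ** 2 for a, b in zip(d[0], x)))[:k]
    return sum(y for _, y in nearest) / k

def simulate(n):
    """Logistic data-generating model with a binary exposure z and covariate c."""
    out = []
    for _ in range(n):
        z = float(rng.random() < 0.5)
        c = rng.gauss(0, 1)
        p = 1 / (1 + math.exp(-(-0.5 + 1.5 * z + 0.5 * c)))
        out.append(((z, c), int(rng.random() < p)))
    return out

train = simulate(4000)
# counterfactual risk difference at c = 0: P(Y=1|z=1,c=0) - P(Y=1|z=0,c=0)
rd = knn_prob(train, (1.0, 0.0)) - knn_prob(train, (0.0, 0.0))
true_rd = 1 / (1 + math.exp(-1.0)) - 1 / (1 + math.exp(0.5))
print(round(rd, 2), round(true_rd, 2))
```

    The effect size is obtained exactly as the abstract describes: toggle the exposure while holding the covariate fixed and difference the two estimated conditional probabilities, with no model structure specified beyond the predictor list.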

  17. The efficiency of aerodynamic force production in Drosophila.

    Science.gov (United States)

    Lehmann, F O

    2001-12-01

    Total efficiency of aerodynamic force production in insect flight depends on both the efficiency with which flight muscles turn metabolic energy into muscle mechanical power and the efficiency with which this power is converted into aerodynamic flight force by the flapping wings. Total efficiency has been estimated in tethered flying fruit flies Drosophila by modulating their power expenditures in a virtual reality flight simulator while simultaneously measuring stroke kinematics, locomotor performance and metabolic costs. During flight, muscle efficiency increases with increasing flight force production, whereas aerodynamic efficiency of lift production decreases with increasing forces. As a consequence of these opposite trends, total flight efficiency in Drosophila remains approximately constant within the kinematic working range of the flight motor. Total efficiency is broadly independent of different profile power estimates and typically amounts to 2-3%. The animal achieves maximum total efficiency near hovering flight conditions, when the beating wings produce flight forces that are equal to the body weight of the insect. It remains uncertain whether this small advantage in total efficiency during hovering flight was shaped by evolutionary factors or results from functional constraints on both the production of mechanical power by the indirect flight muscles and the unsteady aerodynamic mechanisms in flapping flight.
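
    The decomposition used in this abstract, total efficiency as the product of muscle efficiency (mechanical/metabolic power) and aerodynamic efficiency (aerodynamic/mechanical power), can be written out directly. The two component values below are illustrative placeholders chosen only so that the product lands in the reported 2-3% range:

```python
def total_efficiency(muscle_eff, aero_eff):
    """Total efficiency = (mechanical/metabolic power) * (aerodynamic/mechanical power)."""
    return muscle_eff * aero_eff

# illustrative numbers only: ~10% muscle efficiency and ~25% aerodynamic
# efficiency combine to 2.5% overall
print(round(total_efficiency(0.10, 0.25), 3))
```

    The product form also explains the abstract's observation: if one factor rises while the other falls over the kinematic working range, their product can stay approximately constant.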

  18. Efficiency of clinical and combined diagnosis of breast cancer

    International Nuclear Information System (INIS)

    Solov'ev, I.E.

    1986-01-01

    Approaches to the clinical, instrumental and laboratory diagnosis of breast cancer are described. The efficiency of clinical examination, mammography, cytological examination, ultrasonic and radioisotope diagnosis, and some biochemical tests is estimated. It is concluded that combined diagnosis of breast cancer, especially of its early forms, is advisable. The promise of the polyamine test in the diagnosis of primary breast cancer and in estimating the efficiency of its treatment is noted.

  19. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower level control loops without operational considerations at the system level. This paper takes a model based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, both off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is performed to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems for a typical belt conveyor system are formulated and solved in simulation for a case study.
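
    The off-line identification step above amounts to fitting a model that is linear in its lumped coefficients. The functional form P(V, T) = k1*V + k2*T + k3*V*T + k4 used below is a hypothetical stand-in for the paper's four-coefficient ISO 5048 model, fitted by ordinary least squares via the normal equations:

```python
import random

def lstsq(A, b):
    """Solve the normal equations (A^T A) x = A^T b by Gauss-Jordan elimination."""
    n = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         + [sum(A[r][i] * b[r] for r in range(len(A)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# hypothetical linear-in-parameters conveyor model:
#   P(V, T) = k1*V + k2*T + k3*V*T + k4   (V: belt speed, T: feed rate)
true = [120.0, 35.0, 8.0, 500.0]
rng = random.Random(0)
rows, power = [], []
for _ in range(200):
    V, T = rng.uniform(1, 5), rng.uniform(100, 600)
    rows.append([V, T, V * T, 1.0])
    power.append(sum(k * x for k, x in zip(true, rows[-1])) + rng.gauss(0, 50))
est = lstsq(rows, power)
print([round(k, 1) for k in est])
```

    With noisy simulated measurements the four lumped coefficients are recovered close to their true values, which is the essence of the off-line scheme; an on-line variant would update the same fit recursively.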

  20. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
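
    A toy simulation makes the comparison concrete. Under Poisson sampling each unit enters the sample independently, so the realized sample size is random; the adjusted (ratio-of-means) estimator rescales the Horvitz-Thompson total by expected over realized sample size. The setup below is the textbook case where the adjustment helps, whereas the record above shows this advantage is not universal:

```python
import random

rng = random.Random(42)
N = 500
y = [10 + 0.1 * i + rng.uniform(-2, 2) for i in range(N)]
pi = [0.1] * N                       # equal inclusion probabilities
Y_true = sum(y)
n_exp = sum(pi)                      # expected sample size

u_est, a_est = [], []
for _ in range(2000):
    s = [i for i in range(N) if rng.random() < pi[i]]
    if not s:
        continue
    ht = sum(y[i] / pi[i] for i in s)        # unadjusted (Horvitz-Thompson)
    u_est.append(ht)
    a_est.append(ht * n_exp / len(s))        # adjusted: ratio-of-means form

def mse(est):
    return sum((e - Y_true) ** 2 for e in est) / len(est)

print(round(mse(u_est) / mse(a_est), 2))     # relative efficiency
```

    Here the adjustment removes the sample-size noise and the MSE ratio is well above one; the paper's point is that constructions exist where this gain largely disappears.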

  1. Estimating the energy use of high definition games consoles

    International Nuclear Information System (INIS)

    Webb, A.; Mayers, K.; France, C.; Koomey, J.

    2013-01-01

    As the energy use of games consoles has risen, due to increased ownership and use and improved performance and functionality, various governments have shown an interest in ways to improve their energy efficiency. Estimates of console energy use vary widely between 32 and 500 kWh/year. Most such estimates are unreliable as they are based on incorrect assumptions and unrepresentative data. To address the shortcomings of existing estimates of console energy use, this study collates, normalises and analyses available data for power consumption and usage. The results show that the average energy use of high definition games consoles (sold between 2005 and 2011 inclusive) can be estimated at 102 kWh/year, and 64 kWh/year for new console models on sale in early 2012. The calculations herein provide representative estimates of console energy use during this period, including a breakdown of the relative contribution of different usage modes. These results could be used as a baseline to evaluate the potential energy savings from efficiency improvements in games consoles, and also to assess the potential effectiveness of any proposed energy efficiency standards. Use of accurate data will help ensure the implementation of the most effective efficiency policies and standards. - Highlights: • Estimates of games console energy use vary significantly. • New energy use estimates calculated for high definition games consoles. • Consoles currently on sale use 37% less energy than earlier models. • Gaming accounts for over 50% of console energy use. • Further research regarding console usage is needed, particularly inactive time
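
    An annual figure such as the 102 kWh/year estimate above is obtained by summing power draw times duration over usage modes. The mode powers and hours below are illustrative placeholders, not the study's normalized data:

```python
# Annual energy estimate as a sum over usage modes:
#   E = sum(power_W * hours_per_day) * 365 / 1000   (kWh/year)
modes = {
    "gaming":  (120.0, 1.2),   # (watts, hours/day) -- hypothetical values
    "media":   (90.0, 0.5),
    "standby": (1.0, 22.3),
}

kwh_year = sum(w * h for w, h in modes.values()) * 365 / 1000.0
print(round(kwh_year, 1))
```

    The same breakdown gives the relative contribution of each mode, which is how a statement like "gaming accounts for over 50% of console energy use" is derived.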

  2. On Estimating Quantiles Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Berger Yves G.

    2015-03-01

    Full Text Available We propose a transformation-based approach for estimating quantiles using auxiliary information. The proposed estimators can be easily implemented using a regression estimator. We show that the proposed estimators are consistent and asymptotically unbiased. The main advantage of the proposed estimators is their simplicity. Although the proposed estimators are not necessarily more efficient than their competitors, they offer a good compromise between accuracy and simplicity. They can be used under single and multistage sampling designs with unequal selection probabilities. A simulation study supports our findings and shows that the proposed estimators are robust and of acceptable accuracy compared to alternative estimators, which can be more computationally intensive.

  3. Male pre- and post-pubertal castration effect on live weight, components of empty body weight, estimated nitrogen excretion and efficiency in Piemontese hypertrofic cattle

    Directory of Open Access Journals (Sweden)

    Davide Biagini

    2011-04-01

    Full Text Available To evaluate the effect of sexual neutering and age of castration on empty body weight (EBW) components and on estimated nitrogen excretion and efficiency, a trial was carried out on 3 groups of double-muscled Piemontese calves: early castrated (EC, 5th month of age), late castrated (LC, 12th month of age) and intact males (IM, control group). Animals were fed at the same energy and protein level and slaughtered at the 18th month of age. Live and slaughtering performances and EBW components were recorded, whereas N excretion was calculated as the difference between diet and weight gain N content. In live and slaughtering performances, IM showed higher final, carcass and total meat weight than EC and LC (P<0.01). In EBW components, IM showed higher blood and head weight than EC and LC (P<0.01 and 0.05, respectively), and differences were found between EC and LC for head weights (P<0.01). IM showed higher body crude protein (BCP) than EC and LC (P<0.01 and 0.05, respectively), but the BCP/EBW ratio was higher only in IM compared with EC (P<0.05). Estimated N daily gain was higher in IM than EC and LC (P<0.01). Only LC showed higher excretion than IM (P<0.05), and N efficiency was higher in IM than EC and LC (P<0.05 and 0.01, respectively). In conclusion, for Piemontese hypertrophied cattle, castration significantly increases N excretion (+7%) and reduces N efficiency (-15%), leading to a lower level of sustainability.

  4. Estimation of greenhouse gas (GHG) emission and energy use efficiency (EUE) analysis in rainfed canola production (case study: Golestan province, Iran)

    International Nuclear Information System (INIS)

    Kazemi, Hossein; Bourkheili, Saeid Hassanpour; Kamkar, Behnam; Soltani, Afshin; Gharanjic, Kambiz; Nazari, Noor Mohammad

    2016-01-01

    Increasing use of energy inputs in the agricultural sector has led to numerous environmental concerns such as greenhouse gas (GHG) emissions, high consumption of non-renewable resources, loss of biodiversity and environmental pollution. This study aimed to analyze the energy use efficiency (EUE) and to estimate GHG emissions from rainfed canola production systems (RCPSs) in Iran. Data were collected from 35 farms in Golestan province (northeast of Iran) through face-to-face questionnaires and the statistical yearbooks of 2014. The amount of GHG emissions (per hectare) from inputs used in RCPSs was calculated using CO2 emission coefficients of agricultural inputs. Results showed that the EUE and net energy (NE) were 3.44 and 35,537.81 MJ ha−1, respectively. The values of these indices indicated that the surveyed fields are approximately efficient in the use of energy for canola production. The highest share of energy consumption belonged to nitrogen fertilizer (42.09%) followed by diesel fuel (39.81%). In rainfed canola production, GHG emission was estimated at 1009.91 kg CO2 equivalent per hectare. Based on the results, nitrogen fertilizer (44.15%), diesel fuel (30.16%) and machinery (14.49%) for field operations had the highest shares of GHG emission. The total energy consumed by inputs could be classified as direct energy (40.09%) and indirect energy (59.91%), or renewable energy (2.02%) and non-renewable energy (97.98%). These results demonstrate that the share of renewable energies in canola production is very low in the studied region and that agriculture in Iran is heavily dependent on non-renewable energies. In this study, the energy use status in RCPSs has been analyzed and the main causes involved have been interpreted. - Highlights: • Fertilizers had the highest share in GHG emission. • The share of renewable energy was low in canola production. • Canola production is efficient in Iran.
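
    The bookkeeping behind EUE and per-hectare GHG estimates is a sum of input quantities times energy and emission coefficients. All quantities and coefficients below are illustrative placeholders, not the study's data; only the structure of the calculation follows the abstract:

```python
# per-hectare energy and GHG bookkeeping (all numbers hypothetical)
inputs = {            # name: (quantity/ha, MJ per unit, kg CO2-eq per unit)
    "nitrogen_kg": (120.0, 66.1, 4.6),
    "diesel_l":    (110.0, 47.8, 2.8),
    "seed_kg":     (6.0, 25.0, 0.9),
}
yield_kg, energy_per_kg_output = 2000.0, 25.0

energy_in = sum(q * e for q, e, _ in inputs.values())   # MJ/ha
energy_out = yield_kg * energy_per_kg_output            # MJ/ha
eue = energy_out / energy_in                            # energy use efficiency
net_energy = energy_out - energy_in                     # MJ/ha
ghg = sum(q * c for q, _, c in inputs.values())         # kg CO2-eq/ha
print(round(eue, 2), round(net_energy, 1), round(ghg, 1))
```

    Input shares of energy or emissions (e.g. nitrogen's 42-44% in the study) fall out of the same table by dividing each term by the corresponding total.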

  5. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    Science.gov (United States)

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
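
    The Monte Carlo design described above can be sketched as a data generator: Cobb-Douglas output with a symmetric noise term v and a one-sided inefficiency term u, so that true technical efficiency is exp(-u). The coefficients and distributions below are illustrative, not the paper's exact experimental settings:

```python
import math, random

rng = random.Random(7)

def simulate_firm():
    """One observation from a DEA/SFA-style Monte Carlo design:
    Cobb-Douglas output, symmetric noise v, one-sided inefficiency u."""
    x1, x2 = rng.uniform(1, 10), rng.uniform(1, 10)
    v = rng.gauss(0, 0.1)                  # random error
    u = abs(rng.gauss(0, 0.3))             # half-normal technical inefficiency
    log_y = 0.5 + 0.4 * math.log(x1) + 0.5 * math.log(x2) + v - u
    return (x1, x2, math.exp(log_y), math.exp(-u))   # last item: true efficiency

firms = [simulate_firm() for _ in range(500)]
mean_eff = sum(f[3] for f in firms) / len(firms)
print(round(mean_eff, 2))   # average true technical efficiency in the sample
```

    Because the true efficiencies are known by construction, any estimator (DEA, SFA or a quantile-regression frontier) can be scored against them, which is exactly the comparison the paper runs.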

  6. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  7. An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels

    Directory of Open Access Journals (Sweden)

    Jian Li

    2005-04-01

    Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used for code-timing estimation for DS-CDMA systems over resolvable multipath channels in closed form. Simulation results show that MLSML can provide a high correct-acquisition probability and high estimation accuracy. Simulation results also show that MLSML can have a very good near-far resistance capability due to employing a data model similar to that used for adaptive array processing, where strong interferences can be suppressed.

  8. Efficient spectral estimation by MUSIC and ESPRIT with application to sparse FFT

    Directory of Open Access Journals (Sweden)

    Daniel ePotts

    2016-02-01

    Full Text Available In spectral estimation, one has to determine all parameters of an exponential sum for finitely many (noisy) sampled data of this exponential sum. Frequently used methods for spectral estimation are MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique). For a trigonometric polynomial of large sparsity, we present a new sparse fast Fourier transform by shifted sampling and using MUSIC resp. ESPRIT, where the ESPRIT based method has lower computational cost. Later this technique is extended to a new reconstruction of a multivariate trigonometric polynomial of large sparsity for given (noisy) values sampled on a reconstructing rank-1 lattice. Numerical experiments illustrate the high performance of these procedures.

  9. DETERMINING EFFICIENCY OF INVESTMENT BANKS AFTER FINANCIAL CRISIS BY BOOTSTRAP DATA ENVELOPMENT ANALYSIS (BDEA): A CASE OF TURKEY

    Directory of Open Access Journals (Sweden)

    Funda H. Sezgin

    2012-01-01

    Full Text Available Data Envelopment Analysis (DEA) is a mathematical programming formulation based technique that provides an efficient frontier to suggest an estimate of the relative efficiency of each decision making unit (DMU) in a problem set. DEA is developed around the concept of evaluating the efficiency of a decision alternative based on its performance in creating outputs in terms of input consumption. Besides its advantages, criticisms about the potential bias of DEA efficiency estimates have arisen. One criticism of DEA concerns the sampling variation of the estimated frontier, which may affect the accuracy of results. The bootstrap method is a statistical resampling method used to perform inference in complex problems. The basic idea of the bootstrap method is to approximate the sampling distribution of the estimator by using the empirical distribution of resampled estimates obtained from Monte Carlo resampling. DEA estimators introduced an approach based on bootstrap techniques to correct and estimate the bias of the DEA efficiency indicators. The purpose of this study is to measure the efficiency of a small number of investment banks in Turkey after the financial crisis in 2010 with the Bootstrap DEA (BDEA).
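
    The bootstrap bias-correction idea at the core of BDEA can be sketched generically: resample the data, recompute the estimator, and take the mean resampled estimate minus the original estimate as the bias. The "efficiency score" below is a toy stand-in (the sample maximum, a biased frontier-like estimator), not a DEA computation:

```python
import random

rng = random.Random(3)

def bootstrap_bias(estimator, sample, reps=500):
    """Bootstrap bias estimate: mean of resampled estimates minus the
    original estimate; the bias-corrected value is est - bias."""
    est = estimator(sample)
    boot = []
    for _ in range(reps):
        resample = [sample[rng.randrange(len(sample))] for _ in sample]
        boot.append(estimator(resample))
    bias = sum(boot) / reps - est
    return est, bias, est - bias

# toy 'efficiency score': the sample maximum, a biased frontier-like estimator
scores = [rng.uniform(0.4, 1.0) for _ in range(40)]
est, bias, corrected = bootstrap_bias(max, scores, reps=500)
print(round(est, 3), round(bias, 3), round(corrected, 3))
```

    Just as with DEA frontiers, the resampled maxima never exceed the original one, so the estimated bias is negative and the correction pushes the estimate toward the true boundary.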

  10. Superefficient Refrigerators: Opportunities and Challenges for Efficiency Improvement Globally

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Nihar; Park, Won Young; Bojda, Nicholas; McNeil, Michael A.

    2014-08-01

    As an energy-intensive mainstream product, residential refrigerators present a significant opportunity to reduce electricity consumption through energy efficiency improvements. Refrigerators expend a considerable amount of electricity during normal use, typically consuming between 100 and 1,000 kWh of electricity per annum. This paper presents the results of a technical analysis of refrigerators done in support of the Super-efficient Equipment and Appliance Deployment (SEAD) initiative. Beginning from a base case representative of the average unit sold in India, we analyze efficiency improvement options and their corresponding costs to build a cost-versus-efficiency relationship. We then consider the design improvement options that are known to be the most cost effective and that can improve efficiency given current design configurations. We also analyze and present additional super-efficient options, such as vacuum-insulated panels. We estimate the cost of conserved electricity for the various options, allowing flexible program design for market transformation programs toward higher efficiency. We estimate that ~160 TWh/year of energy savings are cost effective in 2030, indicating significant potential for efficiency improvement in refrigerators in SEAD economies and China.
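
    The cost of conserved electricity mentioned above is typically computed as the annualized incremental cost of a design option divided by its annual energy savings, using a capital recovery factor to annualize the up-front cost. The dollar and savings figures below are hypothetical:

```python
def cost_of_conserved_energy(incremental_cost, annual_savings_kwh,
                             discount_rate=0.07, lifetime_years=15):
    """CCE = annualized incremental cost / annual energy savings.

    The capital recovery factor annualizes the up-front cost:
        CRF = r*(1+r)**n / ((1+r)**n - 1)
    """
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
           ((1 + discount_rate) ** lifetime_years - 1))
    return incremental_cost * crf / annual_savings_kwh

# e.g. a $40 design improvement (hypothetical) saving 100 kWh/year
print(round(cost_of_conserved_energy(40.0, 100.0), 4))   # $/kWh
```

    Comparing CCE against the local electricity tariff is what makes an option "cost effective" in the sense used by the abstract.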

  11. COST EFFICIENCY LEVEL OF RURAL BANKS IN EAST JAVA

    Directory of Open Access Journals (Sweden)

    Abdul Mongid

    2017-03-01

    Full Text Available Abstract: The Rural Bank (BPR) is an important part of the financial service industry in Indonesia. Their pivotal role in lending to SMEs in rural areas makes their existence very strategic for rural development. However, due to its operational scale, a rural bank charges higher interest rates than a commercial bank. This study estimated the cost efficiency of rural banks using a parametric approach. The result found that rural bank efficiency was very high. The two-year cost efficiency estimated using Frontier 4.1 was 95% on average, with a median of 100%. The lowest cost efficiency level was 32%. This meant the cost inefficiency of the banks under investigation was around 10%. The cost efficiency level in 2006 was on average 95% and the median was 100%, meaning that 50% or more of the observations enjoyed 100% cost efficiency. The minimum was only 67%, meaning they operated at a very efficient level, leaving only 5% inefficiency. In 2007, a dramatic change in the efficiency level occurred: the average efficiency dropped to 89.9% due to increases in the interest rate and price level.

  12. Effects of heterogeneity on bank efficiency scores

    NARCIS (Netherlands)

    Bos, J. W. B.; Koetter, M.; Kolari, J. W.; Kool, C. J. M.

    2009-01-01

    Bank efficiency estimates often serve as a proxy of managerial skill since they quantify sub-optimal production choices. But such deviations can also be due to omitted systematic differences among banks. In this study, we examine the effects of heterogeneity on bank efficiency scores. We compare

  13. Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation

    Directory of Open Access Journals (Sweden)

    M. Agatonović

    2012-12-01

    Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.

  14. Interactive inverse kinematics for human motion estimation

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome

    2009-01-01

    We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion...... to significantly speed up the particle filtering. It should be stressed that the observation part of the system has not been our focus, and as such is described only from a sense of completeness. With our approach it is possible to construct a robust and computationally efficient system for human motion estimation....

  15. Gasoline taxes or efficiency standards? A heterogeneous household demand analysis

    International Nuclear Information System (INIS)

    Liu, Weiwei

    2015-01-01

    Using detailed consumer expenditure survey data and a flexible semiparametric dynamic demand model, this paper estimates the price elasticity and fuel efficiency elasticity of gasoline demand at the household level. The goal is to assess the effectiveness of gasoline taxes and vehicle fuel efficiency standards on fuel consumption. The results reveal substantial interaction between vehicle fuel efficiency and the price elasticity of gasoline demand: the improvement of vehicle fuel efficiency leads to lower price elasticity and weakens consumers’ sensitivity to gasoline price changes. The offsetting effect also differs across households due to demographic heterogeneity. These findings imply that when gasoline taxes are in place, tightening efficiency standards will partially offset the strength of taxes on reducing fuel consumption. - Highlights: • Model household gasoline demand using a semiparametric approach. • Estimate heterogeneous price elasticity and fuel efficiency elasticity. • Assess the effectiveness of gasoline taxes and efficiency standards. • Efficiency standards offset the impact of gasoline taxes on fuel consumption. • The offsetting effect differs by household demographics

  16. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2002-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent

  17. Ownership and technical efficiency of hospitals: evidence from Ghana using data envelopment analysis.

    Science.gov (United States)

    Jehu-Appiah, Caroline; Sekidde, Serufusa; Adjuik, Martin; Akazili, James; Almeida, Selassi D; Nyonator, Frank; Baltussen, Rob; Asbu, Eyob Zere; Kirigia, Joses Muthuri

    2014-04-08

    In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals comprising 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores were regressed against the hospital ownership variable using a Tobit model. This was a retrospective study. In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9%, and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%), followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also recorded the lowest technical efficiency scores (27.4%), implying they include some of the most inefficient hospitals. Regarding regional performance, Northern region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. It would be prudent for policy-makers to examine the

  18. Theoretical and observational assessments of flare efficiencies

    International Nuclear Information System (INIS)

    Leahey, D.M.; Preston, K.; Strosher, M.

    2000-01-01

    During the processing of hydrocarbon materials, gaseous wastes are flared in an effort to burn the waste material completely and therefore leave behind very few by-products. Complete combustion, however, is rarely achieved because entrainment of air into the region of combusting gases restricts flame sizes to less than optimum values. The resulting flames are often too small to dissipate the amount of heat associated with complete (100 per cent) combustion efficiency. Flaring, therefore, often results in emissions of gases with more complex molecular structures than just carbon dioxide and water. Polycyclic aromatic hydrocarbons and volatile organic compounds, which are indicative of incomplete combustion, are often associated with flaring. This theoretical study of flame efficiencies was based on knowledge of the full range of chemical reactions and associated kinetics. In this study, equations developed by Leahey and Schroeder were used to estimate flame lengths, areas and volumes as functions of flare stack exit velocity, stoichiometric mixing ratio and wind speed. This was followed by an estimate of the heat released as part of the combustion process, derived from the flame dimensions together with an assumed flame temperature of 1200 K. Combustion efficiencies were then obtained by taking the ratio of estimated actual heat release values to those associated with complete combustion. It was concluded that combustion efficiency decreases significantly as wind speed increases from 1 to 6 m/s. Beyond that, combustion efficiencies level off at values between 10 and 15 per cent. Propane and ethane were found to burn more efficiently than methane or hydrogen sulfide. 24 refs., 4 tabs., 1 fig., 1 append

  19. Estimating the Cross-Shelf Export of Riverine Materials: Part 2. Estimates of Global Freshwater and Nutrient Export

    Science.gov (United States)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of fresh water, nutrients, and other terrestrially derived materials to the coastal ocean. Where inputs accumulate on the shelf, harmful effects such as hypoxia and eutrophication can result. In contrast, where export to the open ocean is efficient, riverine inputs contribute to global biogeochemical budgets. Assessing the fate of riverine inputs is difficult on a global scale. Global ocean models are generally too coarse to resolve the relatively small-scale features of river plumes. High-resolution regional models have been developed for individual river plume systems, but it is impractical to apply this approach globally to all rivers. Recently, generalized parameterizations have been proposed to estimate the export of riverine fresh water to the open ocean (Izett & Fennel, 2018, https://doi.org/10.1002/2017GB005667; Sharples et al., 2017, https://doi.org/10.1002/2016GB005483). Here the relationships of Izett and Fennel (https://doi.org/10.1002/2017GB005667) are used to derive global estimates of open-ocean export of fresh water and dissolved inorganic silicate, dissolved organic carbon, and dissolved organic and inorganic phosphorus and nitrogen. We estimate that only 15-53% of riverine fresh water reaches the open ocean directly in river plumes; nutrient export is even less efficient because of processing on continental shelves. Due to geographic differences in riverine nutrient delivery, dissolved silicate is the most efficiently exported to the open ocean (7-56.7%), while dissolved inorganic nitrogen is the least efficiently exported (2.8-44.3%). These results are consistent with previous estimates and provide a simple way to parameterize export to the open ocean in global models.

  20. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2001-01-01

    We consider the problem of estimation of the term structure of interest rates. A filtering theory approach is very natural here, with the underlying setup being non-linear and non-Gaussian. Earlier works make use of the Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this

  1. Efficiency snakes and energy ladders: A (meta-)frontier demand analysis of electricity consumption efficiency in Chinese households

    International Nuclear Information System (INIS)

    Broadstock, David C.; Li, Jiajia; Zhang, Dayong

    2016-01-01

    Policy makers presently lack access to quantified estimates – and hence an explicit understanding – of energy consumption efficiency within households, creating a potential gap between true efficiency levels and the efficiency levels that policy makers must necessarily assume in designing and implementing energy policy. This paper attempts to fill this information gap by empirically quantifying electricity consumption efficiency for a sample of more than 7,000 households. Adopting the recently introduced ‘frontier demand function’ due to Filippini and Hunt (2011), but extending it into the metafrontier context – to control for structural heterogeneity arising from location type – it is shown that consumption efficiency is little more than 60% on average. This implies huge potential for energy reduction via the expansion of schemes to promote energy efficiency. City households, which are the wealthiest in the sample, are shown to define the metafrontier demand function (and hence have the potential to be the most efficient households), but at the same time exhibit the largest inefficiencies. Together these facts allow for a potential refinement of the household energy ladder concept: wealth affords access to the best technologies, thereby increasing potential energy efficiency (the ‘traditional’ view of the household energy ladder), yet these same households are the most inefficient. This has implications for numerous areas of policy, including the design of energy assistance schemes, the identification of energy education needs/priorities, as well as more refined setting of subsidies/tax-credit policies. - Highlights: •Frontier demand functions are estimated for a sample of 7102 Chinese households. •Metafrontier methods capture heterogeneity arising from urban form (e.g. cities, towns and villages). •Wealthier households have higher efficiency potential, but are in fact less efficient in their consumption of

  2. Estimation of biochemical variables using quantum-behaved particle ...

    African Journals Online (AJOL)

    To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experimental results of the L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...

  3. Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations

    Science.gov (United States)

    Choudhury, Bhaskar J.; Quick, B. E.

    2003-01-01

    Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between the land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples the water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because the functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (defined as the ratio of the rate of carbon accumulation to transpiration) and the water use efficiency (defined as the ratio of the rate of carbon accumulation to total evaporation), and evaluating modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four-dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.

  4. Estimating Diurnal Courses of Gross Primary Production for Maize: A Comparison of Sun-Induced Chlorophyll Fluorescence, Light-Use Efficiency and Process-Based Models

    Directory of Open Access Journals (Sweden)

    Tianxiang Cui

    2017-12-01

    Full Text Available Accurately quantifying gross primary production (GPP) is of vital importance to understanding the global carbon cycle. Light-use efficiency (LUE) models and process-based models have been widely used to estimate GPP at different spatial and temporal scales. However, large uncertainties remain in quantifying GPP, especially for croplands. Recently, remote measurements of solar-induced chlorophyll fluorescence (SIF) have provided a new perspective to assess actual levels of plant photosynthesis. In the present study, we evaluated the performance of three approaches, including the LUE-based multi-source data synergized quantitative (MuSyQ) GPP algorithm, the process-based boreal ecosystem productivity simulator (BEPS) model, and the SIF-based statistical model, in estimating the diurnal courses of GPP at a maize site in Zhangye, China. A field campaign was conducted to acquire synchronous far-red SIF (SIF760) observations and flux tower-based GPP measurements. Our results showed that both SIF760 and GPP were linearly correlated with APAR, and the SIF760-GPP relationship was adequately characterized using a linear function. The evaluation of the modeled GPP against the GPP measured from the tower demonstrated that all three approaches provided reasonable estimates, with R2 values of 0.702, 0.867, and 0.667 and RMSE values of 0.247, 0.153, and 0.236 mg m−2 s−1 for the MuSyQ-GPP, BEPS and SIF models, respectively. This study indicated that the BEPS model simulated the GPP best due to its efficiency in describing the underlying physiological processes of sunlit and shaded leaves. The MuSyQ-GPP model was limited by its simplification of some critical ecological processes and its weakness in characterizing the contribution of shaded leaves. The SIF760-based model demonstrated a relatively limited accuracy but showed its potential in modeling GPP without dependency on climate inputs in short-term studies.
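    The goodness-of-fit statistics quoted above (R2 and RMSE) can be reproduced for any modelled-versus-observed GPP series. The sketch below uses fabricated SIF and GPP numbers purely for illustration, not the paper's data:

```python
import numpy as np

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def r2(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical SIF760 observations and tower GPP (mg m-2 s-1)
sif = np.array([0.5, 1.0, 1.5, 2.0])
gpp_obs = np.array([0.24, 0.52, 0.73, 1.01])

slope, intercept = np.polyfit(sif, gpp_obs, 1)   # linear SIF-GPP model, as in the study
gpp_pred = slope * sif + intercept
fit_r2 = r2(gpp_obs, gpp_pred)
fit_rmse = rmse(gpp_obs, gpp_pred)
```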

  5. Joint Sparsity and Frequency Estimation for Spectral Compressive Sensing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    various interpolation techniques to estimate the continuous frequency parameters. In this paper, we show that solving the problem in a probabilistic framework instead produces an asymptotically efficient estimator which outperforms existing methods in terms of estimation accuracy while still having a low...

  6. Towards Remote Estimation of Radiation Use Efficiency in Maize Using UAV-Based Low-Cost Camera Imagery

    Directory of Open Access Journals (Sweden)

    Andreas Tewes

    2018-02-01

    Full Text Available Radiation Use Efficiency (RUE) defines the productivity with which absorbed photosynthetically active radiation (APAR) is converted to plant biomass. Readily used in crop growth models to predict dry matter accumulation, RUE is commonly determined by elaborate static sensor measurements in the field. Different definitions are used, based on total absorbed PAR (RUEtotal) or PAR absorbed by the photosynthetically active leaf tissue only (RUEgreen). Previous studies have shown that the fraction of PAR absorbed (fAPAR), which supports the assessment of RUE, can be reliably estimated via remote sensing (RS), but unfortunately at spatial resolutions too coarse for experimental agriculture. UAV-based RS offers the possibility to cover plant reflectance at very high spatial and temporal resolution, possibly covering several experimental plots in little time. We investigated whether (a) UAV-based low-cost camera imagery allowed estimating RUEs in different experimental plots where maize was cultivated in the growing season of 2016, (b) those values were different from the ones previously reported in the literature, and (c) there was a difference between RUEtotal and RUEgreen. We determined fractional cover and canopy reflectance based on the RS imagery. Our study found that RUEtotal ranges between 4.05 and 4.59, and RUEgreen between 4.11 and 4.65. These values are higher than those published in other research articles, but not outside the range of plausibility. The difference between RUEtotal and RUEgreen was minimal, possibly due to prolonged canopy greenness induced by the stay-green trait of the cultivar grown. The procedure presented here makes time-consuming APAR measurements for determining RUE, especially in large experiments, superfluous.
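    RUE as defined above is a simple quotient of accumulated dry matter and absorbed PAR. A minimal numerical sketch with hypothetical season totals (not the study's measurements):

```python
# hypothetical season totals for one experimental plot
par_incident = 1200.0        # incident PAR over the season, MJ m-2
fapar = 0.85                 # fraction of PAR absorbed, e.g. derived from UAV imagery
dry_matter = 4284.0          # accumulated above-ground biomass, g m-2

apar = fapar * par_incident          # absorbed PAR, MJ m-2
rue_total = dry_matter / apar        # g dry matter per MJ APAR
```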

  7. An evaluation of accounting-based finding costs as efficiency measures for oil and gas exploration

    International Nuclear Information System (INIS)

    Boynton, C.E. IV; Boone, J.P.

    1994-08-01

    The authors have operationalized firm-specific exploration efficiency as the difference between a firm-specific intercept estimated in a fixed-effects panel data Cobb-Douglas production frontier model and the maximum firm-specific intercept estimated in that model. The production model was estimated during two different time periods, 1982--1985 and 1989--1992, allowing efficiency to vary intertemporally. This efficiency estimate served as a benchmark against which they compared various measures of inverse finding costs. They assumed that the degree of association with an efficiency benchmark is an important attribute of any finding cost measure and that, further, the degree of association may be used as a metric for choosing between alternative finding cost measures. Accordingly, they evaluated the cross-sectional statistical association between estimated efficiency and alternative inverse finding cost measures. They discovered that the inverse finding cost measure that exhibited the strongest association with efficiency during the two time periods was a three-year moving-average finding cost which included exploration plus development expenditures as costs and reserve extensions and additions plus revisions as the units added

  9. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses such as keff, reaction rates, flux and power distribution can be directly obtained all at one time without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load for statistically reliable results (inside a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to an active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and previous methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of keff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
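    At the analysis stage, the sampling idea above reduces to ordinary sample statistics over the per-set keff values, with the central limit theorem giving the confidence interval. A hedged sketch with synthetic numbers (real values would come from the Monte Carlo transport code, not a random-number generator):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for keff evaluated with 100 sampled cross-section sets
k_samples = rng.normal(1.000, 0.005, size=100)

k_mean = k_samples.mean()
k_std = k_samples.std(ddof=1)                   # nuclear-data-induced uncertainty in keff
ci95 = 1.96 * k_std / np.sqrt(len(k_samples))   # CLT-based 95% interval on the mean
```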

  10. Pollutant Flux Estimation in an Estuary Comparison between Model and Field Measurements

    Directory of Open Access Journals (Sweden)

    Yen-Chang Chen

    2014-08-01

    Full Text Available This study proposes a framework for estimating pollutant flux in an estuary. An efficient method is applied to estimate the flux of pollutants in an estuary. A gauging station network in the Danshui River estuary is established to measure water quality and discharge data based on the efficient method. A boat mounted with an acoustic Doppler profiler (ADP) traverses the river along a preselected path that is normal to the streamflow to measure the velocities, water depths and water quality for calculating pollutant flux. To characterize the estuary and to provide the basis for the pollutant flux estimation model, data covering complete tidal cycles are collected. The discharge estimation model applies the maximum velocity and water level to estimate mean velocity and cross-sectional area, respectively. Thus, the pollutant flux of the estuary can be easily computed as the product of the mean velocity, cross-sectional area and pollutant concentration. The good agreement between the observed and estimated pollutant flux of the Danshui River estuary shows that the pollutant fluxes measured by the conventional and the efficient methods are not fundamentally different. The proposed method is cost-effective and reliable. It can be used to estimate pollutant flux in an estuary accurately and efficiently.
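    The flux computation described above is just the product of three estimated quantities. A minimal sketch with hypothetical estuary values (not the Danshui River data; units noted in the comment):

```python
def pollutant_flux(mean_velocity, cross_section_area, concentration):
    """Flux = U_mean * A * C, e.g. (m/s) * (m^2) * (g/m^3) -> g/s."""
    return mean_velocity * cross_section_area * concentration

# hypothetical cross-section: 0.8 m/s mean velocity, 1500 m^2 area, 2.5 g/m^3 concentration
flux = pollutant_flux(0.8, 1500.0, 2.5)
```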

  11. Chapter 21: Estimating Net Savings - Common Practices. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)

    2017-11-02

    This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM&V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares current industry practices for determining net energy savings but does not prescribe methods.

  12. INDICATORS OF EFFICIENCY OF THE PILOTLESS AVIATION COMPLEX

    Directory of Open Access Journals (Sweden)

    A. S. Benkafo

    2014-01-01

    Full Text Available The general principles for evaluating the efficiency of pilotless (unmanned) aviation complexes used for monitoring the Earth's surface under conditions of unreliable information are considered, on the basis of mathematical modelling that accounts for hierarchical structure and the influence of the human factor. Indicators of information-system efficiency, and the probabilistic characteristics of the information assessment needed for decision-making, are substantiated.

  13. Efficient estimation of the robustness region of biological models with oscillatory behavior.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Full Text Available Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. Especially, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time-consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find a good agreement with the existing knowledge about this model. When we apply the new method to the high dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was realized until now.

  14. Application of the control variate technique to estimation of total sensitivity indices

    International Nuclear Information System (INIS)

    Kucherenko, S.; Delpuech, B.; Iooss, B.; Tarantola, S.

    2015-01-01

    Global sensitivity analysis is widely used in many areas of science, biology, sociology and policy planning. The variance-based method known as Sobol' sensitivity indices has become the method of choice among practitioners due to its efficiency and ease of interpretation. For complex practical problems, estimation of Sobol' sensitivity indices generally requires a large number of function evaluations to achieve reasonable convergence. To improve the efficiency of the Monte Carlo estimates of the Sobol' total sensitivity indices, we apply the control variate reduction technique and develop a new formula for the evaluation of total sensitivity indices. Presented results using well-known test functions show the efficiency of the developed technique. - Highlights: • We analyse the efficiency of the Monte Carlo estimates of Sobol' sensitivity indices. • The control variate technique is applied for estimation of total sensitivity indices. • We develop a new formula for evaluation of Sobol' total sensitivity indices. • We present test results demonstrating the high efficiency of the developed formula
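    The control variate idea itself is standard Monte Carlo variance reduction: subtract a correlated quantity whose mean is known. A generic sketch (estimating E[exp(U)] for U ~ Uniform(0,1) with U as the control variate; this illustrates the technique, not the paper's Sobol'-index formula):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.random(n)
f = np.exp(u)                        # target: E[exp(U)] = e - 1
g = u                                # control variate with known mean E[U] = 0.5

c = np.cov(f, g)[0, 1] / np.var(g)   # near-optimal control coefficient
plain = f.mean()                     # crude Monte Carlo estimate
cv = (f - c * (g - 0.5)).mean()      # control variate estimate: same mean, lower variance
```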

  15. Estimating photosynthetic radiation use efficiency using incident light and photosynthesis of individual leaves.

    Science.gov (United States)

    Rosati, A; Dejong, T M

    2003-06-01

    It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, 'daily' photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthesis was estimated from the photosynthetic responses to photosynthetically active radiation (PAR) and from the incident PAR measured on individual leaves during clear and overcast days. Plants were grown with either abundant or scarce N fertilization. Both net and gross daily photosynthesis of leaves were linearly related to daily incident PAR exposure of individual leaves, which implies constant PhRUE over a day throughout the canopy. The slope of these relationships (i.e. PhRUE) increased with N fertilization. When the relationship was calculated for hourly instead of daily periods, the regressions were curvilinear, implying that PhRUE changed with time of the day and incident radiation. Thus, linearity (i.e. constant PhRUE) was achieved only when data were integrated over the entire day. Using average PAR in place of instantaneous incident PAR increased the slope of the relationship between daily photosynthesis and incident PAR of individual leaves, and the regression became curvilinear. The slope of the relationship between daily gross photosynthesis and incident PAR of individual leaves increased for an overcast compared with a clear day, but the slope remained constant for net photosynthesis. This suggests that net PhRUE of all leaves (and thus of the whole canopy) may be constant when integrated over a day, not only when the incident PAR changes with depth in the canopy, but also when it varies on the same leaf owing to changes in daily incident PAR above the canopy. 
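    The "constant PhRUE" criterion above amounts to checking that daily photosynthesis is linear in daily incident PAR, with the slope of the regression serving as PhRUE. A toy sketch with fabricated, perfectly linear values (not the Solanum melongena measurements):

```python
import numpy as np

# hypothetical daily incident PAR (mol m-2 d-1) and daily net photosynthesis (mol CO2 m-2 d-1)
par = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
photo = 0.02 * par                             # perfectly linear case: constant PhRUE

slope, intercept = np.polyfit(par, photo, 1)   # slope estimates PhRUE
```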

  16. Efficiency improvement opportunities in TVs: Implications for market transformation programs

    International Nuclear Information System (INIS)

    Park, Won Young; Phadke, Amol; Shah, Nihar; Letschert, Virginie

    2013-01-01

    Televisions (TVs) account for a significant portion of residential electricity consumption, and global TV shipments are expected to continue to increase. We assess the market trends in the energy efficiency of TVs that are likely to occur without any additional policy intervention and estimate that TV efficiency will likely improve by over 60% by 2015, with a savings potential of 45 terawatt-hours [TW h] per year in 2015 compared to today’s technology. We discuss various energy-efficiency improvement options and evaluate the cost effectiveness of three of them. At least one of these options improves efficiency by at least 20% cost effectively beyond ongoing market trends. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to further capture the global energy savings potential from TVs, which we estimate to be up to 23 TW h per year in 2015. - Highlights: • We analyze the impact of the recent TV market transition on TV energy consumption. • We review TV technology options that could be realized in the near future. • We assess the cost-effectiveness of selected energy-efficiency improvement options. • We estimate global electricity savings potential in selected scenarios. • We discuss possible directions of market transformation programs

  17. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient. The first ... optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest into two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling...

  18. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.

  19. Damping Estimation of Friction Systems in Random Vibrations

    DEFF Research Database (Denmark)

    Friis, Tobias; Katsanos, Evangelos; Amador, Sandro

    Friction is one of the most efficient and economical mechanisms to reduce vibrations in structural mechanics. However, the estimation of the equivalent linear damping of friction-damped systems in experimental modal analysis and operational modal analysis can be adversely affected by several assumptions regarding the definition of the linear damping and the identification methods, or may lack a meaningful interpretation of the damping. Along these lines, this project focuses on assessing the potential to efficiently estimate the equivalent linear damping of friction systems in random...

  20. Wastewater treatment facilities: Energy efficient improvements and cogeneration

    International Nuclear Information System (INIS)

    Kunkle, R.; Gray, R.; Delzel, D.

    1992-10-01

    The Washington State Energy Office (WSEO) has worked with both the Bonneville Power Administration (BPA) and the US Department of Energy to provide technical and financial assistance to local governments. Based on a recent study conducted by Ecotope for WSEO, local governments spend an estimated $45 million on utility bills statewide. Water and wastewater facilities account for almost a third of this cost. As a result, WSEO decided to focus its efforts on the energy intensive water and wastewater sector. The ultimate goal of this project was to develop mechanisms to incorporate energy efficiency improvements into wastewater treatment facilities in retrofits and during upgrades, remodels, and new construction. Project activities included the following: a review of the existing regulatory environment for treatment system construction; a summary of financing options for efficiency improvements in treatment facilities; a literature review of energy efficiency opportunities in treatment plants; a survey and site visits to characterize existing facilities in Washington State; estimates of the energy efficiency and cogeneration potential in the sector; and a case study to illustrate the implementation of an efficiency improvement in a treatment facility.

  1. Improving primary health care facility performance in Ghana: efficiency analysis and fiscal space implications.

    Science.gov (United States)

    Novignon, Jacob; Nonvignon, Justice

    2017-06-12

    Health centers in Ghana play an important role in health care delivery especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase resources committed to primary healthcare, it is important to understand the nature of inefficiencies that exist in these facilities. Therefore, the objectives of this study are threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency and (iii) investigate the efficiency disparities in public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic Frontier Analysis (SFA) was used to estimate the efficiency of health facilities. Efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as the output, while the number of personnel, hospital beds, and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers included in the sample was estimated to be 0.51. Average efficiency was estimated to be about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regards to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency was improved. We also found that fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private
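The fiscal-space arithmetic behind such studies reduces to a simple identity: the share of spending that could be saved is one minus the efficiency score. A minimal sketch with invented facility numbers (the function name and data are not from the study):

```python
def potential_savings(efficiency_scores, expenditures):
    """Fiscal space from efficiency gains: the share of each facility's
    spending that could be saved if it operated on the frontier."""
    return [(1.0 - e) * c for e, c in zip(efficiency_scores, expenditures)]

# Hypothetical facilities: frontier efficiency scores and annual spending.
scores = [0.51, 0.65, 0.50]
spending = [20_000.0, 15_000.0, 25_000.0]
savings = potential_savings(scores, spending)
total = sum(savings)
```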

  2. Estimation of stochastic environment force for master–slave robotic ...

    Indian Academy of Sciences (India)

    Neelu Nagpal

    Subsequently, convergence analysis of error in the estimates is performed. Also, an expression of ... nonlinear and composite adaptive controller [7, 9] and disturbance ... block processing method and acts as an efficient estimator since this estimation ...

  3. Lambda-Lifting in Quadratic Time

    DEFF Research Database (Denmark)

    Danvy, Olivier; Schultz, Ulrik Pagh

    2002-01-01

    Lambda-lifting is a program transformation that is used in compilers, partial evaluators, and program transformers. In this article, we show how to reduce its complexity from cubic time to quadratic time, and we present a flow-sensitive lambda-lifter that also works in quadratic time. Lambda-lifting...... that yields the cubic factor in the traditional formulation of lambda-lifting, which is due to Johnsson. This search is carried out by computing a transitive closure. To reduce the complexity of lambda-lifting, we partition the call graph of the source program into strongly connected components, based...... of lambda-lifting from O(n^3) to O(n^2), where n is the size of the program. Since a lambda-lifter can output programs of size O(n^2), our algorithm is asymptotically optimal....
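The key step of the quadratic-time algorithm, partitioning the call graph into strongly connected components, can be sketched with Tarjan's algorithm. The call graph below is invented, and the lambda-lifter itself (per-component free-variable propagation) is not shown:

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm: returns a list of SCCs (each a set of nodes)
    for a directed graph given as {node: [successors]}."""
    index, low = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:      # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Hypothetical call graph: f and g are mutually recursive, h calls f.
call_graph = {"f": ["g"], "g": ["f"], "h": ["f"]}
components = strongly_connected_components(call_graph)
```

Because mutually recursive functions land in the same component, free variables need only be propagated once per component rather than once per function, which is where the cubic factor is avoided.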

  4. Lambda-Lifting in Quadratic Time

    DEFF Research Database (Denmark)

    Danvy, Olivier; Schultz, Ulrik Pagh

    2003-01-01

    Lambda-lifting is a program transformation that is used in compilers, partial evaluators, and program transformers. In this article, we show how to reduce its complexity from cubic time to quadratic time, and we present a flow-sensitive lambda-lifter that also works in quadratic time. Lambda-lifting...... that yields the cubic factor in the traditional formulation of lambda-lifting, which is due to Johnsson. This search is carried out by computing a transitive closure. To reduce the complexity of lambda-lifting, we partition the call graph of the source program into strongly connected components, based...... of lambda-lifting from O(n^3) to O(n^2), where n is the size of the program. Since a lambda-lifter can output programs of size O(n^2), our algorithm is asymptotically optimal....

  5. Lambda-Lifting in Quadratic Time

    DEFF Research Database (Denmark)

    Danvy, Olivier; Schultz, Ulrik Pagh

    2004-01-01

    Lambda-lifting is a program transformation that is used in compilers, partial evaluators, and program transformers. In this article, we show how to reduce its complexity from cubic time to quadratic time, and we present a flow-sensitive lambda-lifter that also works in quadratic time. Lambda-lifting...... that yields the cubic factor in the traditional formulation of lambda-lifting, which is due to Johnsson. This search is carried out by computing a transitive closure. To reduce the complexity of lambda-lifting, we partition the call graph of the source program into strongly connected components, based...... of lambda-lifting from O(n^3) to O(n^2), where n is the size of the program. Since a lambda-lifter can output programs of size O(n^2), our algorithm is asymptotically optimal....

  6. Construction of Structure of Indicators of Efficiency of Counteraction to Threats of Information Safety in Interests of the Estimation of Security of Information Processes in Computer Systems

    Directory of Open Access Journals (Sweden)

    A. P. Kurilo

    2010-06-01

    Full Text Available The theorem on a system of indicators for estimating the security of information processes in computer systems is formulated and proved. Several properties are established that allow the set of indicators of the efficiency of counteracting threats to the information security of computer systems to be regarded as a system.

  7. Estimating the Value of Price Risk Reduction in Energy Efficiency Investments in Buildings

    Directory of Open Access Journals (Sweden)

    Pekka Tuominen

    2017-10-01

    Full Text Available This paper presents a method for calculating the value of price risk reduction to a consumer that can be achieved with investments in energy efficiency. The value of price risk reduction is discussed at some length in general terms in the literature reviewed but, so far, no methodology for calculating the value has been presented. Here we suggest such a method. The problem of valuing price risk reduction is approached using a variation of the Black–Scholes model by considering a hypothetical financial instrument that a consumer would purchase to insure herself against unexpected price hikes. This hypothetical instrument is then compared with an actual energy efficiency investment that reaches the same level of price risk reduction. To demonstrate the usability of the method, case examples are calculated for typical single-family houses in Finland. The results show that the price risk entailed in household energy consumption can be reduced by a meaningful amount with energy efficiency investments, and that the monetary value of this reduction can be calculated. It is argued that this often-overlooked benefit of energy efficiency investments merits more consideration in future studies.
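As a sketch of the valuation idea, the standard Black–Scholes formula prices a European call, the simplest instrument that pays off when prices rise above a threshold. The paper uses a variation of this model; the formula below is the textbook version and the numbers are invented:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    """Standard Black–Scholes price of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) \
         / (vol * math.sqrt(maturity))
    d2 = d1 - vol * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

# Hypothetical numbers: insuring one year of energy purchases against
# a rise above today's price level (all inputs invented).
premium = black_scholes_call(spot=100.0, strike=100.0, rate=0.02, vol=0.25, maturity=1.0)
```

The premium of such a hypothetical instrument is then what the paper compares against the cost of an efficiency investment achieving the same risk reduction.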

  8. Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems

    Science.gov (United States)

    Connolly, Patrick J.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
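The mark-recapture logic behind such estimators can be sketched as follows: a fish detected at a downstream array is known to have passed the upstream array, so the upstream detection efficiency can be estimated without knowing the true number of passing fish. This shows only the basic idea, not the exact Connolly et al. (2008) estimators, and the counts are invented:

```python
def array_efficiency(detected_both, detected_other):
    """Efficiency of one array, estimated from fish known to have passed it
    because they were also detected at the other array."""
    return detected_both / detected_other

def system_efficiency(p_a, p_b):
    """Probability that a passing fish is detected by at least one
    of two independent arrays."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

# Hypothetical counts: 120 fish seen at the downstream array,
# 90 of which were also seen at the upstream array.
p_upstream = array_efficiency(detected_both=90, detected_other=120)
overall = system_efficiency(p_upstream, 0.80)
```

Adding an array raises the combined efficiency multiplicatively, which is why the guidelines weigh an extra array against tagging more fish.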

  9. Estimating the Efficiency of Michigan's Rural and Urban Public School Districts

    Science.gov (United States)

    Maranowski, Rita

    2012-01-01

    This study examined student achievement in Michigan public school districts to determine if rural school districts are demonstrating greater financial efficiency by producing higher levels of student achievement than school districts in other geographic locations with similar socioeconomics. Three models were developed using multiple regression…

  10. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Feed efficiency is of major economic importance in beef production. The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait ...

  11. Evaluating Technical Efficiency of Nursing Care Using Data Envelopment Analysis and Multilevel Modeling.

    Science.gov (United States)

    Min, Ari; Park, Chang Gi; Scott, Linda D

    2016-05-23

    Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.
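In the simplest DEA setting, one input and one output, a unit's CCR technical efficiency is its output/input ratio divided by the best ratio in the sample; the general multi-input, multi-output case solves a linear program per unit instead. A minimal sketch with invented nursing-home data:

```python
def ccr_efficiency(inputs, outputs):
    """CCR technical efficiency with one input and one output:
    each unit's output/input ratio relative to the best observed ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical facilities: input = nursing hours, output = quality score.
staff_hours = [100.0, 120.0, 80.0]
quality = [50.0, 54.0, 48.0]
scores = ccr_efficiency(staff_hours, quality)
```

A score of 1.0 marks a frontier facility; scores below 1.0 are the relative inefficiencies that the second-stage multilevel model then tries to explain.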

  12. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming problem. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  13. Exploring the efficiency potential for an active magnetic regenerator

    DEFF Research Database (Denmark)

    Eriksen, Dan; Engelbrecht, Kurt; Haffenden Bahl, Christian Robert

    2016-01-01

    A novel rotary state of the art active magnetic regenerator refrigeration prototype was used in an experimental investigation with special focus on efficiency. Based on an applied cooling load, measured shaft power, and pumping power applied to the active magnetic regenerator, a maximum second-la...... and replacing the packed spheres with a theoretical parallel plate regenerator. Furthermore, significant potential efficiency improvements through optimized regenerator geometries are estimated and discussed......., especially for the pressure drop, significant improvements can be made to the machine. However, a large part of the losses may be attributed to regenerator irreversibilities. Considering these unchanged, an estimated upper limit to the second-law efficiency of 30% is given by eliminating parasitic losses...

  14. The relative efficiency of bank branches in lending and borrowing: An application of data envelopment analysis

    Directory of Open Access Journals (Sweden)

    G van der Westhuizen

    2014-08-01

    Full Text Available The relative efficiency of fifty-two branches of a small South African bank was estimated using Data Envelopment Analysis (DEA). A factor responsible for the difference in efficiency between branches might be the difference in managing the asset (loans) and the liability (deposits) sides of the balance sheet. For this reason, the relative efficiency of the lending and borrowing activities was also estimated and compared to the relative efficiency of the combined (lending and borrowing) activities. In the case of the efficiency estimates for loans and deposits, the indications are that the branches were more efficient in managing the liability side (deposits) than in managing the asset side (loans). This means that purchased funds were not utilised efficiently.

  15. DEREGULATION, FINANCIAL CRISIS, AND BANK EFFICIENCY IN TAIWAN: AN ESTIMATION OF UNDESIRABLE OUTPUTS

    OpenAIRE

    Liao, Chang-Sheng

    2018-01-01

    Purpose- This study investigates the impacts of undesirable outputs on bank efficiency and contributes to the literature by assessing how regulation policies and other events affected bank efficiency in Taiwan with regard to deregulation, financial crisis, and financial reform from 1993 to 2011. Methodology- In order to effectively deal with both undesirable and desirable outputs, this study follows Seiford and Zhu (2002), who recommend using the standard data envelopment analysis model to measure per...

  16. From Policy to Compliance: Federal Energy Efficient Product Procurement

    Energy Technology Data Exchange (ETDEWEB)

    DeMates, Laurèn [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scodel, Anna [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-09-06

    Federal buyers are required to purchase energy-efficient products in an effort to minimize energy use in the federal sector, save the federal government money, and spur market development of efficient products. The Federal Energy Management Program (FEMP)’s Energy Efficient Product Procurement (EEPP) Program helps federal agencies comply with the requirement to purchase energy-efficient products by providing technical assistance and guidance and setting efficiency requirements for certain product categories. Past studies have estimated the savings potential of purchasing energy-efficient products at over $500 million per year in energy costs across federal agencies.1 Despite the strong policy support for EEPP and resources available, energy-efficient product purchasing operates within complex decision-making processes and operational structures; implementation challenges exist that may hinder agencies’ ability to comply with purchasing requirements. The shift to purchasing green products, including energy-efficient products, relies on “buy in” from a variety of potential actors throughout different purchasing pathways. Challenges may be especially high for EEPP relative to other sustainable acquisition programs given that efficient products frequently have a higher first cost than non-efficient ones, which may be perceived as a conflict with fiscal responsibility, or more simply problematic for agency personnel trying to stretch limited budgets. Federal buyers may also face challenges in determining whether a given product is subject to EEPP requirements. Previous analysis on agency compliance with EEPP, conducted by the Alliance to Save Energy (ASE), shows that federal agencies are getting better at purchasing energy-efficient products. ASE conducted two reviews of relevant solicitations for product and service contracts listed on Federal Business Opportunities (FBO), the centralized website where federal agencies are required to post procurements greater

  17. Efficient emission fees in the US electricity sector

    International Nuclear Information System (INIS)

    Spencer Banzhaf, H.; Burtraw, Dallas; Palmer, Karen

    2004-01-01

    This paper provides new estimates of efficient emission fees for sulfur dioxide (SO2) and nitrogen oxides (NOx) emissions in the US electricity sector. The estimates are obtained by coupling a detailed simulation model of the US electricity markets with an integrated assessment model that links changes in emissions with atmospheric transport, environmental endpoints, and valuation of impacts. Efficient fees are found by comparing incremental benefits with emission fee levels. National quantity caps that are equivalent to these fees also are computed, and found to approximate caps under consideration in the current multi-pollutant debate in the US Congress and the recent proposals from the Bush administration for the electricity industry. We also explore whether regional differentiation of caps on different pollutants is likely to enhance efficiency.

  18. Neural network fusion capabilities for efficient implementation of tracking algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
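The tracking-filter half of such an architecture is a standard Kalman filter. A minimal 1-D constant-velocity filter is sketched below; the measurement sequence and noise settings are invented, and the neural-network fusion front end is not shown:

```python
def kalman_cv_track(measurements, dt=1.0, q=0.01, r=1.0):
    """1-D constant-velocity Kalman filter on position measurements.
    State is [position, velocity]; q is process noise, r measurement noise.
    Returns the filtered (position, velocity) estimates."""
    x = [measurements[0], 0.0]            # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in measurements:
        # Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with scalar position measurement z (H = [1, 0]).
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]    # Kalman gain
        innov = z - x[0]
        x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(tuple(x))
    return out

# Hypothetical target moving at roughly 2 units/step with noisy returns:
track = kalman_cv_track([0.1, 2.2, 3.9, 6.1, 8.0, 10.2, 11.9, 14.1])
```

In the architecture described above, the neural network would supply fused pseudo-measurements or maneuver features to a filter of this kind rather than replace it.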

  19. ESTIMATION OF LONG-TERM INVESTMENT PROJECTS WITH ENERGY-EFFICIENT SOLUTIONS BASED ON LIFE CYCLE COSTS INDICATOR

    Directory of Open Access Journals (Sweden)

    Bazhenov Viktor Ivanovich

    2015-09-01

    Full Text Available The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economic methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluations with respect to Present Value (PV) determination. These allow an investor to estimate long-term projects (indicated as 25 years) as commercially profitable, taking into account the inflation rate, interest rate, and real discount rate (indicated as 5 %). For the economic analysis, the air-blower station of a WWTP was selected as a significant energy consumer. The technical variants compared are three blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single-stage with double vane control. The result of the LCC estimation shows the last variant as the most attractive and cost-effective for investment, with savings of 17.2 % (variant 1) and 21.0 % (variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) with the related annual expenditure (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow. This was the reason for the adopted energy consumption, kW∙h: 2158 (variant 1), 1743–2201 (variant 2), 1058–1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. An example of evaluations connected with energy-efficient solutions is given, but the approach extends to cases with resource savings, such as all types of fuel. In conclusion, the LCC indicator is recommended jointly with the method of determining discounted cash flows, which will satisfy the investor's need for an interest source through technical and economic comparisons.
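The core LCC calculation is capital cost plus the present value of a constant annual expenditure. A minimal sketch using the article's 25-year horizon and 5 % real discount rate; the capital and annual costs below are invented, not the article's figures:

```python
def life_cycle_cost(capital, annual_cost, years=25, discount_rate=0.05):
    """Life cycle cost: capital outlay plus the present value of a
    constant annual expenditure over the project horizon (annuity factor)."""
    pv_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return capital + annual_cost * pv_factor

# Hypothetical blower options (arbitrary currency units): a cheaper machine
# with higher running costs vs. a dearer machine with lower running costs.
lcc_fixed = life_cycle_cost(capital=100_000, annual_cost=25_000)
lcc_vfd = life_cycle_cost(capital=130_000, annual_cost=20_000)
```

Over a 25-year horizon the annuity factor is about 14, so modest annual savings can outweigh a substantially higher first cost, which is the comparison the article performs across blower variants.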

  20. Robust Estimation of Productivity Changes in Japanese Shinkin Banks

    Directory of Open Access Journals (Sweden)

    Jianzhong DAI

    2014-05-01

    Full Text Available This paper estimates productivity changes in Japanese shinkin banks during the fiscal years 2001 to 2008 using the Malmquist index as the measure of productivity change. Data envelopment analysis (DEA) is used to estimate the index. We also apply a smoothed bootstrapping approach to set up confidence intervals for estimates and study their statistical characteristics. By analyzing estimated scores, we identify trends in productivity changes in Japanese shinkin banks during the study period and investigate the sources of these trends. We find that in the latter half of the study period, productivity has significantly declined, primarily because of deterioration in technical efficiency, but scale efficiency has been significantly improved. Grouping the total sample according to the levels of competition reveals more details of productivity changes in shinkin banks.

  1. The new natural gas futures market - is it efficient?

    International Nuclear Information System (INIS)

    Herbert, J.H.

    1993-01-01

    Aspects of the natural gas futures market are discussed. In particular, the efficiency of the natural gas futures market is evaluated using a regression equation. It is found that the market has behaved more like an inefficient market than an efficient one. A variety of tests are applied to the estimated equation. These tests suggest that the estimated equation provides a good summary of the relationship between spot and futures prices for the time period. In addition, the equation is found to produce accurate forecasts. (Author)
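A common form of such an efficiency test regresses the realised spot price on the earlier futures price; unbiasedness (one sense of market efficiency) implies an intercept near 0 and a slope near 1. A minimal OLS sketch with invented prices, not the study's data:

```python
def ols(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical futures prices and later realised spot prices ($/MMBtu):
futures = [1.50, 1.62, 1.71, 1.80, 1.95, 2.10]
spot = [1.48, 1.66, 1.69, 1.84, 1.97, 2.08]
intercept, slope = ols(futures, spot)
```

A formal test would also check the joint hypothesis (a, b) = (0, 1) and the forecast errors for autocorrelation, which is the kind of diagnostic battery the record describes.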

  2. Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data

    KAUST Repository

    Qahtan, Abdulhakim Ali Ali

    2016-01-01

    application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The third application

  3. Measurement of technical efficiency and its determinants in crop ...

    African Journals Online (AJOL)

    OLUWOLE AKINNAGBE

    They found that education, number of working animals, credit per acre and number of extension visits significantly increased cost efficiency, while large land holding size significantly decreased cost efficiency. In a single-estimation approach of the technical efficiency model for Indian farmers, Colli et al (1998) found that years ...

  4. Estimation of low-potential heat recuperation efficiency of smoke fumes in a condensation heat utilizer under various operation conditions of a boiler and a heating system

    Science.gov (United States)

    Ionkin, I. L.; Ragutkin, A. V.; Luning, B.; Zaichenko, M. N.

    2016-06-01

    For enhancement of the natural gas utilization efficiency in boilers, condensation heat utilizers of low-potential heat, which are constructed based on a contact heat exchanger, can be applied. A schematic of the contact heat exchanger with a humidifier for preheating and humidifying the air supplied to the boiler for combustion is given. Additional low-potential heat in this scheme is utilized for heating the return delivery water supplied from a heating system. Preheating and humidifying the air supplied for combustion make it possible to use the condensation utilizer for heating a heat-transfer agent to a temperature exceeding the dew-point temperature of the water vapors contained in the combustion products. The decision to mount the condensation heat utilizer on the boiler was taken based on a preliminary estimation of the additionally obtained heat. The operation efficiency of the condensation heat utilizer is determined by its structure and the operation conditions of the boiler and the heating system. Software was developed for the thermal design of the condensation heat utilizer equipped with the humidifier. Computational investigations of its operation are carried out as a function of various operation parameters of the boiler and the heating system (temperature of the return delivery water and smoke fumes, air excess, air temperature at the inlet and outlet of the condensation heat utilizer, heating and humidifying of air in the humidifier, and portion of the circulating water). The heat recuperation efficiency is estimated for various operation conditions of the boiler and the condensation heat utilizer. Recommendations on the most effective application of the condensation heat utilizer are developed.

  5. Isobars and the Efficient Market Hypothesis

    OpenAIRE

    Kristýna Ivanková

    2010-01-01

    Isobar surfaces, a method for describing the overall shape of multidimensional data, are estimated by nonparametric regression and used to evaluate the efficiency of selected markets based on returns of their stock market indices.

  6. Efficient coordinated recovery of sparse channels in massive MIMO

    KAUST Repository

    Masood, Mudassir

    2015-01-01

    This paper addresses the problem of estimating sparse channels in massive MIMO-OFDM systems. Most wireless channels are sparse in nature with large delay spread. In addition, these channels as observed by multiple antennas in a neighborhood have approximately common support. The sparsity and common support properties are attractive when it comes to the efficient estimation of a large number of channels in massive MIMO systems. Moreover, to avoid pilot contamination and to achieve better spectral efficiency, it is important to use a small number of pilots. We present a novel channel estimation approach which utilizes the sparsity and common support properties to estimate sparse channels and requires a small number of pilots. Two algorithms based on this approach have been developed that perform Bayesian estimation of sparse channels even when the prior is non-Gaussian or unknown. Neighboring antennas share their beliefs about the locations of active channel taps with each other to perform estimation. The coordinated approach improves channel estimates and also reduces the required number of pilots. Further improvement is achieved by the data-aided version of the algorithm. Extensive simulation results are provided to demonstrate the performance of the proposed algorithms.

  7. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    Science.gov (United States)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI) intended for the resolution of the vibrational Schrödinger equation was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated by the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (residue) to select the most relevant direction in which to expand the space. Two examples have been selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  8. Estimating the Influence of Housing Energy Efficiency and Overheating Adaptations on Heat-Related Mortality in the West Midlands, UK

    Directory of Open Access Journals (Sweden)

    Jonathon Taylor

    2018-05-01

    Full Text Available Mortality rates rise during hot weather in England, and projected future increases in heatwave frequency and intensity require the development of heat protection measures such as the adaptation of housing to reduce indoor overheating. We apply a combined building physics and health model to dwellings in the West Midlands, UK, using an English Housing Survey (EHS)-derived stock model. Regional temperature exposures, heat-related mortality risk, and space heating energy consumption were estimated for 2030s, 2050s, and 2080s medium emissions climates prior to and following heat-mitigating, energy-efficiency, and occupant behaviour adaptations. Risk variation across adaptations, dwellings, and occupant types was assessed. Indoor temperatures were greatest in converted flats, while heat mortality rates were highest in bungalows due to the occupant age profiles. Full energy efficiency retrofit reduced regional domestic space heating energy use by 26% but increased summertime heat mortality 3–4%, while reduced façade absorptance decreased heat mortality 12–15% but increased energy consumption by 4%. External shutters provided the largest reduction in heat mortality (37–43%), while closed windows caused a large increase in risk (29–64%). Ensuring adequate post-retrofit ventilation, targeted installation of shutters, and ensuring operable windows in dwellings with heat-vulnerable occupants may save energy and significantly reduce heat-related mortality.

  9. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices in both accuracy and sensitivity. PMID:24761294
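    The log-space idea above can be sketched in a few lines: average a skin region per frame, take logs so multiplicative illumination variation becomes additive, and pick the dominant spectral peak in the physiological band. This is an illustrative sketch assuming NumPy, not the paper's exact pixel-quotient algorithm; the band limits and the synthetic trace are assumptions.

```python
import numpy as np

def estimate_heart_rate(mean_intensity, fps, lo=0.7, hi=3.0):
    """Estimate pulse rate (bpm) from a per-frame mean skin intensity trace.

    Works in log space so multiplicative illumination changes become
    additive and are removed by mean subtraction (an assumption of this
    sketch, not the paper's exact pixel-quotient formulation).
    """
    x = np.log(np.asarray(mean_intensity, dtype=float))
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)   # plausible heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                     # Hz -> beats per minute

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse component
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
trace = 100.0 * (1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t))
print(round(estimate_heart_rate(trace, fps)))  # → 72
```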

  10. Testing for Stochastic Dominance Efficiency

    NARCIS (Netherlands)

    G.T. Post (Thierry); O. Linton; Y-J. Whang

    2005-01-01

    textabstractWe propose a new test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our

  11. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  12. Quick assessment of binary distillation efficiency using a heat engine perspective

    International Nuclear Information System (INIS)

    Blahušiak, M.; Kiss, A.A.; Kersten, S.R.A.; Schuur, B.

    2016-01-01

    With emphasis on close boiling, (near-)ideal VLE mixtures, this paper links the efficiency of distillation to the binary feed composition and the thermal properties of the compounds. The proposed approach, treating the process as a heat engine, allows direct quantification of distillation performance (in terms of energy intensity and efficiency) based on the components' boiling points and the feed composition. In addition, this approach reviews and formulates simple, approximate, and essentially non-iterative calculation procedures to quickly estimate the energy efficiency of distillation. These estimations may be applied to identify opportunities to save significant amounts of energy. The results show that the reboiler duty for low relative volatility is relatively independent of the heat of vaporization and feed composition, while being inversely proportional to the Carnot efficiency of the distillation column. The internal efficiency for distillation of mixtures with low relative volatility has a maximum of about 70% for a symmetrical feed (equimolar ratio) and decreases to zero for unsymmetrical feed compositions approaching infinite dilution. With increasing relative volatility, the maximum efficiency is preserved, but the locus shifts towards lower light component fractions. At very high relative volatility, the internal efficiency increases with decreasing concentration of the light component, as is typical for evaporators. - Highlights: • A heat engine perspective was applied to estimate binary distillation efficiency. • The method was derived from first principles. • Validation on industrial cases showed the strength of the method.
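    The heat engine view can be sketched numerically: treat the column as an engine operating between reboiler and condenser temperatures, so the reboiler duty scales inversely with the Carnot efficiency. The functions and numbers below are illustrative assumptions (ideal minimum separation work, a fixed internal efficiency), not the paper's exact procedure.

```python
import math

R = 8.314  # J/(mol·K)

def carnot_efficiency(T_reboiler, T_condenser):
    # The column viewed as a heat engine between reboiler and condenser
    return 1.0 - T_condenser / T_reboiler

def min_separation_work(x, T):
    # Ideal minimum (reversible) work to fully separate a binary feed, J/mol
    return -R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def approx_reboiler_duty(x, T_reb, T_cond, internal_eff):
    # Q_reb ≈ W_min / (η_Carnot · η_internal): duty scales inversely with
    # the Carnot efficiency, as the paper observes for low volatility
    w_min = min_separation_work(x, T_cond)
    return w_min / (carnot_efficiency(T_reb, T_cond) * internal_eff)

# Equimolar close-boiling mixture, reboiler 355 K, condenser 350 K,
# internal efficiency near its ~70% maximum for a symmetric feed
eta_c = carnot_efficiency(355.0, 350.0)
duty = approx_reboiler_duty(0.5, 355.0, 350.0, 0.70)
print(f"Carnot efficiency: {eta_c:.3%}, reboiler duty ≈ {duty:.0f} J/mol feed")
```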

  13. Dependability estimation for non-Markov consecutive-k-out-of-n: F repairable systems by fast simulation

    International Nuclear Information System (INIS)

    Xiao Gang; Li Zhizhong; Li Ting

    2007-01-01

    A model of a consecutive-k-out-of-n: F repairable system with a non-exponential repair time distribution and (k-1)-step Markov dependence is introduced in this paper, along with algorithms for three Monte Carlo methods, i.e. importance sampling, conditional expectation estimation, and a combination of the two, to estimate the dependability of the non-Markov model, including reliability, transient unavailability, MTTF, and MTBF. A numerical example is presented to demonstrate the efficiencies of the above methods. The results show that the combined method has the highest efficiency for estimation of unreliability and unavailability, while conditional expectation estimation is the most efficient method for estimation of MTTF and MTBF. Conditional expectation estimation seems to have overall higher speedups in estimating the dependability of such systems.
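    The relative efficiency of such estimators is easiest to appreciate against a crude Monte Carlo baseline. The sketch below estimates the unreliability of a consecutive-k-out-of-n:F system by plain simulation, with i.i.d. exponential lifetimes and no repair; these are deliberate simplifications of the paper's non-exponential, (k-1)-step Markov-dependent model.

```python
import random

def consecutive_k_failed(states, k):
    """The system fails when k or more consecutive components have failed."""
    run = 0
    for failed in states:
        run = run + 1 if failed else 0
        if run >= k:
            return True
    return False

def unreliability_mc(n, k, rate, t, trials, seed=1):
    # Crude Monte Carlo baseline: sample i.i.d. exponential lifetimes
    # (a simplifying assumption relative to the paper's model) and
    # check for a run of k failures at mission time t.
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        states = [rng.expovariate(rate) <= t for _ in range(n)]
        if consecutive_k_failed(states, k):
            fails += 1
    return fails / trials

# 10 components, system fails on 2 consecutive failures,
# failure rate 0.1/h, mission time 1 h
est = unreliability_mc(n=10, k=2, rate=0.1, t=1.0, trials=50_000)
print(f"estimated unreliability: {est:.4f}")
```

Variance-reduction methods such as the paper's importance sampling reach the same accuracy with far fewer trials, which is exactly the efficiency gain the abstract quantifies.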

  14. Comparison of water-use efficiency estimates based on tree-ring carbon isotopes with simulations of a dynamic vegetation model

    Science.gov (United States)

    Saurer, Matthias; Spahni, Renato; Joos, Fortunat; Frank, David; Treydte, Kerstin; Siegwolf, Rolf

    2015-04-01

    Tree-ring d13C-based estimates of intrinsic water-use efficiency (iWUE, reflecting the ratio of assimilation A to stomatal conductance gs) generally show a strong increase during the industrial period, likely associated with the increase in atmospheric CO2. However, it is not clear, first, whether tree-ring d13C-derived iWUE values indeed reflect actual plant and ecosystem-scale variability in fluxes and, second, what physiological changes drove the observed iWUE increase: changes in A, in gs, or in both. To address these questions, we used a complex dynamic vegetation model (LPX) that combines process-based vegetation dynamics with land-atmosphere carbon and water exchange. The analysis was conducted for three plant functional types, representing conifers, oaks, and larch, at various sites in Europe where tree-ring isotope data are available. The increase in iWUE over the 20th century was comparable in LPX simulations and tree-ring estimates, strengthening confidence in these results. Furthermore, the results from the LPX model suggest that the cause of the iWUE increase was reduced stomatal conductance during recent decades rather than increased assimilation. High-frequency variation reflects the influence of climate, for example the 1976 summer drought, which resulted in strongly reduced A and gs in the model, particularly for oak.
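    The step from tree-ring d13C to iWUE can be made concrete with the standard simple-fractionation (Farquhar-type) model commonly used in tree-ring studies; this is a textbook sketch, not the LPX model, and the parameter values (a = 4.4‰, b = 27‰, illustrative air δ13C and CO2 level) are conventional assumptions.

```python
def iwue_from_d13c(delta13c_plant, delta13c_air=-8.0, ca_ppm=400.0,
                   a=4.4, b=27.0):
    """Intrinsic water-use efficiency (A/gs, µmol CO2 per mol H2O) from
    tree-ring δ13C using the simple Farquhar discrimination model.

    a: fractionation by diffusion (‰); b: by carboxylation (‰).
    """
    # Photosynthetic discrimination against 13C (per mil)
    big_delta = (delta13c_air - delta13c_plant) / (1 + delta13c_plant / 1000.0)
    ci_over_ca = (big_delta - a) / (b - a)
    # A/gs = ca (1 - ci/ca) / 1.6 (the 1.6 converts CO2 to H2O diffusivity)
    return ca_ppm * (1.0 - ci_over_ca) / 1.6

# A wood δ13C of -25.0‰ against -8.0‰ air at 400 ppm CO2:
print(round(iwue_from_d13c(-25.0), 1))  # → 105.8
```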

  15. Cilioprotists as biological indicators for estimating the efficiency of using Gravel Bed Hydroponics System in domestic wastewater treatment.

    Science.gov (United States)

    El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham

    2014-07-01

    Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements that use indigenous materials and skills. Gravel Bed Hydroponics (GBH), a constructed wetland system, has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of species of ciliates (23 species), and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided an excellent means for estimating the efficiency of the system for sewage purification. Results affirmed the ability of this system to produce high quality effluent with sufficient microbial reduction to enable the production of irrigation quality water.

  16. Estimating the Efficiency of Therapy Groups in a College Counseling Center

    Science.gov (United States)

    Weatherford, Ryan D.

    2017-01-01

    College counseling centers are facing rapidly increasing demands for services and are tasked to find efficient ways of providing adequate services while managing limited space. The use of therapy groups has been proposed as a method of managing demand. This brief report examines the clinical time savings of a traditional group therapy program in a…

  17. Cost-Benefit of Improving the Efficiency of Room Air Conditioners (Inverter and Fixed Speed) in India

    Energy Technology Data Exchange (ETDEWEB)

    Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Abhyankar, Nikit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Diddi, Saurabh [Bureau of Energy Efficiency, Government of India (India); Ahuja, Deepanshu [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Mukherjee, P. K. [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Walia, Archana [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States)

    2016-06-01

    Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price than our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment.
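    The payback logic is a one-line calculation: the extra purchase price divided by the yearly bill savings. The figures below are hypothetical, chosen only to land in the ~1-year range the study reports for the 3.5 ISEER level; they are not the study's inputs.

```python
def payback_years(incremental_price, annual_kwh_saved, tariff):
    # Simple (undiscounted) payback: extra purchase price divided by
    # yearly electricity bill savings
    return incremental_price / (annual_kwh_saved * tariff)

# Hypothetical figures: a more efficient AC costs 4000 INR extra and
# saves 450 kWh/yr at a tariff of 8 INR/kWh
print(round(payback_years(4000, 450, 8.0), 1))  # → 1.1
```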

  18. Efficiency of the building societies in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Lukáš Leksovský

    2011-01-01

    This paper is the first attempt to analyze the efficiency of building societies in the Czech Republic. We apply the non-parametric method Data Envelopment Analysis to data from all building societies in the sector over the period 2002–2008. Taking deposits received and administrative expenses as inputs and the volume of loans disbursed as output, we estimate efficiency scores for all individual building societies as well as the average efficiency in the industry. For this purpose we use two alternative models that allow for constant and variable returns to scale, respectively. The results suggest that there was no significant improvement in the efficiency of building societies during the estimation period. Furthermore, most of the building societies have not been operating at an appropriate size. We also found that Českomoravská stavební spořitelna, a. s. was the most efficient building society in the Czech Republic according to both models applied. In order to increase efficiency, we suggest a reduction in the number of external employees and agents or an increase in their productivity, more sophisticated products that can outperform the standard services, and an effective response to changes in legislation.

  19. Measuring cardiac efficiency using PET/MRI

    International Nuclear Information System (INIS)

    Gullberg, Grand; Aparici, Carina Mari; Brooks, Gabriel; Liu, Jing; Guccione, Julius; Saloner, David; Seo, Adam Youngho; Ordovas, Karen Gomes

    2015-01-01

    Heart failure (HF) is a complex syndrome that is projected by the American Heart Association to cost $160 billion by 2030. In HF, significant metabolic changes and structural remodeling lead to reduced cardiac efficiency. A normal heart is approximately 20-25% efficient, measured by the ratio of work to oxygen utilization (1 ml oxygen = 21 joules). The heart requires rapid production of ATP, with complete turnover of ATP every 10 seconds and 90% of ATP produced by mitochondrial oxidative metabolism, requiring substrates of approximately 30% glucose and 65% fatty acids. In our preclinical PET/MRI studies in normal rats, we showed a negative correlation between work and the influx rate constant for 18FDG, confirming that glucose is not the preferred substrate at rest. However, even though fatty acid provides 9 kcal/gram compared to 4 kcal/gram for glucose, in HF the preferred energy source is glucose. PET/MRI offers the potential to study this maladapted mechanism of metabolism by measuring work in a region of myocardial tissue simultaneously with measures of oxygen utilization, glucose, and fatty acid metabolism, and to study cardiac efficiency in the etiology of and therapies for HF. MRI is used to measure strain, and a finite element mechanical model using pressure measurements is used to estimate myofiber stress. The integral of strain times stress provides a measure of work which, divided by energy utilization estimated from the production of 11CO2 after intravenous injection of 11C-acetate, provides a measure of cardiac efficiency. Our project involves translating our preclinical research to the clinical application of measuring cardiac efficiency in patients. Using PET/MRI to develop technologies for studying myocardial efficiency in patients provides an opportunity to relate the cardiac work of specific tissue regions to metabolic substrates and to measure the heterogeneity of LV efficiency.
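    The efficiency ratio itself is simple arithmetic once work and oxygen utilization are measured; only the 21 J/ml conversion comes from the abstract. The per-beat numbers below are hypothetical, chosen to fall in the normal 20-25% range.

```python
def cardiac_efficiency(stroke_work_joules, o2_consumed_ml):
    # 1 ml O2 ≈ 21 J of chemical energy (figure quoted in the abstract)
    energy_in = 21.0 * o2_consumed_ml
    return stroke_work_joules / energy_in

# Illustrative (hypothetical) numbers: 1.05 J of LV work per beat and
# 0.22 ml O2 consumed per beat
eff = cardiac_efficiency(1.05, 0.22)
print(f"{eff:.1%}")  # prints "22.7%"
```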

  20. Measuring cardiac efficiency using PET/MRI

    Energy Technology Data Exchange (ETDEWEB)

    Gullberg, Grand [Lawrence Berkeley National Laboratory (United States); Aparici, Carina Mari; Brooks, Gabriel [University of California San Francisco (United States); Liu, Jing; Guccione, Julius; Saloner, David; Seo, Adam Youngho; Ordovas, Karen Gomes [Lawrence Berkeley National Laboratory (United States)

    2015-05-18

    Heart failure (HF) is a complex syndrome that is projected by the American Heart Association to cost $160 billion by 2030. In HF, significant metabolic changes and structural remodeling lead to reduced cardiac efficiency. A normal heart is approximately 20-25% efficient, measured by the ratio of work to oxygen utilization (1 ml oxygen = 21 joules). The heart requires rapid production of ATP, with complete turnover of ATP every 10 seconds and 90% of ATP produced by mitochondrial oxidative metabolism, requiring substrates of approximately 30% glucose and 65% fatty acids. In our preclinical PET/MRI studies in normal rats, we showed a negative correlation between work and the influx rate constant for 18FDG, confirming that glucose is not the preferred substrate at rest. However, even though fatty acid provides 9 kcal/gram compared to 4 kcal/gram for glucose, in HF the preferred energy source is glucose. PET/MRI offers the potential to study this maladapted mechanism of metabolism by measuring work in a region of myocardial tissue simultaneously with measures of oxygen utilization, glucose, and fatty acid metabolism, and to study cardiac efficiency in the etiology of and therapies for HF. MRI is used to measure strain, and a finite element mechanical model using pressure measurements is used to estimate myofiber stress. The integral of strain times stress provides a measure of work which, divided by energy utilization estimated from the production of 11CO2 after intravenous injection of 11C-acetate, provides a measure of cardiac efficiency. Our project involves translating our preclinical research to the clinical application of measuring cardiac efficiency in patients. Using PET/MRI to develop technologies for studying myocardial efficiency in patients provides an opportunity to relate the cardiac work of specific tissue regions to metabolic substrates and to measure the heterogeneity of LV efficiency.

  1. Economical efficiency estimation of the power system with an accelerator breeder

    International Nuclear Information System (INIS)

    Rublev, O.V.; Komin, A.V.

    1990-01-01

    The review deals with the economic indices of a nuclear power system with an accelerator breeder producing secondary nuclear fuel. Electric power cost was estimated by the discounted cost method. The power system with an accelerator breeder compares unfavourably with traditional nuclear power systems with respect to its capital cost.
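    The discounted cost method mentioned above is, in its common levelized-cost form, total discounted costs divided by total discounted energy output. The sketch below shows that formulation with hypothetical plant figures; the review's exact cost model and inputs may differ.

```python
def levelized_cost(capital, annual_om, annual_energy, rate, years):
    """Discounted-cost electricity price: total discounted costs divided
    by total discounted energy output (a standard LCOE formulation)."""
    disc_costs = capital          # capital spent at year 0
    disc_energy = 0.0
    for y in range(1, years + 1):
        d = (1 + rate) ** -y      # discount factor for year y
        disc_costs += annual_om * d
        disc_energy += annual_energy * d
    return disc_costs / disc_energy

# Hypothetical plant: $4000/kW capital, $80/kW-yr O&M, 7000 kWh/kW-yr,
# 5% discount rate, 30-year life
print(f"{levelized_cost(4000, 80, 7000, 0.05, 30):.3f} $/kWh")  # prints "0.049 $/kWh"
```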

  2. A Numerical and Experimental Study of Local Exhaust Capture Efficiency

    DEFF Research Database (Denmark)

    Madsen, U.; Breum, N. O.; Nielsen, Peter Vilhelm

    1993-01-01

    Direct capture efficiency of a local exhaust system is defined by introducing an imaginary control box surrounding the contaminant source and the exhaust opening. The imaginary box makes it possible to distinguish between contaminants directly captured and those that escape. Two methods for estimation of direct capture efficiency are given: (1) a numerical method based on the time-averaged Navier-Stokes equations for turbulent flows; and (2) a field method based on a representative background concentration. Direct capture efficiency is sensitive to the size of the control box, whereas its location is less important for the case studied. The choice of sampling strategy to obtain a representative background concentration is essential, as substantial differences in direct capture efficiency are found. Recommendations are given.

  3. Optimal Smoothing in Adaptive Location Estimation

    OpenAIRE

    Mammen, Enno; Park, Byeong U.

    1997-01-01

    In this paper the higher order performance of kernel based adaptive location estimators is considered. The optimal choice of smoothing parameters is discussed, and it is shown how much efficiency is lost by not knowing the underlying translation density.

  4. Efficient scale for photovoltaic systems and Florida's solar rebate program

    International Nuclear Information System (INIS)

    Burkart, Christopher S.; Arguea, Nestor M.

    2012-01-01

    This paper presents a critical view of Florida's photovoltaic (PV) subsidy system and proposes an econometric model of PV system installation and generation costs. Using information on currently installed systems, average installation cost relations for residential and commercial systems are estimated and cost-efficient scales of installation panel wattage are identified. Productive efficiency in annual generating capacity is also examined under flexible panel efficiency assumptions. We identify potential gains in efficiency and suggest changes in subsidy system constraints, providing important guidance for the implementation of future incentive programs. Specifically, we find that the subsidy system discouraged residential applicants from installing at the cost-efficient scale but over-incentivized commercial applicants, resulting in inefficiently sized installations. - Highlights: ► Describe a PV solar incentive system in the U.S. state of Florida. ► Combine geocoded installation site data with a detailed irradiance map. ► Estimate installation and production costs across a large sample. ► Identify inefficiencies in the incentive system. ► Suggest changes to policy that would improve economic efficiency.

  5. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).

  6. The Approach to an Estimation of a Local Area Network Functioning Efficiency

    Directory of Open Access Journals (Sweden)

    M. M. Taraskin

    2010-09-01

    In this article the authors call attention to the choice of a system of metrics that permits a qualitative assessment of local area network functioning efficiency under conditions of computer attacks.

  7. Technical efficiency of small-scale fishing households in Tanzanian ...

    African Journals Online (AJOL)

    This paper examines the technical efficiency of Tanzanian small-scale fishing households, based on data from two coastal villages located near Bagamoyo and Zanzibar, using a stochastic frontier model with technical inefficiency. The estimated mean technical efficiency of small-scale fishing households was 52%, showing ...

  8. Cost-Benefit of Improving the Efficiency of Room Air Conditioners (Inverter and Fixed Speed) in India

    Energy Technology Data Exchange (ETDEWEB)

    Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Abhyankar, Nikit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Diddi, Saurabh [Government of India, New Delhi (India). Bureau of Energy Efficiency; Ahuja, Deepanshu [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Mukherjee, P. K. [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Walia, Archana [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States)

    2016-06-30

    Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. We assess several efficiency levels, two of which are summarized in the report. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price than our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment.

  9. Determinants of efficiency in the provision of municipal street-cleaning and refuse collection services.

    Science.gov (United States)

    Benito-López, Bernardino; Moreno-Enguix, María del Rocio; Solana-Ibañez, José

    2011-06-01

    Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated-regression, to estimate the effect of a group of environmental factors on DEA estimates. The results show the existence of a significant relation between efficiency and all the variables analysed (per capita income, urban population density, the comparative index of the importance of tourism and that of the whole economic activity). We have also considered the influence of a dummy categorical variable - the political sign of the governing party - on the efficient provision of the services under study. The results from the methodology proposed show that municipalities governed by progressive parties are more efficient. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Determinants of efficiency in the provision of municipal street-cleaning and refuse collection services

    International Nuclear Information System (INIS)

    Benito-Lopez, Bernardino; Rocio Moreno-Enguix, Maria del; Solana-Ibanez, Jose

    2011-01-01

    Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated regression, to estimate the effect of a group of environmental factors on the DEA estimates. The results show the existence of a significant relation between efficiency and all the variables analysed (per capita income, urban population density, the comparative index of the importance of tourism and that of the whole economic activity). We have also considered the influence of a dummy categorical variable - the political sign of the governing party - on the efficient provision of the services under study. The results from the methodology proposed show that municipalities governed by progressive parties are more efficient.

  11. Efficiency of European Dairy Processing Firms

    NARCIS (Netherlands)

    Soboh, R.A.M.E.; Oude Lansink, A.G.J.M.; Dijk, van G.

    2014-01-01

    This paper compares the technical efficiency and production frontiers of dairy processing cooperatives and investor owned firms in six major dairy producing European countries. Two parametric production frontiers are estimated, i.e. for cooperatives and investor owned firms separately, which are

  12. Analysis of factors affecting the technical efficiency of cocoa ...

    African Journals Online (AJOL)

    The study estimated the technical efficiency of cocoa producers and the socioeconomic factors influencing technical efficiency and identified the constraints to cocoa production. A multi-stage random sampling method was used to select 180 cocoa farmers who were interviewed for the study. Data on the inputs used and ...

  13. Crop and soil specific N and P efficiency and productivity in Finland

    Directory of Open Access Journals (Sweden)

    S. BÄCKMAN

    2008-12-01

    This paper estimates a stochastic production frontier based on experimental data on cereal production in Finland over the period 1977-1994. The estimates of the production frontier are used to analyze nitrogen and phosphorus productivity and efficiency differences between soils and crops. For this purpose, input-specific efficiencies are calculated. The results can be used to recognize relations between fertilizer management and soil types, as well as to identify where certain soil type and crop combinations require special attention to fertilization strategy. The combination of inputs as designed by the experiment shows significant inefficiencies for both N and P. The measures of mineral productivity and efficiency indicate that clay is the most mineral-efficient and productive soil, while silt and organic soils are the least efficient and productive. Furthermore, a positive correlation is found between mineral productivity and efficiency. The results indicate that substantial technical efficiency differences between different experiments prevail.

  14. Application of independent component analysis for speech-music separation using an efficient score function estimation

    Science.gov (United States)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using Blind Source Separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do this, the score function must be estimated from samples of the observation signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian mixture based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared to a separation algorithm based on the Minimum Mean Square Error estimator, indicate better performance and lower processing time.
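    The score function that drives such ICA updates is ψ(x) = -f'(x)/f(x), and it can be estimated directly from samples via kernel density estimation. The sketch below uses a plain Gaussian KDE, a simpler stand-in for the paper's Gaussian-mixture-based method; the bandwidth is an assumption.

```python
import math
import random

def gaussian_kde_score(samples, x, h=0.3):
    """Estimate the score function psi(x) = -f'(x)/f(x) at a point x
    from samples, using a plain Gaussian kernel density estimate.
    (The paper uses a Gaussian-mixture-based variant; this is a
    simpler stand-in.)"""
    f = 0.0
    f_prime = 0.0
    for xi in samples:
        u = (x - xi) / h
        k = math.exp(-0.5 * u * u)   # unnormalised Gaussian kernel
        f += k
        f_prime += -u * k / h        # derivative of the kernel w.r.t. x
    return -f_prime / f              # normalising constants cancel

# For standard-normal data the true score is psi(x) = x,
# so the estimate at x = 1.0 should be close to 1
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]
print(round(gaussian_kde_score(data, 1.0), 2))
```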

  15. Estimation of Resource Productivity and Efficiency: An Extended Evaluation of Sustainability Related to Material Flow

    Directory of Open Access Journals (Sweden)

    Pin-Chih Wang

    2014-09-01

    This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, the Netherlands, the United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift, or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.

  16. Factors affecting the technical efficiency of dairy farms in Kosovo

    Directory of Open Access Journals (Sweden)

    Egzon BAJRAMI

    2017-11-01

    Full Text Available A possible accession to the World Trade Organization (WTO) and an expected membership in the European Union raise significant opportunities and challenges for the agricultural sector in Kosovo. As a result of these changes, the sector will have to improve efficiency and competitiveness. This research is motivated by the need to better understand the forces that drive competitiveness in the Kosovo dairy sector. This study estimates the technical efficiency (TE) of 243 dairy farms in Kosovo and relates TE variation to farm size and other primary determinants of TE. A stochastic frontier production function is estimated using a two-stage procedure. Results reveal that concentrate feed intake, land use per cow, and the number of days cows had been kept on pasture have statistically significant impacts on milk productivity per cow. The mean technical efficiency of dairy farms was estimated at 0.72. The major determinants that increase efficiency are breed improvement, intensification of corn production on the farm, improved concentrate feed intake, and use of free-range production systems. Given these results, it is crucial for the Government of Kosovo to redesign its dairy policy, specifically its grant investment schemes, and target assistance at improving national herd genetics, promoting free-range systems and expanding the area planted in corn.
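As a simplified stand-in for the stochastic frontier estimation described above, a deterministic frontier can be fitted with corrected OLS (COLS): fit a log-log production function, shift the intercept to the best-practice farm, and score each farm relative to that frontier. The feed/milk data here are hypothetical, and COLS ignores the stochastic noise term that the study's two-stage procedure models.

```python
import math

# Corrected-OLS (COLS) sketch of a single-input Cobb-Douglas frontier:
#   ln y = a + b ln x + e;  shift the intercept by max(e) so every residual
#   is <= 0, then TE_i = exp(e_i - max(e)) in (0, 1].
# Hypothetical data: milk output per cow vs concentrate feed intake.

def cols_efficiency(x, y):
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((a - mx) * (c - my) for a, c in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    a0 = my - b * mx
    resid = [c - (a0 + b * v) for v, c in zip(lx, ly)]
    shift = max(resid)                  # move the frontier to the best farm
    return [math.exp(r - shift) for r in resid]

feed = [800, 1200, 1500, 2000, 2500]   # kg concentrate per cow (hypothetical)
milk = [2400, 3900, 4200, 6100, 6800]  # litres per cow (hypothetical)
te = cols_efficiency(feed, milk)
# The best-practice farm gets TE = 1; the others are scored against it.
```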

  17. Efficiency Analysis of a Wave Power Generation System by Using Multibody Dynamics

    International Nuclear Information System (INIS)

    Kim, Min Soo; Sohn, Jeong Hyun; Kim, Jung Hee; Sung, Yong Jun

    2016-01-01

    The energy absorption efficiency of a wave power generation system is calculated as the ratio of the wave power to the power of the system. Because absorption efficiency depends on the dynamic behavior of the wave power generation system, a dynamic analysis of the wave power generation system is required to estimate the energy absorption efficiency of the system. In this study, a dynamic analysis of the wave power generation system under wave loads is performed to estimate the energy absorption efficiency. RecurDyn is employed to carry out the dynamic analysis of the system, and the Morison equation is used for the wave load model. According to the results, the lower the wave height and the shorter the period, the higher the absorption efficiency of the system.

  18. Comparison and Evaluation of Bank Efficiency in Austria and the Czech Republic

    Directory of Open Access Journals (Sweden)

    Svitalkova Zuzana

    2014-06-01

    Full Text Available This article compares and evaluates the efficiency of the banking sector in Austria and the Czech Republic in the period 2004-2011. The paper is divided into the following parts. It begins with a literature review dealing with bank efficiency generally and then with the efficiency of the banking sector in the chosen countries. The second section provides an overview of the methodology used. Non-parametric Data Envelopment Analysis (DEA) with an undesirable output is used for estimating the efficiency; the undesirable output is usually omitted in the current literature. CCR and BCC models, which differ in their returns-to-scale assumptions, were used simultaneously. Section three summarizes and discusses the results and compares the estimated efficiency rates in both states. This study also attempts to identify the main sources of inefficiency.
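For intuition about what a CCR score means, consider the degenerate case of one input and one output under constant returns to scale, where the CCR efficiency reduces to each bank's output/input ratio divided by the best observed ratio. Real DEA applications, including this study's multi-output models with an undesirable output and the variable-returns BCC variant, require solving a linear program per decision-making unit; the bank figures below are hypothetical.

```python
# Minimal CCR-style DEA sketch: single input, single output, constant
# returns to scale. Efficiency = own output/input ratio relative to the
# best ratio in the sample. Hypothetical bank data.

def ccr_single(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

deposits = [100.0, 250.0, 400.0]   # input, e.g. deposits (hypothetical)
loans    = [ 80.0, 230.0, 300.0]   # output, e.g. loans (hypothetical)
scores = ccr_single(deposits, loans)   # a score of 1.0 marks the frontier
```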

  19. Efficiency Analysis of a Wave Power Generation System by Using Multibody Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Soo; Sohn, Jeong Hyun [Pukyong National Univ., Busan (Korea, Republic of); Kim, Jung Hee; Sung, Yong Jun [INGINE Inc., Seoul (Korea, Republic of)

    2016-06-15

    The energy absorption efficiency of a wave power generation system is calculated as the ratio of the wave power to the power of the system. Because absorption efficiency depends on the dynamic behavior of the wave power generation system, a dynamic analysis of the wave power generation system is required to estimate the energy absorption efficiency of the system. In this study, a dynamic analysis of the wave power generation system under wave loads is performed to estimate the energy absorption efficiency. RecurDyn is employed to carry out the dynamic analysis of the system, and the Morison equation is used for the wave load model. According to the results, the lower the wave height and the shorter the period, the higher is the absorption efficiency of the system.

  20. Determination of the aerosol filters efficiency by means of the tracer techniques

    International Nuclear Information System (INIS)

    Hirling, J.

    1978-01-01

    An evaluation of nonradioactive methods for determining filter efficiency and of tracer techniques is given. The methods are described, along with the instrumentation for estimating filter efficiency, in particular: the methodology for producing radioactive synthetic test aerosols by means of dispersion and steam-condensation aerosol generators; the radioisotope method for investigating aerosol filters; and the methodology for determining filtration efficiency. Results of the radioisotope investigations of filters are given: properties of the artificial radioactive test aerosols and characteristics of filters determined by the tracer techniques. Curves are given for the filtration efficiency of viscose filtering nozzles of different density depending on the filter load. (I.T.) [ru

  1. Estimation of the drift eliminator efficiency using numerical and experimental methods

    Directory of Open Access Journals (Sweden)

    Stodůlka Jiří

    2016-01-01

    Full Text Available The purpose of drift eliminators is to prevent water from escaping the cooling tower in significant amounts. They are designed to catch the droplets dragged by the tower draft, and the efficiency given by the shape of the eliminator is the main evaluation criterion. The ability to eliminate the escaping water droplets is studied using CFD and the experimental IPI method.

  2. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use the estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work on the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  3. On the estimation of the steam generator maintenance efficiency by the means of probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Cizelj, L.

    1994-10-01

    In this report, an original probabilistic model aimed at assessing the efficiency of a particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through-wall cracks in the residual-stress-dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to modelling the uncertainties connected with the non-destructive examination technique (e.g., measurement errors, non-detection probability). First- and second-order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together, as well as for scatter in residual and operational stresses due to random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs. load data, the non-destructive examination records of crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL) [de

  4. Barriers to Industrial Energy Efficiency - Study (Appendix A), June 2015

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-06-01

    This study examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This study also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.

  5. Barriers to Industrial Energy Efficiency - Report to Congress, June 2015

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-06-01

    This report examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This report also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.

  6. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, for such studies, some reliable priori information is needed to divide the sample observations properly, which causes a difficulty in unbiased estimation of energy efficiency. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to Chinese energy economy is presented. • There is still a long way for China to develop an energy efficient regime

  7. Malware Function Estimation Using API in Initial Behavior

    OpenAIRE

    KAWAGUCHI, Naoto; OMOTE, Kazumasa

    2017-01-01

    Malware proliferation has become a serious threat to the Internet in recent years. Most current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective when prioritizing which malware to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. In fact, previous studies do not estimate the...

  8. Technical efficiency in milk production in underdeveloped production environment of India*.

    Science.gov (United States)

    Bardhan, Dwaipayan; Sharma, Murari Lal

    2013-12-01

    The study was undertaken in the Kumaon division of Uttarakhand state of India with the objective of estimating technical efficiency in milk production across different herd-size category households and the factors influencing it. A total of 60 farm households, representing different herd-size categories and drawn from six randomly selected villages of the plain and hilly regions of the division, constituted the ultimate sampling units of the study. Stochastic frontier production function analysis was used to estimate technical efficiency in milk production. Multivariate regression equations were fitted taking the technical efficiency index as the regressand to identify the factors significantly influencing technical efficiency in milk production. The study revealed that variation in output across farms in the study area was due to differences in their technical efficiency levels. However, it was interesting to note that smallholder producers were more technically efficient in milk production than their larger counterparts, especially in the plains. Apart from herd size, intensity of market participation had a significant and positive impact on technical efficiency in the plains. This provides a definite indication that increasing the level of commercialization of dairy farms would have a beneficial impact on their production efficiency.

  9. Energy Efficiency Roadmap for Uganda, Making Energy Efficiency Count. Executive Summary

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; Pudleiner, David; Jones, David; Khan, Aleisha

    2017-06-15

    Like many countries in Sub-Saharan Africa, Uganda has focused its energy sector investments largely on increasing energy access by increasing energy supply. The links between energy efficiency and energy access, the importance of energy efficiency in new energy supply, and the multiple benefits of energy efficiency for the level and quality of energy available have been largely overlooked. Implementing energy efficiency in parallel with expanding both the electricity grid and new clean energy generation reduces electricity demand and helps optimize the power supply so that it can serve more customers reliably at minimum cost. Ensuring efficient appliances are incorporated into energy access efforts provides improved energy services to customers. Energy efficiency is an important contributor to access to modern energy. This Energy Efficiency Roadmap for Uganda (Roadmap) is a response to the important role that electrical energy efficiency can play in meeting Uganda's energy goals. The Power Africa and United Nations Sustainable Energy for All (SEforALL) initiatives collaborated with more than 24 stakeholders in Uganda to develop this document. The document estimates that if the most efficient technologies on the market were adopted, 2,224 gigawatt hours could be saved in 2030 across all sectors, representing 31% of the projected load. This translates into 341 megawatts of peak demand reductions, energy access for an additional 6 million rural customers and a reduction of carbon dioxide emissions by 10.6 million tonnes in 2030. The Roadmap also finds that 91% of this technical potential is cost-effective, and 47% is achievable under conservative assumptions. The Roadmap prioritizes recommendations for implementing energy efficiency and maximizing benefits to meet the goals and priorities established in Uganda's 2015 SEforALL Action Agenda. One important step is to create and increase demand for efficiency through long-term enabling policies and financial incentives.

  10. Improving cluster-based missing value estimation of DNA microarray data.

    Science.gov (United States)

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
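The iterative reuse of estimates that distinguishes IKNNimpute can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it uses an unweighted mean of the K nearest rows (the original method weights neighbours) and a plain Euclidean distance over the currently filled matrix.

```python
# Sketch of iterative KNN imputation (in the spirit of IKNNimpute):
# missing values are first filled with column means, then repeatedly
# re-estimated from the K nearest rows, reusing the latest estimates.

def iknn_impute(data, k=2, iters=5):
    rows, cols = len(data), len(data[0])
    missing = [(i, j) for i in range(rows) for j in range(cols)
               if data[i][j] is None]
    # Initial fill: column means over observed values
    filled = [row[:] for row in data]
    for j in range(cols):
        obs = [data[i][j] for i in range(rows) if data[i][j] is not None]
        mean = sum(obs) / len(obs)
        for i in range(rows):
            if filled[i][j] is None:
                filled[i][j] = mean
    for _ in range(iters):
        for i, j in missing:
            # Distances to other rows using current (partly estimated) values
            dists = []
            for r in range(rows):
                if r == i:
                    continue
                d = sum((filled[r][c] - filled[i][c]) ** 2
                        for c in range(cols) if c != j)
                dists.append((d, r))
            dists.sort()
            neigh = [filled[r][j] for _, r in dists[:k]]
            filled[i][j] = sum(neigh) / len(neigh)  # re-estimate the MV
    return filled

expr = [[1.0, 2.0, 3.0],
        [1.1, None, 3.1],
        [0.9, 1.9, 2.9],
        [5.0, 6.0, 7.0]]
result = iknn_impute(expr, k=2)
# Row 1 resembles rows 0 and 2, so the estimate lands near 2.0.
```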

  11. The Efficiency of Educational Production

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben

Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use Data Envelopment Analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications ... is the most efficient Nordic country (often fully efficient), whereas Sweden and especially Norway and Denmark are clearly inefficient. However, using PISA test scores as indicators of student input quality in upper secondary education reduces the inefficiencies of these three countries. Also, when expected ...

  12. The Efficiency of Educational Production

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben

    2015-01-01

Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use data envelopment analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications ... is the most efficient Nordic country (often fully efficient), whereas Sweden and especially Norway and Denmark are clearly inefficient. However, using PISA test scores as indicators of student input quality in upper secondary education reduces the inefficiencies of these three countries. Also, when expected ...

  13. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, unlike existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption reduces the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands of millions of variables show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
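The computational benefit of the low-rank-plus-diagonal assumption can be seen in the rank-one case, where the Sherman-Morrison identity inverts the matrix with O(p) vector work instead of a dense O(p^3) factorization. The numbers below are small and illustrative; COP itself adds one such component per iteration.

```python
# Sherman-Morrison identity for a diagonal-plus-rank-one matrix:
#   (D + u u^T)^{-1} = D^{-1} - (D^{-1} u)(u^T D^{-1}) / (1 + u^T D^{-1} u)
# Only vector operations are needed; the dense inverse below is built
# purely for demonstration.

def sherman_morrison_diag(d, u):
    """Inverse of diag(d) + u u^T, returned as a dense matrix."""
    p = len(d)
    dinv_u = [u[i] / d[i] for i in range(p)]
    denom = 1.0 + sum(u[i] * dinv_u[i] for i in range(p))
    return [[(1.0 / d[i] if i == j else 0.0) - dinv_u[i] * dinv_u[j] / denom
             for j in range(p)] for i in range(p)]

d = [2.0, 3.0, 4.0]
u = [1.0, 0.5, -1.0]
inv = sherman_morrison_diag(d, u)
# Multiplying back by (diag(d) + u u^T) recovers the identity matrix.
```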

  14. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
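The Wilson interval at the heart of the confidence estimation step can be sketched as follows. The paper uses an efficient variant; this is the textbook form, applied to hypothetical counts of satisfying simulation runs.

```python
import math

# Wilson score interval for the probability that property P holds,
# estimated from s satisfying runs out of n sampled executions.

def wilson_interval(s, n, z=1.96):   # z = 1.96 for ~95% confidence
    phat = s / n
    denom = 1 + z * z / n
    centre = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_interval(530, 1000)   # hypothetical: 530 of 1000 runs satisfy P
# The interval narrows as more sampled executions are verified on the fly.
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves well for probabilities near 0 or 1, which matters when a property is rarely satisfied.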

  15. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
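A sliding DFT is the kind of recursive update behind two-stage sliding-DFT monitoring: when the analysis window advances by one sample, each frequency bin can be updated in O(1) instead of recomputing an N-point transform. The toy signal below is only for checking the recursion against a direct DFT.

```python
import cmath

# Sliding DFT bin update:
#   X_k(n+1) = (X_k(n) - x_old + x_new) * exp(j*2*pi*k/N)
# where x_old leaves the window and x_new enters it.

def sliding_dft_update(Xk, k, N, x_old, x_new):
    return (Xk - x_old + x_new) * cmath.exp(2j * cmath.pi * k / N)

N, k = 8, 1
x = [float(i % 4) for i in range(N + 1)]   # toy signal, one extra sample

# Direct DFT of the first window for bin k
Xk = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
# Slide the window by one sample using the recursive update
Xk_slid = sliding_dft_update(Xk, k, N, x[0], x[N])
# Reference: direct DFT of the shifted window x[1..N]
ref = sum(x[1 + n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
```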

  16. The determinants of cost efficiency of hydroelectric generating plants: A random frontier approach

    International Nuclear Information System (INIS)

    Barros, Carlos P.; Peypoch, Nicolas

    2007-01-01

    This paper analyses the technical efficiency of the hydroelectric generating plants of the main Portuguese electricity enterprise, EDP (Electricity of Portugal), between 1994 and 2004, investigating the role played by increases in competition and regulation. A random cost frontier method is adopted: a translog frontier model is used, and the maximum likelihood estimation technique is employed to estimate the empirical model. We estimate the efficiency scores and decompose the exogenous variables into homogeneous and heterogeneous components. It is concluded that production and capacity are heterogeneous, signifying that the hydroelectric generating plants are very distinct and that any energy policy should therefore take this heterogeneity into account. It is also concluded that competition, rather than regulation, plays the key role in increasing hydroelectric plant efficiency.

  17. Kyiv institutional buildings sector energy efficiency program: Technical assessment

    Energy Technology Data Exchange (ETDEWEB)

    Secrest, T.J.; Freeman, S.L. [Pacific Northwest National Lab., Richland, WA (United States); Popelka, A. [Tysak Engineering, Acton, MA (United States); Shestopal, P.A.; Gagurin, E.V. [Agency for Rational Energy Use and Ecology, Kyiv (Ukraine)

    1997-08-01

    The purpose of this assessment is to characterize the economic energy efficiency potential and investment requirements for space heating and hot water provided by district heat in the stock of state and municipal institutional buildings in the city of Kyiv. The assessment involves three activities. The first is a survey of state and municipal institutions to characterize the stock of institutional buildings. The second is to develop an estimate of the cost-effective efficiency potential. The third is to estimate the investment requirements to acquire the efficiency resource. Institutional buildings are defined as nonresidential buildings owned and occupied by state and municipal organizations. General categories of institutional buildings are education, healthcare, and cultural. The characterization activity provides information about the number of buildings, building floorspace, and consumption of space heating and hot water energy provided by the district system.

  18. Low Impedance Voice Coils for Improved Loudspeaker Efficiency

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Knott, Arnold; Andersen, Michael A. E.

    2015-01-01

    In modern audio systems utilizing switch-mode amplifiers, the total efficiency is dominated by the rather poor efficiency of the loudspeaker. For decades voice coils have been designed so that nominal resistances of 4 to 8 Ohms are obtained, despite modern audio amplifiers using switch-mode techno... responses are estimated. For this woofer it is shown that the sensitivity can be improved by approximately 1 dB, corresponding to a 30% efficiency improvement, just by increasing the fill factor using a low impedance voice coil with rectangular wire.

  19. Re-estimation of Motion and Reconstruction for Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Raket, Lars Lau; Forchhammer, Søren

    2014-01-01

    Transform domain Wyner-Ziv (TDWZ) video coding is an efficient approach to distributed video coding (DVC), which provides low complexity encoding by exploiting the source statistics at the decoder side. The DVC coding efficiency depends mainly on side information and noise modeling. This paper... proposes a motion re-estimation technique based on optical flow to improve side information and noise residual frames by taking partially decoded information into account. To improve noise modeling, a noise residual motion re-estimation technique is proposed. Residual motion compensation with motion...

  20. Deriving a light use efficiency estimation algorithm using in situ hyperspectral and eddy covariance measurements for a maize canopy in Northeast China.

    Science.gov (United States)

    Zhang, Feng; Zhou, Guangsheng

    2017-07-01

    We estimated the light use efficiency (LUE) via vegetation canopy chlorophyll content (CCCcanopy) based on in situ measurements of spectral reflectance, biophysical characteristics, ecosystem CO2 fluxes and micrometeorological factors over a maize canopy in Northeast China. The results showed that among the common chlorophyll-related vegetation indices (VIs), CCCcanopy had the most clearly exponential relationships with the red edge position (REP) (R² = .97, p < .001) and the normalized difference vegetation index (NDVI) (R² = .91, p < .001). In a comparison of the indicating performance of NDVI, the ratio vegetation index (RVI), the wide dynamic range vegetation index (WDRVI), and the 2-band enhanced vegetation index (EVI2) for estimating CCCcanopy using all possible combinations of two separate wavelengths in the range 400-1300 nm, EVI2[1214, 1259] and EVI2[726, 1248] were the better indicators, with R² values of .92 and .90 (p < .001). Remotely monitoring LUE by estimating CCCcanopy derived from field spectrometry data provided accurate prediction of midday gross primary productivity (GPP) in a rainfed maize agro-ecosystem (R² = .95, p < .001). This study provides a new paradigm for monitoring vegetation GPP based on the combination of LUE models with plant physiological properties.
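The two-band indices compared in the study have standard closed forms (the classic NDVI and Jiang et al.'s EVI2); the study evaluates these formulas over many wavelength pairs, e.g. EVI2[726, 1248], rather than only the canonical red/NIR bands. The reflectance values below are hypothetical.

```python
# Standard two-band vegetation indices used for canopy chlorophyll
# estimation. r_nir and r_red are surface reflectances in [0, 1].

def ndvi(r_nir, r_red):
    return (r_nir - r_red) / (r_nir + r_red)

def evi2(r_nir, r_red):
    # Jiang et al.'s two-band EVI
    return 2.5 * (r_nir - r_red) / (r_nir + 2.4 * r_red + 1.0)

r_nir, r_red = 0.45, 0.08   # hypothetical canopy reflectances
v_ndvi = ndvi(r_nir, r_red)
v_evi2 = evi2(r_nir, r_red)
```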

  1. Energy productivity and efficiency of wheat farming in Bangladesh

    International Nuclear Information System (INIS)

    Rahman, Sanzidur; Hasan, M. Kamrul

    2014-01-01

    Wheat is the second most important cereal crop in Bangladesh, and production is highly sensitive to variations in the environment. We estimate the productivity and energy efficiency of wheat farming in Bangladesh by applying a stochastic production frontier approach while accounting for the environmental constraints affecting production. Wheat farming is energy efficient, with a net energy balance of 20,596 MJ per ha and an energy ratio of 2.34. Environmental constraints such as a combination of unsuitable land, weed and pest attack, bad weather, planting delay and infertile soils significantly reduce wheat production and its energy efficiency, accounting for a mean energy-efficiency loss of 3 percentage points. Mean technical efficiency is 88%, indicating that elimination of inefficiencies could increase wheat energy output by 12%. Farmers' education, access to agricultural information and training in wheat production significantly improve efficiency, whereas events such as a delay in planting and first fertilization significantly reduce it. Policy recommendations include development of varieties that are resistant to environmental constraints and suitable for marginal areas; improvement of wheat farming practices; and investments in education and training of farmers as well as dissemination of information. - Highlights: • Bangladesh wheat farming is energy efficient at 20,596 MJ/ha; energy ratio 2.34. • Environmental factors significantly influence productivity and energy efficiency. • Environmental factors must be taken into account when estimating wheat productivity. • Government policies must focus on ways of alleviating environmental factors. • Farmers' education, training and information sources increase technical efficiency.
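The two reported energy indicators are linked by a simple identity: with energy ratio R = output/input and net energy = output − input, the implied input is net/(R − 1). Using the paper's figures as a consistency check:

```python
# Back out the implied energy input and output per hectare from the
# paper's net energy balance (20,596 MJ/ha) and energy ratio (2.34):
#   R = output / input,  net = output - input  =>  input = net / (R - 1)

net_energy = 20596.0   # MJ per ha (from the abstract)
ratio = 2.34           # energy ratio (from the abstract)

energy_input = net_energy / (ratio - 1.0)
energy_output = ratio * energy_input

assert abs(energy_output - energy_input - net_energy) < 1e-6
# Implied input is roughly 15,400 MJ/ha; output roughly 36,000 MJ/ha.
```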

  2. Measuring industrial energy efficiency: Physical volume versus economic value

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, S.L.; Niefer, M.J.; Roop, J.M.

    1996-12-01

    This report examines several different measures of industrial output for use in constructing estimates of industrial energy efficiency and discusses some reasons for differences between the measures. Estimates of volume-based measures of output, as well as three value-based measures of output (value of production, value of shipments, and value added), are evaluated for 15 separate 4-digit industries. Volatility, simple growth rate, and trend growth rate estimates are made for each industry and each measure of output. Correlations are made between the volume- and value-based measures of output. Historical energy use data are collected for 5 of the industries for making energy-intensity estimates. Growth rates in energy use, energy intensity, and correlations between volume- and value-based measures of energy intensity are computed. There is large variability in growth trend estimates both long term and from year to year. While there is a high correlation between volume- and value-based measures of output for a few industries, typically the correlation is low, and this is exacerbated for estimates of energy intensity. Analysis revealed reasons for these low correlations. It appears that substantial work must be done before reliable measures of trends in the energy efficiency of industry can be accurately characterized.
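
The growth-rate measures mentioned above can be sketched as follows: the simple growth rate averages year-over-year changes, while the trend growth rate is obtained by regressing log output on time (illustrative data, not the report's industry series):

```python
import numpy as np

# Sketch: two common growth-rate measures for an annual output series.
# The data below is illustrative, not the report's industry series.

def simple_growth_rate(series):
    """Average year-over-year growth."""
    s = np.asarray(series, dtype=float)
    return float(np.mean(s[1:] / s[:-1] - 1.0))

def trend_growth_rate(series):
    """Trend growth: slope of log(output) regressed on time."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    slope, _ = np.polyfit(t, np.log(s), 1)
    return float(np.exp(slope) - 1.0)

output = [100, 103, 101, 106, 110, 108, 115]
print(simple_growth_rate(output))
print(trend_growth_rate(output))
```

For a noisy series the two measures differ; the trend estimate is less sensitive to single-year spikes, which is one reason year-to-year and trend estimates diverge in the report.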

  3. Estimation of energy potential of agricultural enterprise biomass

    Directory of Open Access Journals (Sweden)

    Lypchuk Vasyl

    2017-01-01

    Full Text Available Bioenergetics (obtaining energy from biomass) is one of the innovative directions in the energy branch of Ukraine. Correct and reliable estimation of biomass potential is essential for its efficient use. The article addresses the estimation of the potential of biomass obtained from by-products of crop production and animal breeding, which can be used for the power supply of agricultural enterprises. The analysis was carried out using common methodological fundamentals, covering the production structure of agricultural enterprises, the structure of land use, the efficiency of crop growing, indicators of the output of main products and by-products, as well as normative (standard) parameters of the power output of energy raw material for the chosen utilization technology. The results of the research prove the high energy potential of by-products of crop production and animal breeding at all of the studied enterprises, which should encourage its practical use.

  4. Modeling international trends in energy efficiency

    International Nuclear Information System (INIS)

    Stern, David I.

    2012-01-01

    I use a stochastic production frontier to model energy efficiency trends in 85 countries over a 37-year period. Differences in energy efficiency across countries are modeled as a stochastic function of explanatory variables and I estimate the model using the cross-section of time-averaged data, so that no structure is imposed on technological change over time. Energy efficiency is measured using a new energy distance function approach. The country using the least energy per unit output, given its mix of outputs and inputs, defines the global production frontier. A country's relative energy efficiency is given by its distance from the frontier—the ratio of its actual energy use to the minimum required energy use, ceteris paribus. Energy efficiency is higher in countries with, inter alia, higher total factor productivity, undervalued currencies, and smaller fossil fuel reserves and it converges over time across countries. Globally, technological change was the most important factor counteracting the energy-use and carbon-emissions increasing effects of economic growth.

  5. Fisher classifier and its probability of error estimation

    Science.gov (United States)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
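
A brute-force sketch of the two-class Fisher discriminant with leave-one-out error estimation by explicit refitting; note that the paper's contribution is computationally efficient closed-form LOO expressions, which this naive version does not reproduce:

```python
import numpy as np

# Sketch: two-class Fisher linear discriminant plus brute-force
# leave-one-out error estimation (the paper derives cheaper closed-form
# LOO expressions; this version refits for every held-out sample).

def fisher_fit(X0, X1):
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter = sum of class scatter matrices.
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw, m1 - m0)          # Fisher direction
    thr = 0.5 * (w @ m0 + w @ m1)             # midpoint threshold
    return w, thr

def loo_error(X0, X1):
    errors, n = 0, len(X0) + len(X1)
    for label, X in ((0, X0), (1, X1)):
        for i in range(len(X)):
            Xa = np.delete(X, i, axis=0)      # hold out one sample
            w, thr = fisher_fit(Xa, X1) if label == 0 else fisher_fit(X0, Xa)
            pred = int(w @ X[i] > thr)
            errors += pred != label
    return errors / n

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 1.0, (50, 2))
X1 = rng.normal([5, 5], 1.0, (50, 2))
print(loo_error(X0, X1))   # well-separated classes give a low error rate
```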

  6. Comparison of experimental and theoretical efficiency of HPGe X-ray detector

    International Nuclear Information System (INIS)

    Mohanty, B.P.; Balouria, P.; Garg, M.L.; Nandi, T.K.; Mittal, V.K.; Govil, I.M.

    2008-01-01

    Low energy high-purity germanium (HPGe) detectors are being increasingly used for the quantitative estimation of elements using X-ray spectrometric techniques. The software used for quantitative estimation normally evaluates a model-based detector efficiency from manufacturer-supplied detector physical parameters. The present work shows that the manufacturer-supplied parameters of low energy HPGe detectors need to be verified by comparing the model-based efficiency with experimental values. This is particularly crucial for detectors with ion-implanted p-type contacts.

  7. Efficient flapping flight of pterosaurs

    Science.gov (United States)

    Strang, Karl Axel

    In the late eighteenth century, humans discovered the first pterosaur fossil remains and have been fascinated by their existence ever since. Pterosaurs exploited their membrane wings in a sophisticated manner for flight control and propulsion, and were likely the most efficient and effective flyers ever to inhabit our planet. The flapping gait is a complex combination of motions that sustains and propels an animal in the air. Because pterosaurs were so large, with wingspans up to eleven meters, if they could have sustained flapping flight, they would have had to achieve high propulsive efficiencies. Identifying the wing motions that contribute the most to propulsive efficiency is key to understanding pterosaur flight, and therefore to shedding light on flapping flight in general and the design of efficient ornithopters. This study is based on published results for a very well-preserved specimen of Coloborhynchus robustus, for which the joints are well known and thoroughly described in the literature. Simplifying assumptions are made to estimate the characteristics that cannot be inferred directly from the fossil remains. For a given animal, maximizing efficiency is equivalent to minimizing power at a given thrust and speed. We therefore aim at finding the flapping gait, that is, the joint motions, that minimizes the required flapping power. The power is computed from the aerodynamic forces created during a given wing motion. We develop an unsteady three-dimensional code based on the vortex-lattice method, which correlates well with published results for unsteady motions of rectangular wings. In the aerodynamic model, the rigid pterosaur wing is defined by the position of the bones. In the aeroelastic model, we add the flexibility of the bones and of the wing membrane. The nonlinear structural behavior of the membrane is reduced to a linear modal decomposition, assuming small deflections about the reference wing geometry. The reference wing geometry is computed for

  8. Nitrogen concentration estimation with hyperspectral LiDAR

    Directory of Open Access Journals (Sweden)

    O. Nevalainen

    2013-10-01

    Full Text Available Agricultural lands have a strong impact on global carbon dynamics and nitrogen availability. Monitoring changes in agricultural lands requires more efficient and accurate methods. The first prototype of a full waveform hyperspectral Light Detection and Ranging (LiDAR) instrument has been developed at the Finnish Geodetic Institute (FGI). The instrument efficiently combines the benefits of passive and active remote sensing sensors: it is able to produce 3D point clouds with spectral information included for every point, which offers great potential in the field of remote sensing of the environment. This study investigates the performance of the hyperspectral LiDAR instrument in nitrogen estimation. The investigation was conducted by finding vegetation indices sensitive to nitrogen concentration using hyperspectral LiDAR data and validating their performance in nitrogen estimation. The nitrogen estimation was performed by calculating 28 published vegetation indices for ten oat samples grown in different fertilization conditions. Reference data were acquired by laboratory nitrogen concentration analysis. The performance of the indices in nitrogen estimation was determined by linear regression and leave-one-out cross-validation. The results indicate that the hyperspectral LiDAR instrument holds a good capability to estimate plant biochemical parameters such as nitrogen concentration. The instrument holds much potential in various environmental applications and provides a significant improvement to remote sensing of the environment.
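
The validation step (linear regression plus leave-one-out cross-validation) can be sketched as follows, using synthetic index/nitrogen pairs in place of the oat data:

```python
import numpy as np

# Sketch: scoring a vegetation index against measured nitrogen with
# linear regression and leave-one-out cross-validation, as the study
# does for 28 indices over ten samples. The data here is synthetic.

def loo_cv_rmse(x, y):
    preds = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # hold out sample i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        preds.append(slope * x[i] + intercept)
    return float(np.sqrt(np.mean((np.array(preds) - y) ** 2)))

rng = np.random.default_rng(1)
index = rng.uniform(0.2, 0.9, 10)                       # index values
nitrogen = 3.0 * index + 0.5 + rng.normal(0, 0.05, 10)  # synthetic truth
print(loo_cv_rmse(index, nitrogen))   # small RMSE -> good predictor
```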

  9. Direction-of-Arrival Estimation with Coarray ESPRIT for Coprime Array.

    Science.gov (United States)

    Zhou, Chengwei; Zhou, Jinfang

    2017-08-03

    A coprime array is capable of achieving more degrees-of-freedom for direction-of-arrival (DOA) estimation than a uniform linear array when utilizing the same number of sensors. However, existing algorithms exploiting coprime array usually adopt predefined spatial sampling grids for optimization problem design or include spectrum peak search process for DOA estimation, resulting in the contradiction between estimation performance and computational complexity. To address this problem, we introduce the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) to the coprime coarray domain, and propose a novel coarray ESPRIT-based DOA estimation algorithm to efficiently retrieve the off-grid DOAs. Specifically, the coprime coarray statistics are derived according to the received signals from a coprime array to ensure the degrees-of-freedom (DOF) superiority, where a pair of shift invariant uniform linear subarrays is extracted. The rotational invariance of the signal subspaces corresponding to the underlying subarrays is then investigated based on the coprime coarray covariance matrix, and the incorporation of ESPRIT in the coarray domain makes it feasible to formulate the closed-form solution for DOA estimation. Theoretical analyses and simulation results verify the efficiency and the effectiveness of the proposed DOA estimation algorithm.
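
The rotational-invariance step at the core of ESPRIT can be illustrated on an ordinary half-wavelength uniform linear array (the paper applies it to the coprime coarray covariance instead; this sketch uses standard ESPRIT only, with simulated data):

```python
import numpy as np

# Sketch: standard ESPRIT DOA estimation on a half-wavelength ULA.
# The paper's algorithm applies the same rotational-invariance idea to a
# pair of shift-invariant subarrays extracted from the coprime coarray.

def esprit_doa(X, n_sources):
    """X: sensors x snapshots complex data; returns sorted DOAs (degrees)."""
    R = X @ X.conj().T / X.shape[1]             # sample covariance
    _, vecs = np.linalg.eigh(R)                 # ascending eigenvalues
    Es = vecs[:, -n_sources:]                   # signal subspace
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]      # rotational invariance
    phases = np.angle(np.linalg.eigvals(Phi))   # e^{j*pi*sin(theta)}
    return np.sort(np.degrees(np.arcsin(phases / np.pi)))

# Simulate two narrowband sources at -20 and 35 degrees, 8 sensors.
rng = np.random.default_rng(0)
m, n_snap, doas = 8, 2000, np.array([-20.0, 35.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(np.radians(doas))))
S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = A @ S + noise
print(esprit_doa(X, 2))   # close to [-20, 35]
```

Because the DOAs come from the eigenvalues of Phi in closed form, no spectrum peak search over a sampling grid is needed, which is the efficiency argument the abstract makes.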

  10. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. At the ESReDA 20th seminar, a new nonparametric method was proposed; its major point is how to use censored data efficiently. Generally there are three kinds of nonparametric approaches to estimating a reliability function: the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have uniquely existing maximum likelihood estimators. So the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator; the difference is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not
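
For reference, the classical product-limit (Kaplan-Meier) estimator that the proposed method modifies can be sketched as follows (simplified: one observation per time point, no tie handling):

```python
# Sketch: classical product-limit (Kaplan-Meier) survival estimator.
# data: list of (time, event) pairs, event = 1 for failure, 0 for censored.
# Simplified to distinct observation times (no tie handling).

def product_limit(data):
    """Return [(t, S(t))] at each observed failure time."""
    surv, out = 1.0, []
    at_risk = len(data)
    for t, event in sorted(data):
        if event:                        # failure: multiply survival factor
            surv *= (at_risk - 1) / at_risk
            out.append((t, surv))
        at_risk -= 1                     # failed or censored: leaves risk set
    return out

sample = [(2, 1), (3, 0), (5, 1), (7, 1), (8, 0)]
print(product_limit(sample))   # [(2, 0.8), (5, 0.533...), (7, 0.266...)]
```

Censored observations shrink the risk set without contributing a survival factor, which is exactly the censoring behavior the new method redistributes differently.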

  11. A High Efficiency PSOFC/ATS-Gas Turbine Power System

    Energy Technology Data Exchange (ETDEWEB)

    W.L. Lundberg; G.A. Israelson; M.D. Moeckel; S.E. Veyo; R.A. Holmes; P.R. Zafred; J.E. King; R.E. Kothmann

    2001-02-01

    A study is described in which the conceptual design of a hybrid power system integrating a pressurized Siemens Westinghouse solid oxide fuel cell generator and the Mercury™ 50 gas turbine was developed. The Mercury™ 50 was designed by Solar Turbines as part of the U.S. Department of Energy Advanced Turbine Systems program. The focus of the study was to develop a hybrid power system concept that would exhibit an attractively low cost of electricity (COE). The inherently high efficiency of the hybrid cycle contributes directly to achieving this objective, and by employing the efficient, power-intensive Mercury™ 50, with its relatively low installed cost, the higher-cost SOFC generator can be optimally sized so that the minimum-COE objective is achieved. The system cycle is described, major system components are specified, the system installed cost and COE are estimated, and the physical arrangement of the major system components is discussed. Estimates of system power output, efficiency, and emissions at the system design point are also presented. In addition, two bottoming cycle options are described, and estimates of their effects on overall system performance, cost, and COE are provided.

  12. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control.

  13. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.
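
The core device, an extended Kalman filter run on a state augmented with the unknown parameter, can be sketched on a toy discrete-time scalar model (this is our illustration of the general technique, not the paper's continuous-discrete formulation or its software):

```python
import numpy as np

# Sketch: joint state/parameter estimation with an extended Kalman filter
# on a state augmented with the unknown parameter: the basic device behind
# EKF-based grey-box estimation. Toy discrete-time scalar model only.

# True system: x[k+1] = a*x[k] + 0.5 + w,  y[k] = x[k] + v,  with a = 0.8.
rng = np.random.default_rng(4)
a_true, n_steps = 0.8, 400
x, ys = 1.0, []
for _ in range(n_steps):
    x = a_true * x + 0.5 + rng.normal(0, 0.3)
    ys.append(x + rng.normal(0, 0.1))

z = np.array([0.0, 0.0])               # augmented state [x, a]
P = np.diag([1.0, 1.0])
Q = np.diag([0.3 ** 2, 1e-6])          # tiny random walk keeps a adaptable
R = 0.1 ** 2
H = np.array([1.0, 0.0])               # we observe x only
for y in ys:
    F = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of the dynamics
    z = np.array([z[1] * z[0] + 0.5, z[1]])    # time update (predict)
    P = F @ P @ F.T + Q
    S = P[0, 0] + R                            # innovation variance
    K = P[:, 0] / S                            # Kalman gain (H picks x)
    z = z + K * (y - z[0])                     # measurement update
    P = (np.eye(2) - np.outer(K, H)) @ P
print(z[1])   # estimate of a, expected near 0.8
```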

  14. Technical efficiency of selected hospitals in Eastern Ethiopia.

    Science.gov (United States)

    Ali, Murad; Debela, Megersa; Bamud, Tewfik

    2017-12-01

    This study examines the relative technical efficiency of 12 hospitals in Eastern Ethiopia. Using six rounds of panel data for the period between 2007/08 and 2012/13, it examines the technical efficiency, total factor productivity, and determinants of the technical inefficiency of the hospitals. Data envelopment analysis (DEA) and the DEA-based Malmquist productivity index were used to estimate the relative technical efficiency, scale efficiency, and total factor productivity index of the hospitals, and a Tobit model was used to examine the determinants of technical inefficiency. The DEA variable returns to scale (VRS) estimates indicated that 6 (50%), 5 (42%), 3 (25%), 3 (25%), 4 (33%), and 3 (25%) of the hospitals were technically inefficient, while 9 (75%), 9 (75%), 7 (58%), 7 (58%), 7 (58%) and 8 (67%) were scale inefficient in the years between 2007/08 and 2012/13, respectively. On average, the Malmquist total factor productivity (MTFP) of the hospitals decreased by 3.6% over the panel period. The Tobit model shows that the teaching hospital is less efficient than the other hospitals. It further shows that the medical doctor to total staff ratio, the proportion of outpatient visits to inpatient days, and the proportion of inpatients treated per medical doctor were negatively related to the technical inefficiency of hospitals. Hence, policy interventions that help utilize excess hospital capacity, increase the doctor to other staff ratio, and standardize the number of inpatients treated per doctor would contribute to improving the technical efficiency of hospitals.
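
In the special case of one input and one output under constant returns to scale, the DEA efficiency score reduces to a ratio comparison, which makes the idea easy to sketch (hypothetical hospital data; the study's multi-input, multi-output model requires solving a linear program per hospital):

```python
# Sketch: input-oriented DEA efficiency under constant returns to scale
# for the single-input, single-output case, where the LP collapses to
# comparing each unit's output/input ratio with the best observed ratio.
# Hospital figures below are hypothetical, not the study's data.

def dea_crs_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

beds = [100, 150, 120]            # hypothetical inputs
visits = [8000, 9000, 9600]       # hypothetical outputs
print(dea_crs_efficiency(beds, visits))   # [1.0, 0.75, 1.0]
```

A score of 1.0 means the unit lies on the efficient frontier; 0.75 means the same output could, by the frontier's standard, be produced with 75% of the input.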

  15. Propulsive efficiency and non- expert swimmers performance

    Directory of Open Access Journals (Sweden)

    Tiago Barbosa

    2009-12-01

    Full Text Available Propulsive efficiency is one of the most interesting issues for competitive swimming researchers, as it presents significant relationships with a swimmer's biophysical behavior and performance. Although propulsive efficiency has been studied extensively in elite swimmers, there is no research on this issue in young and non-expert swimmers. Thus, the aims of this study were to: (i) estimate the propulsive efficiency of non-expert swimmers; (ii) identify biomechanical and anthropometrical parameters associated with propulsive efficiency; (iii) identify the association between propulsive efficiency and swim performance. Twenty-eight non-expert swimmers participated in this study. Propulsive efficiency, biomechanical and anthropometrical parameters, and swim performance were assessed. The propulsive efficiency of non-expert swimmers is lower than values reported in the literature for swimmers of higher competitive levels, and there are no significant differences between boys and girls. Several biomechanical and anthropometrical parameters, as well as swim performance, were associated with propulsive efficiency.

  16. Estimating Price Elasticity using Market-Level Appliance Data

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-04

    This report provides an update to and expansion upon our 2008 LBNL report “An Analysis of the Price Elasticity of Demand for Appliances,” in which we estimated an average relative price elasticity of -0.34 for major household appliances (Dale and Fujita 2008). Consumer responsiveness to price change is a key component of energy efficiency policy analysis; these policies influence consumer purchases through price both explicitly and implicitly. However, few studies address appliance demand elasticity in the U.S. market, and public data sources are generally insufficient for rigorous estimation. Therefore, analysts have relied on a small set of outdated papers focused on limited appliance types, assuming that long-term elasticities estimated for other durables (e.g., vehicles) decades ago are applicable to current and future appliance purchasing behavior. We aim to partially rectify this problem in the context of appliance efficiency standards by revisiting our previous analysis, utilizing data released over the last ten years and identifying additional estimates of durable goods price elasticities in the literature. Reviewing the literature, we find the following ranges of market-level price elasticities: -0.14 to -0.42 for appliances; -0.30 to -1.28 for automobiles; -0.47 to -2.55 for other durable goods. Brand price elasticities are substantially higher for these product groups, with most estimates -2.0 or more elastic. Using market-level shipments, sales value, and efficiency level data for 1989-2009, we run various iterations of a log-log regression model, arriving at a recommended range of short-run appliance price elasticity between -0.4 and -0.5, with a default value of -0.45.
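
The estimation idea, elasticity as the slope of a log-log regression of quantity on price, can be sketched with synthetic data generated at the report's default elasticity of -0.45:

```python
import numpy as np

# Sketch: price elasticity as the slope of a log-log regression of
# quantity on price. Synthetic data generated with a true elasticity
# of -0.45 (the report's recommended default for appliances).

def log_log_elasticity(price, quantity):
    slope, _ = np.polyfit(np.log(price), np.log(quantity), 1)
    return float(slope)

rng = np.random.default_rng(2)
price = rng.uniform(200, 800, 200)
quantity = 1e4 * price ** (-0.45) * np.exp(rng.normal(0, 0.02, 200))
print(log_log_elasticity(price, quantity))   # close to -0.45
```

In practice the report's regressions also control for efficiency levels and time effects; this single-regressor version only shows why the log-log slope is read directly as an elasticity.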

  17. ECONOMIC AND ACCOUNTING INFORMATION AND STOCK MARKET EFFICIENCY

    Directory of Open Access Journals (Sweden)

    Simona – Florina SĂLIȘTEANU

    2014-05-01

    Full Text Available The purpose of this paper is to explore and analyse the relations between financial accounting information and stock market efficiency. As we know, accounting contributes to the efficiency of the stock market by producing primordial information for investors. On the other hand, an efficient market facilitates the role of accounting by providing a reliable estimate of the value of many assets that need to be evaluated. This article examines the importance of financial accounting information for the efficiency of the stock market, and also analyses whether and how the structure, characteristics and publication of that information impact prices and transaction volumes.

  18. A study on technical efficiency of a DMU (review of literature)

    Science.gov (United States)

    Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Sankar, J. Ravi

    2017-11-01

    In this research paper the concept of the technical efficiency (due to Farrell) [1] of a decision making unit (DMU) is introduced and measures of technical and cost efficiency are derived. Timmer's [2] deterministic approach to estimating the Cobb-Douglas production frontier is proposed, and the extension of Timmer's [2] method to any production frontier that is linear in its parameters is presented. The estimation of the parameters of the Cobb-Douglas production frontier by a linear programming approach is discussed. Mark et al. [3] proposed a non-parametric method to assess efficiency. Nuti et al. [4] investigated the relationships among technical efficiency scores, weighted per capita cost and overall performance. Gahe Zing Samuel Yank et al. [5] used data envelopment analysis to assess technical efficiency in banking sectors.
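
A close, simpler relative of Timmer's LP approach is corrected OLS (COLS): fit the log production function by OLS, then shift the intercept so the frontier envelops the data from above, and read technical efficiency as actual over frontier output. A sketch under that substitution (synthetic data; COLS is our stand-in for illustration, not Timmer's exact LP formulation):

```python
import numpy as np

# Sketch: corrected-OLS (COLS) deterministic Cobb-Douglas frontier, a
# simple relative of Timmer's LP approach. Fit log(y) on log-inputs by
# OLS, lift the intercept so no point lies above the frontier, then take
# technical efficiency = actual output / frontier output.

def cols_frontier_efficiency(log_x, log_y):
    X = np.column_stack([np.ones(len(log_x)), log_x])
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    resid = log_y - X @ beta
    beta[0] += resid.max()              # lift intercept onto the frontier
    return np.exp(log_y - X @ beta)     # efficiency scores in (0, 1]

rng = np.random.default_rng(3)
log_x = rng.uniform(0, 2, 40)
log_y = 1.0 + 0.6 * log_x - rng.exponential(0.2, 40)  # one-sided inefficiency
eff = cols_frontier_efficiency(log_x, log_y)
print(eff.max())   # the best unit sits exactly on the frontier
```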

  19. Efficiency of microfinance institutions in sub – Saharan Africa: a ...

    African Journals Online (AJOL)

    This study investigates the cost efficiency of MFIs operating in 10 Sub-Saharan Africa (SSA) countries over the period 2003-2013 and the factors that drive efficiency. The authors estimated a Cobb-Douglas stochastic cost frontier model with a truncated normal distribution and time-variant inefficiency.

  20. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper to a large class of quantum phase transitions.
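
The "ultimate bounds of the achievable precision" referred to above are conventionally expressed by the quantum Cramér-Rao bound, whose definition also introduces the symmetric logarithmic derivative (SLD); in standard notation, for ν independent repetitions of the probe:

```latex
% Quantum Cramér–Rao bound and the symmetric logarithmic derivative (SLD),
% in standard notation, for \nu independent repetitions of the probe.
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{\nu\, F_Q(\theta)},
\qquad
F_Q(\theta) = \operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right],
\qquad
\partial_\theta \rho_\theta = \tfrac{1}{2}\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right),
```

where ρ_θ is the probe state, F_Q the quantum Fisher information, and L_θ the SLD whose factorization properties the abstract exploits.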

  1. An Efficient Power Estimation Methodology for Complex RISC Processor-based Platforms

    OpenAIRE

    Rethinagiri , Santhosh Kumar; Ben Atitallah , Rabie; Dekeyser , Jean-Luc; Niar , Smail; Senn , Eric

    2012-01-01

    International audience; In this contribution, we propose an efficient power estimation methodology for complex RISC processor-based platforms. In this methodology, the Functional Level Power Analysis (FLPA) is used to set up generic power models for the different parts of the system. Then, a simulation framework based on a virtual platform is developed to evaluate accurately the activities used in the related power models. The combination of the two parts above leads to a heterogeneou...

  2. Innovation and technical efficiency in Malaysian family manufacturing industries

    OpenAIRE

    Susila Munisamy; Edward Wong Sek Khin; Chia Zi Fon

    2015-01-01

    In this study, the technical efficiency of each industry in the Malaysian manufacturing sector is estimated using Data Envelopment Analysis (DEA). In order to pursue a balance of innovation between long-term and short-term performance strategy, we integrate the Balanced Scorecard (BSC) approach with DEA. Furthermore, this paper looks at the determinants of efficiency using a Tobit regression model. In measuring the level of firms' efficiency and innovation, the wood and wood b...

  3. Fractal stock markets: International evidence of dynamical (in)efficiency

    Science.gov (United States)

    Bianchi, Sergio; Frezza, Massimiliano

    2017-07-01

    The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with empirical content have turned out to be somewhat inconclusive and misleading. We propose a topology-based approach to quantify the informational efficiency of a financial time series. The idea is to measure efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
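
The regularity idea can be sketched with a crude Hurst-type scaling estimator: for a martingale-like random walk, the standard deviation of lag-s increments grows like s^(1/2), so the log-log slope should be near 1/2 (this is our simplified global estimator, not the authors' pointwise one):

```python
import numpy as np

# Sketch: a crude global Hurst-type regularity estimate from the scaling
# of increment standard deviations. For a martingale-like random walk the
# exponent should be near 1/2; persistent or anti-persistent series
# deviate from it. This is a simplified global estimator, not the
# authors' pointwise regularity measure.

def regularity_exponent(path, scales=(1, 2, 4, 8, 16)):
    sds = [np.std(path[s:] - path[:-s]) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(sds), 1)
    return float(slope)

rng = np.random.default_rng(5)
walk = np.cumsum(rng.normal(0, 1, 20000))   # martingale-like price path
print(regularity_exponent(walk))   # close to 0.5 for an efficient series
```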

  4. The energy efficiency of lead selfsputtering

    DEFF Research Database (Denmark)

    Andersen, Hans Henrik

    1968-01-01

    The sputtering efficiency (i.e. ratio between sputtered energy and impinging ion energy) has been measured for 30–75‐keV lead ions impinging on polycrystalline lead. The results are in good agreement with recent theoretical estimates. © 1968 The American Institute of Physics...

  5. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  6. Fort Lewis natural gas and fuel oil energy baseline and efficiency resource assessment

    International Nuclear Information System (INIS)

    Brodrick, J.R.; Daellenbach, K.K.; Parker, G.B.; Richman, E.E.; Secrest, T.J.; Shankle, S.A.

    1993-02-01

    The mission of the US Department of Energy (DOE) Federal Energy Management Program (FEMP) is to lead the improvement of energy efficiency and fuel flexibility within the federal sector. Through the Pacific Northwest Laboratory (PNL), FEMP is developing a fuel-neutral approach for identifying, evaluating, and acquiring all cost-effective energy projects at federal installations; this procedure is entitled the Federal Energy Decision Screening (FEDS) system. Through a cooperative program between FEMP and the Army Forces Command (FORSCOM) for providing technical assistance to FORSCOM installations, PNL has been working with the Fort Lewis Army installation to develop the FEDS procedure. The natural gas and fuel oil assessment contained in this report was preceded by an assessment of electric energy usage that was used to implement a cofunded program between Fort Lewis and Tacoma Public Utilities to improve the efficiency of the Fort's electric-energy-using systems. This report extends the assessment procedure to the systems using natural gas and fuel oil to provide a baseline of consumption and an estimate of the energy-efficiency potential that exists for these two fuel types at Fort Lewis. The baseline is essential to segment the end uses that are targets for broad-based efficiency improvement programs. The estimated fossil-fuel efficiency resources are estimates of the available quantities of conservation for natural gas, fuel oils #2 and #6, and fuel-switching opportunities by level of cost-effectiveness. The intent of the baseline and efficiency resource estimates is to identify the major efficiency resource opportunities and not to identify all possible opportunities; however, areas of additional opportunity are noted to encourage further effort.

  7. Academic Performance and Burnout: An Efficient Frontier Analysis of Resource Use Efficiency among Employed University Students

    Science.gov (United States)

    Galbraith, Craig S.; Merrill, Gregory B.

    2015-01-01

    We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…

  8. Econometric analysis of economic and environmental efficiency of Dutch dairy farms

    NARCIS (Netherlands)

    Reinhard, S.

    1999-01-01

    The Dutch government aims for competitive and sustainable farms that use marketable inputs efficiently as well as apply environmentally detrimental variables efficiently in the production process. The objective of this research is to define, to estimate and to evaluate environmental

  9. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  10. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  11. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    Full Text Available This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ the true fixed effects model, in which unobserved bank heterogeneity is integrated into the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that the cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.

  12. A Hybrid One-Way ANOVA Approach for the Robust and Efficient Estimation of Differential Gene Expression with Multiple Patterns.

    Directory of Open Access Journals (Sweden)

    Mohammad Manir Hossain Mollah

    Full Text Available Identifying genes that are differentially expressed (DE between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA, are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
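The β-weight idea described above can be sketched as follows. The median/MAD location and scale estimates here are a simplification of the paper's iterative minimum β-divergence estimates, and the data are invented:

```python
import math

def beta_weights(xs, beta=0.2):
    """Minimum beta-divergence style weights in [0, 1].

    Typical observations get weights near 1; outliers get weights
    near 0. Location and scale are estimated robustly by the median
    and MAD here (a simplification of the iterative estimates used
    in the paper).
    """
    n = len(xs)
    med = sorted(xs)[n // 2]
    mad = sorted(abs(x - med) for x in xs)[n // 2] or 1.0
    sigma2 = (1.4826 * mad) ** 2
    return [math.exp(-beta * (x - med) ** 2 / (2 * sigma2)) for x in xs]

ws = beta_weights([9.8, 10.1, 10.0, 9.9, 25.0])  # last value is an outlier
print(ws)
```

A cut-off on these weights, as in the paper, then flags the low-weight expression as an outlier.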

  13. A Methodology for the Estimation of the Wind Generator Economic Efficiency

    Science.gov (United States)

    Zaleskis, G.

    2017-12-01

    Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.

  14. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    The global stereo matching algorithms are of high accuracy for the estimation of disparity maps, but the time consumed in the optimization process remains a major obstacle, especially for image pairs with high resolution and large baseline settings. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo vision system is investigated to reveal a relationship between the disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly obtain a predicted disparity map in a long-baseline setting from the map estimated in a short-baseline one. Then, the drastically reduced disparity ranges at each pixel under a long baseline setting can be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into the graph cuts with expansion moves to estimate the precise disparity map, which can greatly reduce the cost of computing without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
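The baseline-scaling relation underlying the scheme can be sketched directly: in rectified parallel stereo, disparity is d = fB/Z, so it scales linearly with baseline B. The margin parameter and all numbers below are illustrative assumptions:

```python
def predict_disparity_range(d_short, b_short, b_long, margin=2.0):
    """Predict per-pixel disparity search ranges for a long-baseline
    pair from a disparity map computed at a short baseline.

    In rectified parallel stereo, d = f * B / Z, so disparities scale
    linearly with baseline: d_long = d_short * (b_long / b_short).
    `margin` (pixels) is an assumed safety band around the prediction.
    """
    scale = b_long / b_short
    ranges = []
    for d in d_short:
        d_pred = d * scale
        ranges.append((max(0.0, d_pred - margin), d_pred + margin))
    return ranges

# 4-pixel toy disparity map at a 5 cm baseline, predicted for 20 cm
ranges = predict_disparity_range([2.0, 3.5, 0.0, 8.0], 5.0, 20.0)
print(ranges)
```

Restricting the graph-cut label set to these narrow per-pixel ranges is what yields the claimed speed-up.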

  15. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by a simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation allows one to perform simultaneous estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propose here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the method we propose is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  16. The impact of interface bonding efficiency on high-burnup spent nuclear fuel dynamic performance

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Hao, E-mail: jiangh@ornl.gov; Wang, Jy-An John; Wang, Hong

    2016-12-01

    Highlights: • To investigate the impact of interfacial bonding efficiency at pellet-pellet and pellet-clad interfaces of high-burnup (HBU) spent nuclear fuel (SNF) on its dynamic performance. • Flexural rigidity, EI = M/κ, estimated from FEA results was benchmarked with SNF dynamic experimental results and used to evaluate interface bonding efficiency. • Interface bonding efficiency can significantly dictate the SNF system rigidity and the associated dynamic performance. • With consideration of interface bonding efficiency and fuel cracking, HBU SNF fuel properties were estimated from SNF static and dynamic experimental data. - Abstract: Finite element analysis (FEA) was used to investigate the impact of interfacial bonding efficiency at pellet-pellet and pellet-clad interfaces of high-burnup (HBU) spent nuclear fuel (SNF) on system dynamic performance. Bending moments M were applied to the FEA model to evaluate the system responses. From the bending curvature κ, the flexural rigidity EI can be estimated as EI = M/κ. The FEA simulation results were benchmarked with experimental results from cyclic integrated reversal bending fatigue tests (CIRFT) of HBR fuel rods. The consequence of interface debonding between fuel pellets and cladding is a redistribution of the loads carried by the fuel pellets to the clad, which results in a reduction in composite rod system flexural rigidity. Therefore, the interface bonding efficiency at the pellet-pellet and pellet-clad interfaces can significantly dictate the SNF system dynamic performance. With the consideration of interface bonding efficiency, the HBU SNF fuel properties were estimated from CIRFT test data.
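The rigidity relation EI = M/κ used above is a one-line computation; the numbers below are illustrative, not CIRFT data:

```python
def flexural_rigidity(moment, curvature):
    """Flexural rigidity EI = M / kappa (units: N*m / (1/m) = N*m^2)."""
    return moment / curvature

# Illustrative numbers only: a 50 N*m bending moment producing a
# curvature of 0.4 1/m implies EI = 125 N*m^2.
ei_bonded = flexural_rigidity(50.0, 0.4)
# Debonding shifts load from the pellets to the clad and softens the
# rod: the same moment produces a larger curvature, hence a lower EI.
ei_debonded = flexural_rigidity(50.0, 0.625)
print(ei_bonded, ei_debonded)
```

Comparing EI values inferred from simulation and from bending tests is how the abstract's benchmarking works.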

  17. Efficiency of free-energy calculations of spin lattices by spectral quantum algorithms

    International Nuclear Information System (INIS)

    Master, Cyrus P.; Yamaguchi, Fumiko; Yamamoto, Yoshihisa

    2003-01-01

    Ensemble quantum algorithms are well suited to calculate estimates of the energy spectra for spin-lattice systems. Based on the phase estimation algorithm, these algorithms efficiently estimate discrete Fourier coefficients of the density of states. Their efficiency in calculating the free energy per spin of general spin lattices to bounded error is examined. We find that the number of Fourier components required to bound the error in the free energy due to the broadening of the density of states scales polynomially with the number of spins in the lattice. However, the precision with which the Fourier components must be calculated is found to be an exponential function of the system size

  18. Efficient Simulation of the Outage Probability of Multihop Systems

    KAUST Repository

    Ben Issaid, Chaouki; Alouini, Mohamed-Slim; Tempone, Raul

    2017-01-01

    In this paper, we present an efficient importance sampling estimator for the evaluation of the outage probability of multihop systems with channel-state-information-assisted amplify-and-forward relaying. The proposed estimator is endowed with the bounded relative error property. Simulation results show a significant reduction in the number of simulation runs compared to naive Monte Carlo.
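A minimal sketch of the importance-sampling idea, on a deliberately simplified model: the outage of a single hop with exponentially distributed SNR, rather than the paper's multihop system. The proposal distribution and threshold are illustrative choices:

```python
import math, random

def outage_is(t, n, rng):
    """Importance sampling estimate of P(X < t), X ~ Exp(1).

    Samples are drawn from Exp(rate = 1/t), which concentrates mass
    in the rare region [0, t); each accepted sample is reweighted by
    the likelihood ratio f(y)/g(y) to keep the estimator unbiased.
    """
    lam = 1.0 / t
    total = 0.0
    for _ in range(n):
        y = rng.expovariate(lam)
        if y < t:
            total += math.exp(-y) / (lam * math.exp(-lam * y))
    return total / n

rng = random.Random(0)
t = 1e-3                     # outage threshold (a rare event)
exact = 1.0 - math.exp(-t)   # ~9.995e-4, known in this toy model
est = outage_is(t, 2000, rng)
print(exact, est)
```

A naive Monte Carlo run would need on the order of millions of samples to see this event even a few times; the reweighted estimator gets a usable estimate from a few thousand.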

  19. Efficient Simulation of the Outage Probability of Multihop Systems

    KAUST Repository

    Ben Issaid, Chaouki

    2017-10-23

    In this paper, we present an efficient importance sampling estimator for the evaluation of the outage probability of multihop systems with channel-state-information-assisted amplify-and-forward relaying. The proposed estimator is endowed with the bounded relative error property. Simulation results show a significant reduction in the number of simulation runs compared to naive Monte Carlo.

  20. Efficient renewable energy scenarios study for Victoria

    International Nuclear Information System (INIS)

    Armstrong, Graham

    1991-01-01

    This study examines the possible evolution of Victorian energy markets over the 1988-2030 period from technical, economic and environmental perspectives. The focus is on the technical and economic potential over the study period for renewable energy and energy efficiency to increase their share of energy markets, through their economic competitiveness with the non-renewables of oil, gas and fossil-fuelled electricity. The study identifies a range of energy options that have a lower impact on carbon dioxide emissions than current projections for the Victorian energy sector, together with the savings in energy, dollars and carbon dioxide emissions. In addition, the macroeconomic implications of the energy paths are estimated. Specifically it examines a scenario (R-efficient renewable) where energy efficiency and renewable energy sources realise their estimated economic potential to displace non-renewable energy over the 1988-2030 period. In addition, a scenario (T-Toronto) is examined where energy markets are pushed somewhat harder, but again on an economic basis, so that what is called the Toronto target of reducing 1988 carbon dioxide (CO2) emissions by 20 per cent by 2005 is attained. It is concluded that over the next forty years there is substantial economic potential in Victoria for significant gains from energy efficiency in all sectors - residential, commercial, industrial and transport - and contributions from renewable energy both in those sectors and in electricity generation. 7 figs., 5 tabs

  1. Efficiency audit for IT-systems of state management strategic objects

    Directory of Open Access Journals (Sweden)

    Abasov V.A.

    2017-06-01

    Full Text Available Hackers’ attacks at the end of 2016 and the beginning of 2017 on governmental information and telecommunication systems in Ukraine, including those of the Ministry of Finance and the State Treasury, caused vast delays in budgetary payments. They exposed the «sensitivity» and insecurity of governmental institutions to cyber-attacks, owing to the absence of three basic security controls: technical restrictions on downloading programs, limited rights for local administrators, and regular software updates. International experience shows that these security measures for governmental IT-systems should be an audit subject of state financial control authorities. The foundations of information technology audit were laid in the studies of І.К. Drozd, S.V. Іvachnenkova, М.М. Benko, Ju.А. Кuxminskiy, and А.V. Мamyshev. At the same time, the issue of state audit of IT-systems has been examined only partially in theoretical research, because there is no practice of such audit in Ukraine. It is therefore necessary to study the international practice of efficiency audit for IT-systems and world standards for establishments of the state management sector. The research allowed the author to propose a methodology of efficiency audit for IT-systems of state institutions; the methodology provides for planning and conducting the main procedures on the basis of risk estimation of security threats to information systems. The author determines the peculiarities of security risk management for IT-systems by estimating the risks of the security components of IT-systems while conducting an efficiency audit. The author sets out a method of descending step-by-step detailing for the audit estimation of IT-system risk management efficiency at strategic enterprises belonging to the state management sector, by means of adapting ISSAI standard norms. The paper proposes three possible options of management solution concerning IT-system risk management efficiency on the base of information about the

  2. Unsupervised Learning for Efficient Texture Estimation From Limited Discrete Orientation Data

    Science.gov (United States)

    Niezgoda, Stephen R.; Glover, Jared

    2013-11-01

    The estimation of orientation distribution functions (ODFs) from discrete orientation data, as produced by electron backscatter diffraction or crystal plasticity micromechanical simulations, is typically achieved via techniques such as the Williams-Imhof-Matthies-Vinel (WIMV) algorithm or generalized spherical harmonic expansions, which were originally developed for computing an ODF from pole figures measured by X-ray or neutron diffraction. These techniques rely on ad-hoc methods for choosing parameters, such as smoothing half-width and bandwidth, and for enforcing positivity constraints and appropriate normalization. In general, such approaches provide little or no information-theoretic guarantees as to their optimality in describing the given dataset. In the current study, an unsupervised learning algorithm is proposed which uses a finite mixture of Bingham distributions for the estimation of ODFs from discrete orientation data. The Bingham distribution is an antipodally-symmetric, max-entropy distribution on the unit quaternion hypersphere. The proposed algorithm also introduces a minimum message length criterion, a common tool in information theory for balancing data likelihood with model complexity, to determine the number of components in the Bingham mixture. This criterion leads to ODFs which are less likely to overfit (or underfit) the data, eliminating the need for a priori parameter choices.

  3. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
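A minimal sketch of a particle-filter measurement update with a screening step. The Gaussian likelihood, the average-likelihood acceptance test, and all numbers are illustrative assumptions, not the paper's method:

```python
import math

def pf_update(particles, weights, z, sigma, accept_thresh=1e-3):
    """One particle-filter measurement update with simple screening.

    A measurement z is used only if its average likelihood under the
    current particle cloud exceeds `accept_thresh`; otherwise it is
    treated as noise/interference and skipped.
    """
    liks = [math.exp(-0.5 * ((z - p) / sigma) ** 2) for p in particles]
    avg = sum(w * l for w, l in zip(weights, liks))
    if avg < accept_thresh:
        return weights, False            # measurement screened out
    new_w = [w * l for w, l in zip(weights, liks)]
    s = sum(new_w)
    return [w / s for w in new_w], True

# 1-D damage-state particles (arbitrary units) with uniform weights
particles = [0.9, 1.0, 1.1, 1.2]
weights, used = pf_update(particles, [0.25] * 4, z=1.05, sigma=0.1)
print(used, weights)
_, used_bad = pf_update(particles, [0.25] * 4, z=9.0, sigma=0.1)
print(used_bad)  # a wildly inconsistent measurement is rejected
```

The informative measurement reweights the cloud toward nearby particles; the outlying one leaves the weights untouched.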

  4. Market conditions affecting energy efficiency investments

    International Nuclear Information System (INIS)

    Seabright, J.

    1996-01-01

    The global energy efficiency market is growing, due in part to energy sector and macroeconomic reforms and increased awareness of the environmental benefits of energy efficiency. Many countries have promoted open, competitive markets, thereby stimulating economic growth. They have reduced or removed subsidies on energy prices, and governments have initiated energy conservation programs that have spurred the wider adoption of energy efficiency technologies. The market outlook for energy efficiency is quite positive. The global market for end-use energy efficiency in the industrial, residential and commercial sectors is now estimated to total more than $34 billion per year. There is still enormous technical potential to implement energy conservation measures and to upgrade to the best available technologies for new investments. For many technologies, energy-efficient designs now represent less than 10--20% of new product sales. Thus, creating favorable market conditions should be a priority. There are a number of actions that can be taken to create favorable market conditions for investing in energy efficiency. Fostering a market-oriented energy sector will lead to energy prices that reflect the true cost of supply. Policy initiatives should address known market failures and should support energy efficiency initiatives. And market transformation for energy efficiency products and services can be facilitated by creating an institutional and legal structure that favors commercially-oriented entities

  5. Monte Carlo next-event point flux estimation for RCP01

    International Nuclear Information System (INIS)

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-01-01

    Two next event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty
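The Russian roulette idea mentioned above can be sketched independently of the transport details: killing a history with probability 1 − p and dividing a survivor's weight by p keeps the estimator unbiased. The survival probability below is an arbitrary illustration:

```python
import random

def russian_roulette(weight, survival_prob, rng):
    """Russian roulette: kill a low-weight history with probability
    1 - survival_prob; survivors have their weight divided by
    survival_prob so the estimator stays unbiased."""
    if rng.random() < survival_prob:
        return weight / survival_prob   # survives with boosted weight
    return 0.0                          # history terminated

# Unbiasedness check: the mean surviving weight equals the input weight.
rng = random.Random(1)
w = 0.1
mean = sum(russian_roulette(w, 0.25, rng) for _ in range(100000)) / 100000
print(mean)  # close to 0.1
```

In the estimator above, the survival probability would be tied to the anticipated flux fall-off rather than fixed.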

  6. Energy efficiency potential study for New Brunswick

    International Nuclear Information System (INIS)

    1992-05-01

    The economic and environmental impacts associated with economically attractive energy savings identified in each of four sectors in New Brunswick are analyzed. The results are derived through a comparison of two potential future scenarios. The frozen efficiency scenario projects what future energy expenditures would be if no new energy efficiency initiatives are introduced. The economic potential scenario projects what those expenditures would be if all economically attractive energy efficiency improvements were gradually implemented over the next 20 years. Energy related emissions are estimated under scenarios with and without fuel switching. The results show, for example, that New Brunswick's energy related CO2 emissions would be reduced by ca 5 million tonnes in the year 2000 under the economic potential scenario. If fuel switching is adopted, an additional 1 million tonnes of CO2 emissions could be saved in the year 2000 and 1.6 million tonnes in 2010. The economic impact analysis is restricted to efficiency options only and does not consider fuel switching. Results show the effect of the economic potential scenario on employment, government revenues, and intra-industry distribution of employment gains and losses. The employment impact is estimated as the equivalent of the creation of 2,424 jobs annually over 1991-2010. Government revenues would increase by ca $24 million annually. The industries benefitting most from energy efficiency improvements would be those related to construction, retail trade, finance, real estate, and food/beverages. Industries adversely affected would be the electric power, oil, and coal sectors. 2 figs., 37 tabs

  7. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    Science.gov (United States)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions---independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allows choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships.
We find that oil price volatility displays a significant

  8. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
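The effect of capping a condition number can be sketched on the spectrum alone. This is a simplification: the paper derives the optimal truncation point from the likelihood, whereas the sketch below picks the eigenvalue floor by a crude grid search minimizing squared distortion:

```python
def cap_condition_number(eigvals, kappa_max):
    """Shrink a spectrum so that max/min <= kappa_max.

    Condition-number regularization in sketch form: choose a floor
    tau and clip every eigenvalue into [tau, kappa_max * tau]. Here
    tau is chosen by a coarse grid search minimizing total squared
    distortion (an illustrative stand-in for the paper's
    likelihood-based truncation point).
    """
    lo, hi = min(eigvals), max(eigvals)
    if hi / lo <= kappa_max:
        return list(eigvals)
    best, best_cost = None, float("inf")
    for i in range(1, 1001):
        tau = lo + (hi - lo) * i / 1000.0
        clipped = [min(max(e, tau), kappa_max * tau) for e in eigvals]
        cost = sum((c - e) ** 2 for c, e in zip(clipped, eigvals))
        if cost < best_cost:
            best, best_cost = clipped, cost
    return best

# A sample spectrum with condition number 10000, capped at 50
eigs = cap_condition_number([100.0, 10.0, 0.01], kappa_max=50.0)
print(eigs, max(eigs) / min(eigs))
```

Applying the clipped eigenvalues with the original eigenvectors yields the well-conditioned estimator; near-zero sample eigenvalues are raised rather than inverted, which is what makes the estimate usable in portfolio optimization.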

  9. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
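The KKT machinery for equality-constrained least squares reduces, in the simplest projection case, to a closed form. The toy two-variable problem below is an illustration of that mechanism, not the paper's homography parameterization:

```python
def constrained_least_squares(a, c, b):
    """Minimize ||x - a||^2 subject to c . x = b.

    The KKT conditions (x - a + lam * c = 0 and c . x = b) give the
    closed form x = a - c * (c.a - b) / (c.c). Larger articulated
    systems stack many such equality constraints into one KKT solve.
    """
    ca = sum(ci * ai for ci, ai in zip(c, a))
    cc = sum(ci * ci for ci in c)
    lam = (ca - b) / cc
    return [ai - lam * ci for ai, ci in zip(a, c)]

# Project the unconstrained estimate (3, 1) onto the line x0 + x1 = 2.
x = constrained_least_squares([3.0, 1.0], [1.0, 1.0], 2.0)
print(x)
```

The solution satisfies the articulation-style constraint exactly while staying as close as possible to the unconstrained estimate, which is the numerical-stability argument made in the abstract.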

  10. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass-efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra-wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity-based planetary exploration robotic prototype. In particular, we conduct tests evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers and its local behavior due to small-amplitude deformations induced by cable actuation.

  11. Iterative estimation of the background in noisy spectroscopic data

    International Nuclear Information System (INIS)

    Zhu, M.H.; Liu, L.G.; Cheng, Y.S.; Dong, T.K.; You, Z.; Xu, A.A.

    2009-01-01

    In this paper, we present an iterative filtering method to estimate the background of noisy spectroscopic data. The proposed method avoids the calculation of the average full width at half maximum (FWHM) of the whole spectrum and the peak regions, and it can estimate the background efficiently, especially for spectroscopic data with the Compton continuum.
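A SNIP-style iterative filter is one common way to implement such background estimation; the window schedule and iteration count below are assumptions, not the paper's exact algorithm:

```python
import math

def estimate_background(spectrum, iterations=30, max_window=20):
    """Iterative background estimation for spectroscopic data.

    SNIP-style sketch: at each pass, every channel is replaced by the
    minimum of itself and the average of its neighbours at distance w,
    which peels peaks off while following the smooth continuum.
    """
    bg = list(spectrum)
    n = len(bg)
    for it in range(iterations):
        w = min(max_window, it + 1)
        new = bg[:]
        for i in range(w, n - w):
            new[i] = min(bg[i], 0.5 * (bg[i - w] + bg[i + w]))
        bg = new
    return bg

# Flat continuum of 10 counts with a Gaussian peak of height 100 at channel 50
spec = [10.0 + 100.0 * math.exp(-0.5 * ((i - 50) / 3.0) ** 2) for i in range(101)]
bg = estimate_background(spec)
print(max(bg[40:61]))  # near the 10-count continuum; the peak is removed
```

Subtracting `bg` from `spec` then leaves the peaks on a near-zero baseline, without ever computing the average FWHM that the abstract says the method avoids.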

  12. Estimation of efficiency of new local rehabilitation method at the early post-operative period after dental implantation

    Directory of Open Access Journals (Sweden)

    A. V. Pasechnik

    2017-01-01

      Summary Despite of success of dental implantation, there are often complications at the early post-operative period of implant placing associated with wound damage and aseptic inflammation. Purpose of the work is studying clinical efficiency of combined local application of new mucosal gel “Apior” and magnetotherapy at the early post-operative period after dental implantation. Combined local application of the mucosal gel “Apior” and pulsating low-frequency electromagnetic field in the complex medical treatment of patients after conducting an operation of setting dental implants favourably affects the common state of patients and clinical symptoms of inflammation in the area of operating wound. As compared with patients who had traditional anti-inflammatory therapy, the patients treated with local application of apigel and magnetoterapy had decline of edema incidence, of gingival mucosa hyperemia, of discomfort in the area of conducted operation. There occurred more rapid improvement of inflammation painfulness, which correlated with the improvement of hygienic state of oral cavity and promoted to prevention of bacterial content of damaged mucous surfaces. Estimation of microvasculatory blood stream by the method of ultrasonic doppler flowmetry revealed more rapid normalization of volume and linear high systole speed of blood stream in the periimplant tissues in case of use of new complex local rehabilitation method, that testified to the less pronounced inflammation of oral mucosa after the operation. The authors came to conclusion that the local application of the offered method of medical treatment of early post-operative complications of dental implantation reduces terms of renewal of structural-functional integrity of oral mucosa, helps in preventing development of inflammatory complications and strengthening endosseus implant. The inclusion in the treatment management of a new combined method of application of mucosal gel “Apior” and

  13. Comparison of relative efficiency of genomic SSR and EST-SSR markers in estimating genetic diversity in sugarcane.

    Science.gov (United States)

    Parthiban, S; Govindaraj, P; Senthilkumar, S

    2018-03-01

    Twenty-five primer pairs developed from genomic simple sequence repeats (SSR) were compared with 25 expressed sequence tag (EST) SSRs to evaluate the efficiency of these two sets of primers on 59 sugarcane genetic stocks. The mean polymorphism information content (PIC) of the genomic SSRs was higher (0.72) than the PIC recorded for the EST-SSR markers (0.62). The relatively low level of polymorphism in EST-SSR markers may be due to their location in more conserved and expressed sequences, compared to genomic sequences which are spread throughout the genome. Dendrograms based on the genomic SSR and EST-SSR marker data showed differences in the grouping of genotypes. The 59 sugarcane accessions were grouped into 6 and 4 clusters using genomic SSRs and EST-SSRs, respectively. The highly efficient genomic SSRs could subcluster the genotypes of some of the clusters formed by the EST-SSR markers. The difference in dendrograms was probably due to the variation in the number of markers produced by genomic SSRs and EST-SSRs and the different portions of the genome amplified by the two marker types. The combined dendrogram (genomic SSR and EST-SSR) showed the genetic relationship among the sugarcane genotypes more clearly by forming four clusters. The mean genetic similarity (GS) among the 59 sugarcane accessions was 0.70 using EST-SSRs, whereas it was 0.63 using genomic SSRs. Although the EST-SSR markers displayed a relatively lower level of polymorphism, the genetic diversity they revealed is promising because they are functional markers. The high PIC and low genetic similarity values of genomic SSRs may be more useful in DNA fingerprinting, selection of true hybrids, identification of variety-specific markers, and genetic diversity analysis. Identification of diverse parents based on cluster analysis can be effectively done with EST-SSRs, as the genetic similarity estimates are based on functional attributes related to
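
    The PIC values quoted (0.72 for genomic SSRs vs. 0.62 for EST-SSRs) are per-marker summaries of allele frequencies. A minimal sketch of the usual Botstein et al. PIC formula, with hypothetical allele frequencies rather than the study's data:

```python
from itertools import combinations

def pic(freqs):
    """Polymorphism information content (Botstein et al. formula)
    for one marker, given its allele frequencies."""
    h = sum(p * p for p in freqs)  # homozygosity term
    cross = sum(2 * (p * p) * (q * q) for p, q in combinations(freqs, 2))
    return 1.0 - h - cross

# A marker with four equally frequent alleles is highly informative:
print(round(pic([0.25] * 4), 3))  # 0.703
```

With only two equally frequent alleles the same formula gives 0.375, which is why multi-allelic genomic SSRs tend to score higher.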

  14. Application of laboratory sourceless object counting for the estimation of the neutron dose

    International Nuclear Information System (INIS)

    Cheng Jie; Ning Jing; Zhang Xiaomin; Qu Decheng; Xie Xiangdong; Nan Hongjie

    2011-01-01

    Objective: To estimate the neutron dose using the ²⁴Na energy spectrum analysis method. Methods: The Genius-2000 GeomComposer software package was used to calibrate the efficiency of the detector. Results: The detection efficiency of the detector for γ photons with an energy of 1.368 MeV was quickly found to be 4.05271×10⁻³, while the error of the software was 4.0%. The estimated dose of the neutron-irradiated samples was between 1.94 Gy and 2.82 Gy, with an arithmetic mean of 2.38 Gy. The uncertainty of the dosimetry was about 20.07%. Conclusion: Applying sourceless efficiency calibration to the energy spectrum analysis of the ²⁴Na contained in human blood will accelerate the dose estimation process. (authors)

  15. To Estimation of Efficient Usage of Organic Fuel in the Cycle of Steam Power Installations

    Directory of Open Access Journals (Sweden)

    A. P. Nesenchuk

    2013-01-01

    Full Text Available Tendencies of power engineering development in the world are shown in this article. A thermodynamic analysis of the efficient usage of different types of fuel was carried out. The obtained result reflects that low-calorie fuel is, from the point of view of thermodynamics, more efficient to use at steam power stations than high-energy fuel.

  16. An Empirical Study of Parameter Estimation for Stated Preference Experimental Design

    Directory of Open Access Journals (Sweden)

    Fei Yang

    2014-01-01

    Full Text Available The stated preference experimental design can affect the reliability of parameter estimation in discrete choice models. Scholars have proposed new experimental designs, such as the D-efficient and Bayesian D-efficient designs, but insufficient empirical research has been conducted on the effectiveness of these new designs, and there has been little comparative analysis of the new designs against traditional designs. In this paper, a new metro connecting Chengdu and its satellite cities is taken as the research subject to demonstrate the validity of the D-efficient and Bayesian D-efficient designs. Comparisons between these new designs and an orthogonal design were made by the fit of the model and the standard deviation of the parameter estimates; the best model result was then used to analyze travel choice behavior. The results indicate that the Bayesian D-efficient design works better than the D-efficient design. Some variables, including waiting time and arrival time, significantly affect people's choice behavior. D-efficient and Bayesian D-efficient designs generated for the MNL model can still yield reliable results in the ML model, but the ML model cannot exploit the theoretical advantages of these two designs. Finally, the metro could carry over 40% of the passenger flow once it is operated in the future.
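
    D-efficient designs minimize the determinant of the parameter covariance matrix implied by the design. A toy numeric sketch for a linear model (the design matrices are illustrative; real stated-preference designs optimize this criterion under the MNL or Bayesian prior assumptions discussed above):

```python
import numpy as np

def d_error(X):
    """D-error of a design matrix: determinant of the parameter
    covariance (X'X)^-1, normalized by the number of parameters k.
    Lower D-error means a more D-efficient design."""
    k = X.shape[1]
    info = X.T @ X
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / k)

# Two-attribute designs with +/-1 coding: orthogonal vs. correlated columns
orthogonal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
correlated = np.array([[1, 1], [1, 1], [-1, -1], [-1, 1]], float)
print(d_error(orthogonal) < d_error(correlated))  # orthogonal design wins
```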

  17. Laboratory estimation of net trophic transfer efficiencies of PCB congeners to lake trout (Salvelinus namaycush) from its prey

    Science.gov (United States)

    Madenjian, Charles P.; Rediske, Richard R.; O'Keefe, James P.; David, Solomon R.

    2014-01-01

    A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
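
    On the mass-balance logic described, γ for one congener in one tank is the congener mass gained by the fish divided by the congener mass consumed with the food. A sketch with hypothetical numbers (the function and values are illustrative, not the authors' code):

```python
def net_transfer_efficiency(c_start, w_start, c_end, w_end, c_prey, food_eaten):
    """Net trophic transfer efficiency of one congener for one tank:
    congener mass gained by the fish divided by congener mass eaten.
    Concentrations in ng/g; weights and food in g."""
    gained = c_end * w_end - c_start * w_start
    consumed = c_prey * food_eaten
    return gained / consumed

# Hypothetical tank: fish grow from 500 g to 800 g over the experiment
print(round(net_transfer_efficiency(10.0, 500.0, 14.0, 800.0, 25.0, 400.0), 2))  # 0.62
```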

  18. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    Science.gov (United States)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) a priori estimation of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel from the enhanced received pilot data by applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data over the communication channel and assessing its fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results exhibit up to an 8-dB figure of merit in the bit error rate and a 50% improvement in hyperspectral image classification accuracy.
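
    Step (1), the OMP stage, greedily recovers the sparse channel support. A generic OMP sketch on synthetic noiseless data (not the paper's implementation; dimensions, seed, and tap values are arbitrary):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))   # measurement matrix
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]       # 2-sparse "channel taps"
x_hat = omp(A, A @ x_true, sparsity=2)
print(np.allclose(x_hat, x_true, atol=1e-8))
```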

  19. Development of electrical efficiency measurement techniques for 10 kW-class SOFC system: Part I. Measurement of electrical efficiency

    International Nuclear Information System (INIS)

    Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru

    2009-01-01

    Measurement techniques to estimate the electrical efficiency of 10 kW-class SOFC systems fueled by town-gas were developed and demonstrated on a system developed by Kansai Electric Power Company and Mitsubishi Materials Corporation under a NEDO project. The higher heating value of the fuel was evaluated with a transportable gas sampling unit and conventional gas chromatography in the AIST laboratory, with thermal-conductivity and flame-ionization detectors, giving a mean value of 44.69 MJ m⁻³ on a volumetric basis for ideal gas at the standard state (0 °C, 101.325 kPa). The mass flow rate of the fuel was estimated as 33.04 slm with a mass-flow meter for CH₄, which was calibrated to correct the CH₄ flow rate and the effect of sensitivity change and to obtain the conversion factor from CH₄ to town-gas. Without calibration, a systematic error of about 8% would occur in the flow-rate measurement in the case of CH₄. Power output was measured with a precision power analyzer, a virtual three-phase starpoint adapter, and tri-axial shunts. The power of the fundamental wave (60 Hz) was estimated as 10.14 kW from the total active power, the total higher-harmonic distortion factor, and the power consumption at the starpoint adapter. The electrical efficiency was presumed to be 41.2% (HHV), though this mean value is complete only when accompanied by an uncertainty estimate
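
    The reported numbers can be cross-checked: dividing the fundamental-wave power by the chemical input power implied by the HHV and the fuel flow rate reproduces the quoted efficiency.

```python
# Cross-check of the abstract's efficiency figure (values quoted above).
HHV = 44.69e6          # J per standard m^3 of town-gas (0 degC, 101.325 kPa)
flow_slm = 33.04       # fuel flow, standard litres per minute
power_w = 10.14e3      # fundamental-wave (60 Hz) power output, W

fuel_power_w = HHV * flow_slm / 1000.0 / 60.0  # chemical input power, J/s
efficiency = power_w / fuel_power_w
print(f"{efficiency:.1%}")  # 41.2%, matching the reported HHV efficiency
```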

  20. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data-constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrange multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  1. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimal usage of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for one transport subsystem, airport infrastructure. The work also evaluates the potential of algorithmic computational mechanisms to improve the tools of public administration of transport subsystems.

  2. Efficient logistic regression designs under an imperfect population identifier.

    Science.gov (United States)

    Albert, Paul S; Liu, Aiyi; Nansel, Tonja

    2014-03-01

    Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.

  3. Efficiency as a Priority of EU Energy Policy

    Directory of Open Access Journals (Sweden)

    Jacek Malko

    2014-06-01

    Full Text Available According to recent conclusions of the European Council, it is necessary to stress the need to increase energy efficiency in the EU so as to achieve the objective of saving 20% of energy consumption compared to projections for 2020, as estimated by the Commission in its Green Paper on Energy Efficiency, and to make good use of the National Energy Efficiency Action Plans for this purpose (i.e. the Second NEEAPs of 30 June 2011). This should improve the EU's industrial competitiveness, with the potential to create substantial benefits for households, businesses and public authorities.

  4. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic elements are given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo, and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency
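
    The variance advantage of a weighted (importance-sampled) estimator over direct simulation can be illustrated on a generic rare-event problem, here a standard-normal exceedance rather than the paper's reliability model:

```python
import math
import random

def direct_mc(n, t=4.0, seed=1):
    """Direct Monte Carlo estimate of P(X > t) for X ~ N(0,1)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > t)
    return hits / n

def weighted_mc(n, t=4.0, seed=1):
    """Weighted (importance-sampled) estimate: draw from N(t,1) so
    most samples land in the rare region, and reweight by the
    density ratio N(0,1)/N(t,1) = exp(-t*x + t^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(-t * x + t * t / 2.0)
    return total / n

true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(Z > 4), about 3.17e-5
print(abs(weighted_mc(100_000) - true_p) < abs(direct_mc(100_000) - true_p))
```

With the same sample budget, the weighted estimator's error is orders of magnitude smaller, mirroring the variance ranking reported in the abstract.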

  5. Living up to expectations: Estimating direct and indirect rebound effects for UK households

    International Nuclear Information System (INIS)

    Chitnis, Mona; Sorrell, Steve

    2015-01-01

    This study estimates the combined direct and indirect rebound effects from various types of energy efficiency improvement by UK households. In contrast to most studies of this topic, we base our estimates on cross-price elasticities and therefore capture both the income and substitution effects of energy efficiency improvements. Our approach involves estimating a household demand model to obtain price and expenditure elasticities of different goods and services, utilising a multiregional input–output model to estimate the GHG emission intensities of those goods and services, combining the two to estimate direct and indirect rebound effects, and decomposing those effects to reveal the relative contribution of different mechanisms and commodities. We estimate that the total rebound effects are 41% for measures that improve the efficiency of domestic gas use, 48% for electricity use and 78% for vehicle fuel use. The primary source of this rebound is increased consumption of the cheaper energy service (i.e. direct rebound) and this is primarily driven by substitution effects. Our results suggest that the neglect of substitution effects may have led prior research to underestimate the total rebound effect. However, we provide a number of caveats to this conclusion, as well as indicating priorities for future research.
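
    The quoted rebound figures translate directly into realized savings. A one-line illustration, with numbers chosen to match the 41% figure for domestic gas:

```python
def rebound(expected_savings, actual_savings):
    """Total rebound effect: the share of engineering-estimate savings
    eroded by direct and indirect behavioural responses."""
    return 1.0 - actual_savings / expected_savings

# Of 100 units of expected gas savings, only 59 materialise:
print(round(rebound(100.0, 59.0), 2))  # 0.41
```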

  6. Production and efficiency analysis with R

    CERN Document Server

    Behr, Andreas

    2015-01-01

    This textbook introduces essential topics and techniques in production and efficiency analysis and shows how to apply these methods using the statistical software R. Numerous small simulations lead to a deeper understanding of random processes assumed in the models and of the behavior of estimation techniques. Step-by-step programming provides an understanding of advanced approaches such as stochastic frontier analysis and stochastic data envelopment analysis. The text is intended for master students interested in empirical production and efficiency analysis. Readers are assumed to have a general background in production economics and econometrics, typically taught in introductory microeconomics and econometrics courses.

  7. Guidelines to indirectly measure and enhance detection efficiency of stationary PIT tag interrogation systems in streams

    Science.gov (United States)

    Connolly, Patrick J.; Wolf, Keith; O'Neal, Jennifer S.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. We present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
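
    The Connolly et al. (2008) estimators exploit paired arrays: fish detected at a downstream array are known to have passed the upstream one, which yields an efficiency estimate without a known number of passing fish. A sketch with hypothetical counts:

```python
def detection_efficiency(detected_both, downstream_only):
    """Efficiency of an upstream array estimated without knowing the
    true number of passing fish: every fish detected downstream is
    known to have passed the upstream array, so upstream efficiency
    is the share of those fish it also detected."""
    return detected_both / (detected_both + downstream_only)

# 90 tagged fish seen on both arrays, 10 seen only on the downstream one:
print(detection_efficiency(90, 10))  # 0.9
```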

  8. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    Science.gov (United States)

    Houpt, Joseph W; Bittner, Jennifer L

    2018-05-10

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
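
    The building block of this approach is a logistic psychometric function fit by maximum likelihood. A non-hierarchical sketch using simulated trial counts and a crude grid search (the paper's model adds Bayesian hierarchy and efficiency comparisons on top of this):

```python
import numpy as np

def neg_log_lik(alpha, beta, x, k, n):
    """Negative log-likelihood of a logistic psychometric function
    P(correct) = 1 / (1 + exp(-beta * (x - alpha)))
    for k correct responses out of n trials at each intensity x."""
    p = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # stimulus intensities
n = np.full(5, 100)                         # trials per level
k = np.array([9, 27, 52, 73, 89])           # observed correct responses

# Crude grid search for the maximum-likelihood threshold and slope
alphas = np.linspace(-1.0, 1.0, 101)
betas = np.linspace(0.5, 2.0, 151)
nll = np.array([[neg_log_lik(a, b, x, k, n) for b in betas] for a in alphas])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
print(round(alphas[i], 2), round(betas[j], 2))  # threshold near 0, slope near 1
```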

  9. Tail index and quantile estimation with very high frequency data

    NARCIS (Netherlands)

    J. Daníelsson (Jón); C.G. de Vries (Casper)

    1997-01-01

    textabstractA precise estimation of the tail shape of forex returns is of critical importance for proper risk assessment. We improve upon the efficiency of conventional estimators that rely on a first order expansion of the tail shape, by using the second order expansion. Here we advocate a moments
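
    First-order tail-shape estimation is typified by the Hill estimator, whose efficiency the authors improve via a second-order expansion. A sketch of the first-order baseline on synthetic Pareto data with known tail index 3:

```python
import math
import random

def hill(data, k):
    """Hill estimator of the tail index alpha from the k largest
    order statistics: 1 / (mean log-excess over the (k+1)-th largest)."""
    top = sorted(data, reverse=True)[: k + 1]
    logs = [math.log(v) for v in top]
    return 1.0 / (sum(logs[:k]) / k - logs[k])

# Pareto(alpha=3) draws via inverse-transform sampling of uniforms
rng = random.Random(42)
sample = [(1.0 - rng.random()) ** (-1.0 / 3.0) for _ in range(20000)]
print(2.0 < hill(sample, 500) < 4.0)  # estimate close to the true alpha of 3
```

The choice of k trades bias against variance, which is exactly where the second-order refinement advocated in the abstract helps.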

  10. Analysis of energy end-use efficiency policy in Spain

    International Nuclear Information System (INIS)

    Collado, Rocío Román; Díaz, María Teresa Sanz

    2017-01-01

    The implementation of saving measures and energy efficiency entails the need to evaluate achievements in terms of energy savings and spending. This paper analyses the effectiveness and economic efficiency of the energy saving measures implemented under the Energy Savings and Efficiency Action Plan (2008–2012) (EAP4+) in Spain for 2010. The lack of assessment of the energy savings achieved and the public spending allocated by the EAP4+ justifies the need for this analysis. The results show that the transport and building sectors seem to be the most important from the energy efficiency perspective. Although they did not reach the direct energy savings that were expected, there is scope for reduction with the appropriate energy measures. On the effectiveness indicator, the best performance is achieved by the public service, agriculture and fisheries, and building sectors, while in terms of energy efficiency per monetary unit, the best results are achieved by the transport, industry and agriculture sectors. The authors conclude that central, regional and local administrations need to get involved in order to obtain better estimates of the energy savings achieved and thus inform the design of future energy efficiency measures at the lowest possible cost to the citizens. - Highlights: • Energy end-use efficiency policy is analysed in terms of energy savings and spending. • The energy savings achieved by some measures are not always provided. • The total energy savings achieved by the transport and building sectors are large. • Different levels of administration should get involved in estimating energy savings.

  11. Genetic background in partitioning of metabolizable energy efficiency in dairy cows.

    Science.gov (United States)

    Mehtiö, T; Negussie, E; Mäntysaari, P; Mäntysaari, E A; Lidauer, M H

    2018-05-01

    The main objective of this study was to assess the genetic differences in metabolizable energy efficiency and efficiency in partitioning metabolizable energy in different pathways: maintenance, milk production, and growth in primiparous dairy cows. Repeatability models for residual energy intake (REI) and metabolizable energy intake (MEI) were compared and the genetic and permanent environmental variations in MEI were partitioned into its energy sinks using random regression models. We proposed 2 new feed efficiency traits: metabolizable energy efficiency (MEE), which is formed by modeling MEI fitting regressions on energy sinks [metabolic body weight (BW 0.75 ), energy-corrected milk, body weight gain, and body weight loss] directly; and partial MEE (pMEE), where the model for MEE is extended with regressions on energy sinks nested within additive genetic and permanent environmental effects. The data used were collected from Luke's experimental farms Rehtijärvi and Minkiö between 1998 and 2014. There were altogether 12,350 weekly MEI records on 495 primiparous Nordic Red dairy cows from wk 2 to 40 of lactation. Heritability estimates for REI and MEE were moderate, 0.33 and 0.26, respectively. The estimate of the residual variance was smaller for MEE than for REI, indicating that analyzing weekly MEI observations simultaneously with energy sinks is preferable. Model validation based on Akaike's information criterion showed that pMEE models fitted the data even better and also resulted in smaller residual variance estimates. However, models that included random regression on BW 0.75 converged slowly. The resulting genetic standard deviation estimate from the pMEE coefficient for milk production was 0.75 MJ of MEI/kg of energy-corrected milk. The derived partial heritabilities for energy efficiency in maintenance, milk production, and growth were 0.02, 0.06, and 0.04, respectively, indicating that some genetic variation may exist in the efficiency of using
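
    Residual energy intake is the part of metabolizable energy intake not explained by the energy sinks, i.e. an ordinary least squares residual. A minimal sketch with simulated weekly records (ranges and coefficients are illustrative, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
sinks = np.column_stack([
    rng.uniform(95, 135, n),     # metabolic body weight BW^0.75, kg^0.75
    rng.uniform(15, 40, n),      # energy-corrected milk, kg/d
    rng.uniform(-0.5, 1.0, n),   # body weight change, kg/d
])
true_coef = np.array([0.6, 5.2, 34.0])          # illustrative MJ per unit sink
mei = sinks @ true_coef + rng.normal(0, 5, n)   # MEI, MJ/d, with residual scatter

# REI = observed MEI minus the intake predicted from the energy sinks
X = np.column_stack([np.ones(n), sinks])
coef, *_ = np.linalg.lstsq(X, mei, rcond=None)
rei = mei - X @ coef
print(abs(rei.mean()) < 1e-8)  # OLS residuals are centred by construction
```

The pMEE idea in the abstract goes one step further and lets the regression coefficients themselves vary genetically between cows.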

  12. Measuring improvement in energy efficiency of the US cement industry with the ENERGY STAR Energy Performance Indicator

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, G.; Zhang, G. [Department of Economics, Duke University, Box 90097, Durham, NC 27708 (United States)

    2013-02-15

    The lack of a system for benchmarking industrial plant energy efficiency represents a major obstacle to improving efficiency. While estimates are sometimes available for specific technologies, the efficiency of one plant versus another can only be captured by benchmarking the energy efficiency of the whole plant, not by looking at its components. This paper presents an approach used by ENERGY STAR to implement manufacturing plant energy benchmarking for the cement industry. Using plant-level data and statistical analysis, we control for factors that influence energy use but are not efficiency per se. What remains is an estimate of the distribution of energy use that is not accounted for by these factors, i.e., intra-plant energy efficiency. By comparing two separate analyses conducted at different points in time, we can see how this distribution has changed. While aggregate data can be used to estimate an average rate of improvement in terms of total industry energy use and production, such an estimate would be misleading, as it may give the impression that all plants have made the same improvements. The picture that emerges from our plant-level statistical analysis is more subtle: the most energy-intensive plants have closed or been completely replaced, and poor-performing plants have made efficiency gains, reducing the gap between themselves and the top performers, who have changed only slightly. Our estimate is a 13% change in total source energy, equivalent to an annual reduction of 5.4 billion kg of energy-related carbon dioxide emissions.
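
    The benchmarking idea (control for production drivers, then read efficiency off the residual distribution) can be sketched as a percentile score on regression residuals. The data are synthetic, and the actual ENERGY STAR EPI uses stochastic frontier methods with more explanatory factors:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 60
clinker = rng.uniform(0.5, 2.0, n)                    # production, Mt/yr
energy = 3.5 * clinker * rng.lognormal(0.0, 0.15, n)  # source energy, PJ/yr

# Benchmark: regress log energy on production, then rank plants by residual
X = np.column_stack([np.ones(n), clinker])
coef, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
resid = np.log(energy) - X @ coef

# Percentile score: plants with the lowest residual energy use score highest
ranks = resid.argsort().argsort()            # 0 = most efficient
scores = 100.0 * (1.0 - (ranks + 0.5) / n)
print(scores.min() >= 0 and scores.max() <= 100)
```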

  13. Application of the thermal efficiency analysis software 'EgWin' at existing power plants

    International Nuclear Information System (INIS)

    Koda, E.; Takahashi, T.; Nakao, Y.

    2008-01-01

    'EgWin' is general-purpose software developed at CRIEPI to analyze the thermal efficiency of power systems. The software has been used to analyze more than 30 existing power generation units, and its effectiveness has been confirmed. In thermal power plants it was used to clarify the factors behind decreases in thermal efficiency and to quantitatively estimate the influence of each factor on the thermal efficiency of the plant. It was also used for quantitative estimation of the effects of operating-condition changes and facility remodeling in thermal, nuclear, and geothermal power plants. (author)

  14. Performance of the Life Insurance Industry Under Pressure : Efficiency, Competition, and Consolidation

    NARCIS (Netherlands)

    Bikker, Jacob A.

    2016-01-01

    This article investigates efficiency and competition in the Dutch life insurance market by estimating unused scale economies and measuring efficiency-market share dynamics during 1995-2010. Large unused scale economies exist for small- and medium-sized life insurers, indicating that further

  15. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
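
    In the special one-input, one-output, constant-returns case, input-oriented DEA reduces to productivity ratios relative to the best performer, which conveys the idea without a linear programming solver (the study's multi-input VRS model, and the bootstrap on top of it, do require one; the numbers are hypothetical):

```python
def dea_crs_efficiency(inputs, outputs):
    """Input-oriented DEA efficiency under constant returns to scale
    for the one-input, one-output case: each unit's productivity
    ratio relative to the best performer."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical nursing homes: staff hours (input) and resident-days (output)
staff = [100.0, 120.0, 80.0]
resdays = [500.0, 480.0, 440.0]
print([round(e, 2) for e in dea_crs_efficiency(staff, resdays)])  # [0.91, 0.73, 1.0]
```

A score below 1.0 says the unit could produce its output with proportionally less input; the bootstrap in the article corrects the bias such frontier estimates carry.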

  16. Energy efficiency in the British housing stock: Energy demand and the Homes Energy Efficiency Database

    International Nuclear Information System (INIS)

    Hamilton, Ian G.; Steadman, Philip J.; Bruhns, Harry; Summerfield, Alex J.; Lowe, Robert

    2013-01-01

    The UK Government has unveiled an ambitious retrofit programme that seeks significant improvement to the energy efficiency of the housing stock. High quality data on the energy efficiency of buildings and their related energy demand is critical to supporting and targeting investment in energy efficiency. Using existing home improvement programmes over the past 15 years, the UK Government has brought together data on energy efficiency retrofits in approximately 13 million homes into the Homes Energy Efficiency Database (HEED), along with annual metered gas and electricity use for the period 2004–2007. This paper describes the HEED sample and assesses its representativeness in terms of dwelling characteristics and the energy demand of different energy performance levels using linked gas and electricity meter data, along with an analysis of the impact retrofit measures have on energy demand. Energy savings are shown to be associated with the installation of loft and cavity insulation, and with glazing and boiler replacement. The analysis illustrates that this source of 'in-action' data can be used to provide empirical estimates of the impacts of energy efficiency retrofit on energy demand, and provides empirical evidence from which to support the development of national housing energy efficiency retrofit policies. - Highlights: • The energy efficiency level for 50% of the British housing stock is described. • Energy demand is influenced by size, age and energy performance. • Housing retrofits (e.g. cavity insulation, glazing and boiler replacements) save energy. • Historic differences in energy performance show persistent long-term energy savings.

  17. Is the Langevin phase equation an efficient model for oscillating neurons?

    Science.gov (United States)

    Ota, Keisuke; Tsunoda, Takamasa; Omori, Toshiaki; Watanabe, Shigeo; Miyakawa, Hiroyoshi; Okada, Masato; Aonishi, Toru

    2009-12-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
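
As a minimal sketch of the learning step (a), the drift and white-noise intensity of a perturbation-free Langevin phase equation can be recovered from sampled phase increments by the method of moments. All parameter values below are hypothetical, not taken from the paper, and the phase-response-curve term is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, sigma = 2*np.pi*5.0, 0.8        # hypothetical drift and noise intensity
dt, n = 1e-3, 200_000

# Euler–Maruyama simulation of d(theta) = omega*dt + sigma*dW
# (no external perturbation, so the phase response curve drops out)
theta = np.cumsum(omega*dt + sigma*np.sqrt(dt)*rng.standard_normal(n))

# Moment-based parameter estimates from the phase increments:
# mean increment -> drift, increment variance -> noise intensity
d = np.diff(theta)
omega_hat = d.mean() / dt
sigma_hat = np.sqrt(d.var() / dt)
```

In the paper the phase response curve itself is also estimated from physiological data; the sketch only recovers the constant drift and noise intensity, which already illustrates the idea of fitting the Langevin phase model to observed phase trajectories.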

  18. Is the Langevin phase equation an efficient model for oscillating neurons?

    International Nuclear Information System (INIS)

    Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi

    2009-01-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.

  19. Using energy efficiently

    International Nuclear Information System (INIS)

    Nipkow, J.; Brunner, C. U.

    2005-01-01

This comprehensive article discusses the perspectives for reducing electricity consumption in Switzerland. The increase in consumption that has occurred in spite of the efforts of the Swiss national energy programmes 'Energy 2000' and 'SwissEnergy' is discussed. The authors comment on the fact that energy consumption is still increasing even though efficient and economically viable technology is available, and are of the opinion that the market alone cannot provide a complete solution: national and international efforts are needed to remedy the situation. In particular, the external costs that are often omitted when estimating costs are stressed. Several available technical options, such as fluorescent lighting, LCD monitors and efficient electric motors, are examined, along with other technologies cited as means of reducing power consumption. Ways of reducing stand-by losses and of optimising systems are considered, as are various scenarios for further development and measures that can be implemented to reduce power consumption

  20. Cost Efficiency in Public Higher Education.

    Science.gov (United States)

    Robst, John

    This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…

  1. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    Science.gov (United States)

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

Estimation of fish abundance in streams using the removal model or the Lincoln–Petersen mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark–recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark–recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
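
The two estimators compared in the study can be sketched in a few lines. The formulas below are the standard textbook forms (Chapman's bias-corrected Lincoln–Petersen estimator and the two-pass removal estimator); the function names are ours:

```python
def lincoln_petersen(marked, second_catch, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N = (M+1)(C+1)/(R+1) - 1."""
    return (marked + 1)*(second_catch + 1)/(recaptured + 1) - 1

def two_pass_removal(c1, c2):
    """Two-pass removal estimator: N = c1^2/(c1 - c2), with estimated
    capture probability p = (c1 - c2)/c1.  Requires a declining catch."""
    if c1 <= c2:
        raise ValueError("catch must decline between passes")
    return c1**2 / (c1 - c2), (c1 - c2) / c1
```

For example, `two_pass_removal(60, 20)` gives an abundance estimate of 90 fish with an estimated capture probability of 2/3. As the abstract notes, the removal estimate is biased low whenever sampling efficiency actually declines across passes, since the declining catch is then misattributed to depletion alone.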

  2. Quantifying dilution caused by execution efficiency

    Directory of Open Access Journals (Sweden)

    Taís Renata Câmara

Full Text Available Abstract In open pit mining, dilution is not always systematically analyzed and calculated. Often it is merely an adjusted figure, calculated or even empirically determined for a certain operational condition, that is perpetuated over time as a constant applied to reserve calculations or mine planning to satisfy audit requirements. Dilution and loss are factors that should always be considered in tonnage and grade estimates. These factors are always associated and can be determined by considering several particularities of the deposit and of the operation itself. In this study, a methodology was developed to identify blocks adjacent to the blocks planned to be mined. This makes it possible to estimate the dilution caused by poor operating efficiency, taking into account the inability of the equipment to remove each block perfectly, respecting its limits. Mining dilution is defined as the incorporation of waste material into ore due to the operational incapacity to efficiently separate the materials during the mining process, considering the physical processes and the operating and geometric configurations of the mining with the equipment available.

  3. An efficient approach to node localisation and tracking in wireless sensor networks

    CSIR Research Space (South Africa)

    Mwila, MK

    2014-12-01

    Full Text Available and efficient localisation method that makes use of an improved RSSI distance estimation model by including the antenna radiation pattern as well as nodes orientations is presented. Mathematical models for distance estimation, cost function and gradient of cost...
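
The RSSI-based ranging step described above (without the antenna-radiation-pattern and node-orientation refinements that the paper adds) reduces to inverting the standard log-distance path-loss model. The reference power and path-loss exponent below are illustrative values, not the paper's:

```python
def rssi_to_distance(rssi_dbm, ref_dbm=-40.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model
        RSSI(d) = ref_dbm - 10*n*log10(d/d0),  d0 = 1 m,
    to obtain a distance estimate from a measured RSSI value (dBm).
    ref_dbm is the RSSI measured at the 1 m reference distance."""
    return 10 ** ((ref_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With these defaults, an RSSI of -40 dBm maps to 1 m and -67 dBm to 10 m; the paper's contribution is to correct such estimates for the non-isotropic antenna pattern before feeding them to the localisation cost function.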

  4. Energy efficiency analysis and implementation of AES on an FPGA

    Science.gov (United States)

    Kenney, David

The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research done in the area of low power or energy efficient FPGA based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher

  5. Estimating Profit Efficiency of Artisanal Fishing in the Pru District of the Brong-Ahafo Region, Ghana

    Directory of Open Access Journals (Sweden)

    Edinam Dope Setsoafia

    2017-01-01

Full Text Available This study evaluated the profit efficiency of artisanal fishing in the Pru District of Ghana by explicitly computing the profit efficiency level, identifying the sources of profit inefficiency, and examining the constraints of artisanal fisheries. Cross-sectional data were obtained from 120 small-scale fishing households using a semistructured questionnaire. The stochastic profit frontier model was used to compute the profit efficiency level and identify the determinants of profit inefficiency, while the Garrett ranking technique was used to rank the constraints. The average profit efficiency level was 81.66%, which implies that about 82% of the prospective maximum profit was gained due to production efficiency. That is, only 18% of the potential profit was lost due to the fishers’ inefficiency. Also, the age of the household head and household size increase the inefficiency level, while experience in artisanal fishing tends to decrease it. From the Garrett ranking, access to credit facilities to fully operate the small-scale fishing business was ranked as the most pressing constraint, followed by unstable prices, while perishability was ranked last. The study, therefore, recommends that group formation should be encouraged to enable easy access to loans and contract sales to boost profitability.
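
The profit-efficiency figure reported above is the ratio of observed profit to the stochastic-frontier maximum. A minimal sketch with a hypothetical log-linear frontier (the coefficients and input names are invented for illustration, not estimated from the study's data):

```python
import numpy as np

# Hypothetical coefficients of a log-linear (Cobb-Douglas style) profit frontier
beta0, beta_labour, beta_gear = 2.0, 0.6, 0.3

def frontier_log_profit(labour, gear):
    """Maximum attainable log-profit for the given input levels."""
    return beta0 + beta_labour*np.log(labour) + beta_gear*np.log(gear)

def profit_efficiency(observed_profit, labour, gear):
    """Observed profit / frontier (maximum) profit, a value in (0, 1].
    A score of 0.82 would mean 82% of the potential profit is realised,
    matching the kind of figure the study reports."""
    return observed_profit / np.exp(frontier_log_profit(labour, gear))
```

A household realising 80% of its frontier profit gets a score of 0.8; in the stochastic frontier literature the frontier itself is estimated by maximum likelihood with a composed error term, which the sketch leaves out.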

  6. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

Full Text Available The main objective of this study was to explore the accuracy of Weise’s rule of thumb applied to an estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates, employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise’s rule based on the measurement and averaging of two different ranks centered to a target rank. In accordance with the original formulation of the empirical rule, we recommend the application of the measurement of the 6th stem in rank corresponding to the 55% sample percentile of diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that the application of appropriate two-measurement modifications of Weise’s method, the 4th and 8th ranks or 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by an improved accuracy (about 25%) without statistically significant bias and measurement costs comparable to the classic Weise method.
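
A quick Monte Carlo check of the rule on a simulated diameter distribution (a symmetric normal stand with hypothetical parameters, so less skewed than a real beech stand):

```python
import numpy as np

rng = np.random.default_rng(1)
diameters = rng.normal(30.0, 6.0, 10_000).clip(min=5.0)  # hypothetical stand, cm

dq = np.sqrt(np.mean(diameters**2))     # true quadratic mean diameter

# At each of 2000 sampling points, "measure" the 10 nearest stems
# (approximated here by random draws from the stand) and rank them
samples = np.sort(rng.choice(diameters, size=(2000, 10)), axis=1)

one_measure = samples[:, 5].mean()                        # 6th rank: Weise's rule
two_measure = ((samples[:, 3] + samples[:, 7]) / 2).mean()  # 4th and 8th averaged
```

For this symmetric distribution both estimators land within a few percent of dq; the study's point is that the two-measurement variant keeps the cost of one extra caliper reading while improving accuracy and avoiding significant bias.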

  7. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
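
The core idea, shrinking the sample eigenvalues into an interval [u, kappa_max * u] so the condition number is capped, can be sketched directly. The paper derives u from the likelihood; here u is chosen by a simple grid search minimising the total squared eigenvalue movement, so this is a simplified variant rather than the authors' estimator:

```python
import numpy as np

def condreg(S, kappa_max):
    """Condition-number-constrained covariance estimate: clip the
    eigenvalues of S into [u, kappa_max*u].  Simplified variant: u is
    found by grid search on a least-squares criterion, not the paper's
    maximum likelihood derivation."""
    w, V = np.linalg.eigh(S)
    grid = np.linspace(w.min(), w.max(), 1000)
    losses = [np.sum((np.clip(w, u, kappa_max*u) - w)**2) for u in grid]
    u = grid[int(np.argmin(losses))]
    return (V * np.clip(w, u, kappa_max*u)) @ V.T   # V diag(clipped) V^T

# Ill-conditioned sample covariance from few observations (n=12, p=8)
rng = np.random.default_rng(0)
X = rng.standard_normal((12, 8))
S = X.T @ X / 12
Sigma = condreg(S, kappa_max=10.0)
```

By construction every eigenvalue of `Sigma` lies in [u, 10u], so the estimate is invertible with condition number at most 10, which is exactly the well-conditioning property the abstract emphasises.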

  8. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    Directory of Open Access Journals (Sweden)

    Chao Li

    2017-12-01

Full Text Available In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of these data are transmitted to a Fusion Center (FC), the power of the sensor nodes is rapidly exhausted. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model, which can save the energy of the sensor nodes while maintaining higher accuracy, is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed and confirmed by an optimization method with some prior knowledge. Besides, calculation methods for important parameters are investigated, which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with other related models, the MHEEFE shows better performance in accuracy, energy-efficiency and fault tolerance.
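
The final fusion step, weighting each sensor's local estimate before combining at the FC, can be illustrated with classical inverse-variance weighting; this is a simple textbook stand-in for the paper's iteration-based weight calculation, not the MHEEFE algorithm itself:

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance fusion of independent unbiased sensor estimates:
    weights proportional to 1/variance.  Returns the fused estimate and
    its (reduced) variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.dot(w, estimates) / w.sum())
    return fused, float(1.0 / w.sum())
```

For two equal-quality sensors, `fuse([1.0, 3.0], [1.0, 1.0])` returns (2.0, 0.5): the readings are averaged and the fused variance is halved; a noisier sensor would simply receive a smaller weight, which is the same intuition behind the paper's fault-tolerant weighting.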

  9. Hydraulic efficiency of a Rushton turbine impeller

    Science.gov (United States)

    Chara, Z.; Kysela, B.; Fort, I.

    2017-07-01

Based on CFD simulations, the hydraulic efficiency of a standard Rushton turbine impeller in a baffled tank was determined at a Reynolds number of Re_M = 33,330. Instantaneous values of pressure and velocity components were used to draw up the macroscopic balance of the mechanical energy. It was shown that the hydraulic efficiency of the Rushton turbine impeller (energy dissipated in the bulk volume) is about 57%. Using this result we estimated a length scale in a non-dimensional equation of the kinetic energy dissipation rate in the bulk volume as L = D/2.62.

  10. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
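
The finite-difference-with-common-random-numbers idea behind such sensitivity estimators can be shown on the smallest possible reaction network, a single reaction ∅ → X with constant propensity k, for which the true sensitivity of E[X(T)] with respect to k is exactly T. This is our own illustrative example, not the construction from the talk:

```python
import numpy as np

def mean_count(k, T, exp_draws):
    """Mean number of firings of the reaction 0 -> X by time T when the
    propensity is the constant k.  Reusing the same fixed Exp(1) draws for
    different k values is the common-random-numbers (CRN) coupling."""
    arrival_times = np.cumsum(exp_draws, axis=1) / k
    return (arrival_times <= T).sum(axis=1).mean()

rng = np.random.default_rng(0)
T, k, h = 2.0, 3.0, 0.5
draws = rng.exponential(size=(4000, 40))   # 40 events per path is ample here

# CRN finite-difference estimate of d E[X(T)] / dk  (true value: T = 2)
sens = (mean_count(k + h, T, draws) - mean_count(k, T, draws)) / h
```

Because both runs share the same randomness, the per-path differences are small and the estimator's variance is far lower than with independent simulations; this is the same variance-reduction principle that multi-level and tau-leaping couplings exploit for stiff networks.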

  11. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo

    2014-01-01

    that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper

  12. A constrained polynomial regression procedure for estimating the local False Discovery Rate

    Directory of Open Access Journals (Sweden)

    Broët Philippe

    2007-06-01

Full Text Available Abstract Background In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes. Results In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performances for the proposed estimator in comparison to four published ones. The five different procedures were applied to real datasets. Conclusion A novel and efficient procedure for estimating lFDR was developed and evaluated.
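
The two-group logic of the lFDR, pi0 * f0(z) / f(z), can be sketched with an unconstrained polynomial fit to histogram log-counts. This is a simplified stand-in for the paper's constrained regression procedure, and pi0 and the alternative distribution below are simulated rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
is_null = rng.random(n) < 0.9                  # pi0 = 0.9 (simulated truth)
z = np.where(is_null, rng.standard_normal(n), rng.normal(3.0, 1.0, n))

# Histogram the z-scores and fit a polynomial to the log-counts: an
# unconstrained, simplified version of the lFDR regression idea
counts, edges = np.histogram(z, bins=np.linspace(-5, 7, 61))
mid = 0.5*(edges[:-1] + edges[1:])
keep = counts >= 5
coef = np.polyfit(mid[keep], np.log(counts[keep]), deg=6)
width = edges[1] - edges[0]
f_hat = np.exp(np.polyval(coef, mid)) / (n*width)   # mixture density f(z)
f0 = np.exp(-mid**2/2) / np.sqrt(2*np.pi)           # theoretical null N(0,1)
lfdr = np.clip(0.9*f0/f_hat, 0.0, 1.0)              # lFDR = pi0*f0/f
```

Near z = 0 the estimated lFDR is close to 1 (overwhelmingly null), while around z = 4 it is small, so tests there would be declared associated; averaging the lFDR over a rejected subset yields the subset's FDR estimate mentioned in the abstract.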

  13. Study on a method for estimating fuel consumption in a seaway

    DEFF Research Database (Denmark)

    Iseki, Toshio; Nielsen, Ulrik Dam

    2013-01-01

    direction has a great influence on the main engine horse power and fuel consumption, and also shows a possibility of fuel efficiency prediction. In order to develop an eco-friendly navigation support system, results of Bayesian wave estimation are applied to fuel efficiency prediction. The Bayesian method...

  14. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of 14 C-samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters, were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicate that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)
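
The sample-channels-ratio method referred to above fits a calibration curve of counting efficiency against the channels ratio of a set of quenched standards, then converts observed counts to disintegrations. The calibration points below are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical quench calibration: sample channels ratio (SCR) vs measured
# counting efficiency for a series of quenched 14C standards
scr = np.array([0.40, 0.50, 0.60, 0.70, 0.80])
eff = np.array([0.55, 0.68, 0.78, 0.86, 0.92])

coef = np.polyfit(scr, eff, deg=2)      # quadratic quench-correction curve

def predicted_efficiency(ratio):
    """Counting efficiency predicted from a sample's channels ratio."""
    return float(np.polyval(coef, ratio))

def dpm(cpm, ratio):
    """Disintegrations per minute from observed counts per minute."""
    return cpm / predicted_efficiency(ratio)
```

A sample counted at 1000 cpm with a channels ratio of 0.6 is thus corrected to roughly 1280 dpm; the study's finding is that this correction remains usable even for 2 ml samples, at only a small cost in precision.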

  15. Estimating the efficiency from Brazilian banks: a bootstrapped Data Envelopment Analysis (DEA

    Directory of Open Access Journals (Sweden)

    Ana Elisa Périco

    2016-01-01

Full Text Available Abstract The Brazilian banking sector went through several changes in its structure over the past few years. Such changes are related to mergers and acquisitions, as well as a greater opening of the market to foreign banks. The objective of this paper is to analyze, by applying bootstrapped DEA, the efficiency of banks in Brazil in 2010-2013. The methodology was applied to the 30 largest banking organizations under a financial intermediation approach. In that model, the resources entering a bank in the form of deposits and total assets are classified as inputs, and besides these, labor is also considered a resource capable of generating results. For the output variable, credit operations represent the most appropriate alternative, considering the role of the bank as a financial intermediary. In this work, the question of the best ranking among retail banks and banks specialized in credit has little relevance, since the segments were analyzed separately. The results presented here point to an average level of efficiency for the large Brazilian banks in the period. This scenario requires efforts to reduce expenses but also to increase revenues.
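
With a single input and a single output, DEA efficiency reduces to each unit's output/input ratio normalised by the best ratio in the sample, which makes the bootstrap idea easy to sketch. The data are invented, and the naive resampling below is a simplified stand-in for the smoothed bootstrap normally used in the bootstrapped-DEA literature:

```python
import numpy as np

rng = np.random.default_rng(3)
inputs  = np.array([5.0, 8.0, 4.0, 10.0, 6.0])   # e.g. deposits (hypothetical)
outputs = np.array([4.0, 6.5, 2.0,  9.0, 5.5])   # e.g. credit operations

def ccr_scores(x, y):
    """One-input/one-output CCR efficiency: ratio y/x scaled by the best
    ratio in the sample, so the frontier unit scores exactly 1."""
    r = y / x
    return r / r.max()

point = ccr_scores(inputs, outputs)

# Naive bootstrap: resample the banks, rebuild the frontier, and rescore
# the original banks against each resampled frontier
B = 2000
boot = np.empty((B, len(inputs)))
for b in range(B):
    idx = rng.integers(0, len(inputs), len(inputs))
    r_star = (outputs[idx] / inputs[idx]).max()
    boot[b] = (outputs / inputs) / r_star
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
```

The percentile band conveys the point of bootstrapping DEA: point estimates are biased towards the frontier because the frontier itself is estimated, so each score should be read together with its confidence interval.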

  16. Indicators System Creation For The Energy Efficiency Benchmarking Of Municipal Power System Facilities

    Directory of Open Access Journals (Sweden)

    Davydenko L.V.

    2015-04-01

Full Text Available The paper considers the data requirements of a benchmarking procedure for estimating the energy efficiency level of municipal power system facilities, taking into account the hierarchical structure of the heat supply system. The aim is to form a system of indicators characterizing the efficiency of energy use for objects at both the lowest and the highest levels of the power system, proceeding from the features of their functioning. The benchmarking methodology allows the energy efficiency level to be estimated on the basis of a plurality of parameters without aggregating them into a single indicator, but it requires that the parameters be comparable. To make the benchmarking procedure feasible using available statistical information, without deep specification or additional inspection, a structuring of the objectives and tasks of the energy efficiency estimation problem is proposed. This makes it possible to form subsets of indicators that describe the object of study in sufficient detail, with a degree of abstraction appropriate to each hierarchical level or subproblem. For the comparative analysis of energy-use efficiency in municipal power systems at the highest levels of the hierarchy, a plurality of energy efficiency indicators has been formed. The indicators were determined with consideration of the structural elements of heat supply systems, and they account for the initial state of the objects, their functioning, and the organization of energy resource accounting. Use of the proposed indicators enables monitoring of energy-use efficiency in the municipal power system and gives a complete overview of the problem.

  17. The Multiple Benefits of Measures to Improve Energy Efficiency

    DEFF Research Database (Denmark)

    Puig, Daniel; Farrell, Timothy Clifford

Understanding the barriers to, and enablers for, energy efficiency requires targeted information and analysis. This report is a summary of four detailed studies providing new insights on how to promote efficiency in selected priority areas. It complements initiatives such as the so-called energy efficiency accelerators, which seek to increase the uptake of selected technologies, as well as the work of many other institutions committed to improving energy efficiency. The modelling estimates and the case studies presented in this report illustrate that, while significant progress has already been achieved, the case for accelerating energy efficiency action is strong. Key highlights include: • At the global level, energy efficiency improvements would account for between 2.6 and 3.3 Gt CO2e of the reductions in 2030, equivalent to between 23 and 26 percent of the overall reductions achieved...

  18. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct, brute-force approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  19. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

Wei, Jiawei; Carroll, Raymond J.; Müller, Ursula U.; Keilegom, Ingrid Van; Chatterjee, Nilanjan

    2012-01-01

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.

  20. Robust estimation for homoscedastic regression in the secondary analysis of case-control data

    KAUST Repository

    Wei, Jiawei

    2012-12-04

    Primary analysis of case-control studies focuses on the relationship between disease D and a set of covariates of interest (Y, X). A secondary application of the case-control study, which is often invoked in modern genetic epidemiologic association studies, is to investigate the interrelationship between the covariates themselves. The task is complicated owing to the case-control sampling, where the regression of Y on X is different from what it is in the population. Previous work has assumed a parametric distribution for Y given X and derived semiparametric efficient estimation and inference without any distributional assumptions about X. We take up the issue of estimation of a regression function when Y given X follows a homoscedastic regression model, but otherwise the distribution of Y is unspecified. The semiparametric efficient approaches can be used to construct semiparametric efficient estimates, but they suffer from a lack of robustness to the assumed model for Y given X. We take an entirely different approach. We show how to estimate the regression parameters consistently even if the assumed model for Y given X is incorrect, and thus the estimates are model robust. For this we make the assumption that the disease rate is known or well estimated. The assumption can be dropped when the disease is rare, which is typically so for most case-control studies, and the estimation algorithm simplifies. Simulations and empirical examples are used to illustrate the approach.