Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
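A minimal sketch of the rejection-filtering idea on simulated measurements; the two-outcome likelihood model, the adaptive measurement offset, and the Gaussian resampling update below are simplifying assumptions for illustration, not the authors' exact algorithm:

```python
import math
import random

def rejection_filter_phase(true_phase, iters=100, batch=200, seed=0):
    """Toy adaptive Bayesian phase estimation via rejection filtering.

    Assumed likelihood: P(d=0 | phi; theta) = (1 + cos(phi - theta)) / 2.
    The posterior is summarized by a Gaussian (mu, sigma), refreshed from
    the samples that survive the rejection step after each measurement.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # broad Gaussian prior
    for _ in range(iters):
        theta = mu + math.pi / 2  # adaptive offset: steepest likelihood at mu
        p0 = (1 + math.cos(true_phase - theta)) / 2
        d = 0 if rng.random() < p0 else 1  # simulate the experiment
        accepted = []
        for _ in range(batch):
            phi = rng.gauss(mu, sigma)
            like = (1 + math.cos(phi - theta)) / 2
            if d == 1:
                like = 1 - like
            if rng.random() < like:  # rejection step: keep with prob = likelihood
                accepted.append(phi)
        if len(accepted) >= 2:
            m = sum(accepted) / len(accepted)
            v = sum((a - m) ** 2 for a in accepted) / len(accepted)
            mu, sigma = m, max(1.02 * math.sqrt(v), 0.01)  # mild inflation guard
    return mu, sigma
```

Running `rejection_filter_phase(1.0)` should drive `mu` toward the true phase while `sigma` shrinks well below the prior width.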
On Asymptotically Efficient Estimation in Semiparametric Models
Schick, Anton
1986-01-01
A general method for the construction of asymptotically efficient estimates in semiparametric models is presented. It improves and modifies Bickel's (1982) construction of adaptive estimates and obtains asymptotically efficient estimates under conditions weaker than those in Bickel.
Econometric Analysis on Efficiency of Estimator
Khoshnevisan, M.; Kaymram, F.; Singh, Housila P.; Singh, Rajesh; Smarandache, Florentin
2003-01-01
This paper investigates the efficiency of an alternative to ratio estimator under the super population model with uncorrelated errors and a gamma-distributed auxiliary variable. Comparisons with usual ratio and unbiased estimators are also made.
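The comparison the abstract describes can be illustrated with a small Monte Carlo study; the superpopulation below (a linear y-x relation with additive noise) is a made-up stand-in, not the paper's gamma-distributed model:

```python
import random

def compare_ratio_estimator(reps=2000, N=1000, n=50, seed=1):
    """Monte Carlo comparison of the classical ratio estimator against the
    plain sample mean for estimating a population mean, when an auxiliary
    variable x (with known population mean) is strongly correlated with y."""
    rng = random.Random(seed)
    pop_x = [rng.uniform(10, 50) for _ in range(N)]
    pop_y = [2.0 * x + rng.gauss(0, 4) for x in pop_x]  # illustrative model
    X_bar = sum(pop_x) / N
    Y_bar = sum(pop_y) / N
    mse_ratio = mse_mean = 0.0
    for _ in range(reps):
        idx = rng.sample(range(N), n)
        x_bar = sum(pop_x[i] for i in idx) / n
        y_bar = sum(pop_y[i] for i in idx) / n
        y_ratio = (y_bar / x_bar) * X_bar  # ratio estimator
        mse_ratio += (y_ratio - Y_bar) ** 2
        mse_mean += (y_bar - Y_bar) ** 2
    return mse_ratio / reps, mse_mean / reps
```

With a strongly correlated auxiliary variable, the ratio estimator's mean squared error comes out well below that of the unbiased sample mean.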
Management systems efficiency estimation in tourism organizations
Alexandra I. Mikheyeva
2011-01-01
The article is concerned with the estimation of management system efficiency in tourism organizations: it examines the requirements and characteristics of effective management systems in tourism organizations and takes into account the principles of management system formation.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without...... a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental...... results indicate that estimation errors can be greatly reduced, leading to orders of magnitude more efficient query execution plans in many cases. Optimization time is kept in the range of tens of milliseconds, making this a practical approach for industrial-strength query optimizers....
Efficient estimation of price adjustment coefficients
Lyhagen, Johan
1999-01-01
The price adjustment coefficient model of Amihud and Mendelson (1987) is shown to be suitable for estimation by the Kalman filter, a technique that, under some commonly used conditions, is asymptotically efficient. By Monte Carlo simulations it is shown that both bias and mean squared error are much smaller compared to the estimator proposed by Damodaran and Lim (1991) and Damodaran (1993). A test for the adequacy of the model is also proposed. Using data from four minor, the Nordic countries ex...
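A scalar Kalman filter for a partial-adjustment state-space model of this kind can be sketched as follows; the model form and parameter values are illustrative assumptions, with the adjustment coefficient g treated as known rather than estimated:

```python
import random

def kalman_price_filter(prices, g, q, r, v0, p0):
    """Scalar Kalman filter for an assumed partial price adjustment model:
        V_t = V_{t-1} + w_t,                   w_t ~ N(0, q)  (intrinsic value)
        P_t = (1 - g) P_{t-1} + g V_t + u_t,   u_t ~ N(0, r)  (observed price)
    Filtering uses the pseudo-observation y_t = P_t - (1-g) P_{t-1} = g V_t + u_t."""
    v, p = v0, p0
    estimates = []
    k = 0.0
    for t in range(1, len(prices)):
        p = p + q                                # predict
        y = prices[t] - (1 - g) * prices[t - 1]  # pseudo-observation
        k = p * g / (g * g * p + r)              # Kalman gain
        v = v + k * (y - g * v)                  # update state estimate
        p = (1 - k * g) * p                      # update variance
        estimates.append(v)
    return estimates, k
```

On simulated data the filtered value estimate tracks the latent intrinsic value more closely than the naive inversion y_t / g of each pseudo-observation.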
An efficient estimator for Gibbs random fields
Czech Academy of Sciences Publication Activity Database
Janžura, Martin
2014-01-01
Roč. 50, č. 6 (2014), s. 883-895. ISSN 0023-5954 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : Gibbs random field * efficient estimator * empirical estimator Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf
Efficient uncertainty analysis using optimal statistical estimator
International Nuclear Information System (INIS)
When performing best estimate calculations, uncertainty needs to be quantified. Different uncertainty methods have been developed worldwide for uncertainty evaluation. In the present work an optimal statistical estimator algorithm was adapted, extended and used for response surface generation. The objective of the study was to demonstrate its applicability for uncertainty evaluation of single-valued or continuous-valued parameters. The results showed that the optimal statistical estimator is efficient in predicting lower and upper uncertainty bounds. This makes it possible to apply the CsA method for uncertainty evaluation of any kind of transient. (author)
Holzegel, Gustav
2016-01-01
We generalize our unique continuation results recently established for a class of linear and nonlinear wave equations $\\Box_g \\phi + \\sigma \\phi = \\mathcal{G} ( \\phi, \\partial \\phi )$ on asymptotically anti-de Sitter (aAdS) spacetimes to aAdS spacetimes admitting non-static boundary metrics. The new Carleman estimates established in this setting constitute an essential ingredient in proving unique continuation results for the full nonlinear Einstein equations, which will be addressed in forthcoming papers. Key to the proof is a new geometrically adapted construction of foliations of pseudoconvex hypersurfaces near the conformal boundary.
Higher Efficiency of Motion Estimation Methods
Directory of Open Access Journals (Sweden)
J. Gamec
2004-12-01
Full Text Available This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relatively low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted modifications conditionally use the AWM and the full search algorithm (FSA). Simulation results show that the proposed technique can efficiently improve the motion estimation performance.
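A weighted median, of which the plain median is the equal-weights special case, can be sketched as follows; the fixed (non-adaptive) weights here are a simplification of the AWM, whose weights adapt to local signal statistics:

```python
def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight reaches
    half of the total weight.  With equal weights this reduces to the
    ordinary median used by the classical median filter."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
    return pairs[-1][0]

def weighted_median_filter(signal, weights):
    """Apply a sliding weighted-median window (window length = len(weights));
    samples near the edges are passed through unchanged."""
    k = len(weights) // 2
    out = list(signal)
    for i in range(k, len(signal) - k):
        out[i] = weighted_median(signal[i - k:i + k + 1], weights)
    return out
```

With equal weights the filter removes isolated impulses, the property motion estimation post-processing relies on.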
Optimal estimator for assessing landslide model efficiency
Directory of Open Access Journals (Sweden)
J. C. Huang
2006-06-01
Full Text Available The often-used success rate (SR) in measuring cell-based landslide model efficiency is based on the ratio of successfully predicted unstable cells over total actual landslide sites, without considering the performance in predicting stable cells. We proposed a modified SR (MSR), in which we include the performance of stable cell prediction. The goal and virtue of MSR is to avoid over-prediction while upholding stable sensitivity throughout all simulated cases. Landslide susceptibility maps (a total of 3969 cases) with a full range of performance (from worst to perfect) in stable and unstable cell predictions are created and used to probe how estimators respond to model results in calculating efficiency. The kappa method used for satellite image analysis is drawn on for comparison. Results indicate that kappa is too stern for landslide modeling, giving very low efficiency values in 90% of simulated cases. The old SR tends to give high model efficiency under certain conditions, yet with significant over-prediction. To examine the capability of MSR and the differences between SR and MSR as performance indicators, we applied the SHALSTAB model to a mountainous watershed in Taiwan. Although the best model result deduced by SR projects 120 hits over 131 actual landslide sites, this high efficiency is only obtained when unstable cells cover an incredibly high percentage (75%) of the entire watershed. By contrast, the best simulation indicated by MSR projects 83 hits over 131 actual landslide sites while unstable cells cover only 16% of the studied watershed.
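The contrast between SR and an MSR-style score can be sketched numerically. The exact MSR formula is not given in the abstract, so the average of the unstable-cell and stable-cell hit rates below is an assumed illustrative form, and the watershed size and stable-cell counts are invented to echo the abstract's hit numbers:

```python
def success_rate(hits, actual_unstable):
    """Classical SR: fraction of actual landslide cells correctly predicted."""
    return hits / actual_unstable

def modified_sr(hits, actual_unstable, correct_stable, actual_stable):
    """Illustrative MSR-style score (assumed form, not the authors' exact
    formula): average the unstable-cell and stable-cell prediction rates,
    so blanket over-prediction of instability is penalized."""
    tpr = hits / actual_unstable
    tnr = correct_stable / actual_stable
    return 0.5 * (tpr + tnr)
```

For a 100,000-cell watershed with 131 landslide cells, a model flagging 75% of all cells (120 hits) beats a model flagging 16% (83 hits) on SR, but loses on the MSR-style score.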
Efficient estimation of rare-event kinetics
Trendelkamp-Schroer, Benjamin
2014-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively - but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general as it does not invoke any specific dynamical model, and can provide accurate estimates of the rare event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the proces...
Efficient Estimation of Rare-Event Kinetics
Trendelkamp-Schroer, Benjamin; Noé, Frank
2016-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively, but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general, as it does not invoke any specific dynamical model and can provide accurate estimates of the rare-event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the process occurs more rapidly than its reverse, making the approach especially attractive for downhill processes such as folding and binding in biomolecules. Our method is implemented in the PyEMMA software.
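A standard fixed-point iteration for the maximum-likelihood reversible transition matrix, the kind of constrained Markov model estimation the abstract builds on, can be sketched as follows; this is the generic reversible estimator, whereas the paper's estimators additionally exploit a known equilibrium distribution:

```python
def reversible_mle(C, iters=1000):
    """Fixed-point iteration for the maximum-likelihood *reversible*
    transition matrix given a transition count matrix C:
        x_ij <- (c_ij + c_ji) / (c_i / x_i + c_j / x_j)
    where c_i and x_i are row sums.  The iterate x is symmetric, so the
    resulting T_ij = x_ij / x_i satisfies detailed balance with respect to
    pi, the normalized row sums of x."""
    n = len(C)
    c_row = [sum(row) for row in C]
    total = float(sum(c_row))
    x = [[C[i][j] / total for j in range(n)] for i in range(n)]
    for _ in range(iters):
        x_row = [sum(row) for row in x]
        x = [[(C[i][j] + C[j][i]) / (c_row[i] / x_row[i] + c_row[j] / x_row[j])
              for j in range(n)] for i in range(n)]
    x_row = [sum(row) for row in x]
    s = sum(x_row)
    T = [[x[i][j] / x_row[i] for j in range(n)] for i in range(n)]
    pi = [xr / s for xr in x_row]
    return T, pi
```

The returned matrix is row-stochastic and reversible by construction, which is what lets equilibrium information constrain kinetic estimates.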
ON THE UNBIASED ESTIMATOR OF THE EFFICIENT FRONTIER
OLHA BODNAR; TARAS BODNAR
2010-01-01
In the paper, we derive an unbiased estimator of the efficient frontier. It is shown that the suggested estimator corrects the overoptimism of the sample efficient frontier documented in Siegel and Woodgate (2007). Moreover, an exact F-test on the efficient frontier is presented.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a...
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Bing Liu; Zhen Chen; Xiangdong Liu; Fan Yang
2014-01-01
Increasing the computational efficiency of attitude estimation is a critical problem related to modern spacecraft, especially for those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on the vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint for the quaternion in the global estimator. The proposed attitude estimator is perfor...
Higher Efficiency of Motion Estimation Methods
J. Gamec; Marchevsky, S.; Gamcova, M.
2004-01-01
This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relative low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted mod...
Quantum enhanced estimation of optical detector efficiencies
Directory of Open Access Journals (Sweden)
Barbieri Marco
2016-01-01
Full Text Available Quantum mechanics establishes the ultimate limit to the scaling of the precision on any parameter, by identifying optimal probe states and measurements. While this paradigm is, at least in principle, adequate for the metrology of quantum channels involving the estimation of phase and loss parameters, we show that estimating the loss parameter of a quantum channel and that of a realistic quantum detector are fundamentally different problems. While Fock states are provably optimal for the former, we identify a crossover in the nature of the optimal probe state for estimating detector imperfections as a function of the loss parameter, using Fisher information as a benchmark. We provide theoretical results for on-off and homodyne detectors, the most widely used detectors in quantum photonics technologies, when using Fock states and coherent states as probes.
How efficient is estimation with missing data?
DEFF Research Database (Denmark)
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai;
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs) in which their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data...... train a Gaussian mixture model (GMM). We test the trained GMM for two cases, in which the test dataset is missing or complete. The results show that CEM is the most efficient method in both cases while MI is the worst performer of the three. PW and CEM prove to be more stable, in particular for higher MDP...
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2016-01-01
parametric methods are much more costly to run. In this paper, we propose an algorithm which significantly reduces the cost of an accurate maximum likelihood-based estimator for real-valued data. The speed up is obtained by exploiting the matrix structure of the problem and by using a recursive solver. Via...
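The maximum-likelihood flavour of fundamental frequency estimation can be approximated by harmonic summation over a frequency grid, as in this sketch (a plain grid search, without the matrix-structure exploitation and recursive solver the paper proposes for its speedup):

```python
import cmath
import math

def estimate_f0(x, n_harm, f_grid):
    """Harmonic-summation pitch estimate: pick the candidate fundamental
    maximizing the summed DFT power at f, 2f, ..., n_harm*f (an
    approximation to the NLS/ML estimator for well-separated harmonics).
    Frequencies are in cycles per sample."""
    N = len(x)

    def power(f):
        s = sum(x[n] * cmath.exp(-2j * math.pi * f * n) for n in range(N))
        return abs(s) ** 2

    return max(f_grid, key=lambda f: sum(power(l * f) for l in range(1, n_harm + 1)))
```

On a clean three-harmonic signal the grid search recovers the true fundamental; the cost is one DFT projection per harmonic per candidate, which is exactly what the paper's recursive solver avoids recomputing.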
Optimal estimator for assessing landslide model efficiency
Huang, J C; S. J. Kao
2006-01-01
The often-used success rate (SR) in measuring cell-based landslide model efficiency is based on the ratio of successfully predicted unstable cells over total actual landslide sites without considering the performance in predicting stable cells. We proposed a modified SR (MSR), in which we include the performance of stable cell prediction. The goal and virtue of MSR is to avoid over-prediction while upholding stable sensitivity throughout all simulated cases. Landslide susceptibility maps (a t...
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Directory of Open Access Journals (Sweden)
Bing Liu
2014-01-01
Full Text Available Increasing the computational efficiency of attitude estimation is a critical problem related to modern spacecraft, especially for those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint for the quaternion in the global estimator. The proposed attitude estimator is performed in four stages. First, the local attitude estimation error system is described by a polytopic linear model. Then the local error attitude estimator is designed with constant coefficients based on the robust H2 filtering algorithm. Subsequently, the attitude predictions and the local error attitude estimations are calculated by a gyro-based model and the local error attitude estimator. Finally, the attitude estimations are updated by the predicted attitude with the local error attitude estimations. Since the local error attitude estimator has constant coefficients, it does not need to calculate the matrix inversion for the filter gain matrix or update the Jacobian matrices online to obtain the local error attitude estimations. As a result, the computational complexity of the proposed attitude estimator is reduced significantly. Simulation results demonstrate the efficiency of the proposed attitude estimation strategy.
Efficient estimation for ergodic diffusions sampled at high frequency
DEFF Research Database (Denmark)
Sørensen, Michael
estimators of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in......A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...... the sense of Godambe and Heyde are shown to give rate-optimal and efficient estimators under weak conditions....
Efficient estimation for high similarities using odd sketches
DEFF Research Database (Denmark)
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector......Estimating set similarity is a central problem in many computer applications. In this paper we introduce the Odd Sketch, a compact binary sketch for estimating the Jaccard similarity of two sets. The exclusive-or of two sketches equals the sketch of the symmetric difference of the two sets. This...... comparison. We present a theoretical analysis of the quality of estimation to guarantee the reliability of Odd Sketch-based estimators. Our experiments confirm this efficiency, and demonstrate the efficiency of Odd Sketches in comparison with $b$-bit minwise hashing schemes on association rule learning and...
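The core Odd Sketch construction, one parity bit per hashed bucket, can be sketched as follows; the simple multiplicative hash and the logarithmic symmetric-difference estimator below follow the general form described for such sketches and are illustrative assumptions:

```python
import math

def odd_sketch(items, n_bits=1024):
    """Odd sketch: one parity bit per bucket; inserting an item flips the
    bit at its hashed position.  Because shared items flip the same bit
    twice, the XOR of two sketches equals the sketch of the symmetric
    difference of the two sets (exactly, for a shared hash function)."""
    bits = 0
    for item in items:
        bits ^= 1 << ((item * 2654435761) % n_bits)  # simple multiplicative hash
    return bits

def estimate_symm_diff(s1, s2, n_bits=1024):
    """Estimate |A ^ B| from the number of set bits z in the XORed
    sketches, using the -n/2 * ln(1 - 2z/n) form of the estimator."""
    z = bin(s1 ^ s2).count("1")
    return -(n_bits / 2) * math.log(1 - 2 * z / n_bits)
```

The XOR identity is exact by construction; the symmetric-difference estimate is accurate while the sketch is far from saturated, which is the high-similarity regime the abstract targets.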
PFP total operating efficiency calculation and basis of estimate
International Nuclear Information System (INIS)
The purpose of the Plutonium Finishing Plant (PFP) Total Operating Efficiency Calculation and Basis of Estimate document is to provide the calculated value and basis of estimate for the Total Operating Efficiency (TOE) for the material stabilization operations to be conducted in 234-52 Building. This information will be used to support both the planning and execution of the Plutonium Finishing Plant (PFP) Stabilization and Deactivation Project's (hereafter called the Project) resource-loaded, integrated schedule
Efficient estimation of functionals in nonparametric boundary models
Reiß, Markus; Selk, Leonie
2014-01-01
For nonparametric regression with one-sided errors and a boundary curve model for Poisson point processes, we consider the problem of efficient estimation for linear functionals. The minimax optimal rate is obtained by an unbiased estimation method which nevertheless depends on a Hölder condition or monotonicity assumption for the underlying regression or boundary function. We first construct a simple blockwise estimator and then build up a nonparametric maximum-likelihood approach for expon...
Extrapolated HPGe efficiency estimates based on a single calibration measurement
International Nuclear Information System (INIS)
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε_0 of the base sample of volume V_0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V_0, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = ½[ε_h + ε_L] ± ½[ε_h - ε_L], where the uncertainty Δε = ½[ε_h - ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε_0 as V deviates from V_0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
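The bracketing expression described in the abstract is simple arithmetic and can be written directly:

```python
def bracketed_efficiency(eps_h, eps_l):
    """Average of the high and low extrapolated efficiency estimates, with
    the half-range returned as the maximum-possible-error bracket."""
    eps = 0.5 * (eps_h + eps_l)
    delta = 0.5 * (eps_h - eps_l)
    return eps, delta
```

For example, extrapolated bounds of 0.012 and 0.008 give an estimate of 0.010 with a bracket of 0.002; the bracket widens as the sample volume departs further from the calibrated volume V_0.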
Robust and efficient estimation with weighted composite quantile regression
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
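The composite quantile idea, pooling check-function losses over several quantile levels with a shared slope, can be sketched with a grid search; equal weights and a one-covariate linear model are simplifying assumptions here (the paper uses data-driven weights and a proper optimizer):

```python
import math

def pinball(u, tau):
    """Quantile check (pinball) loss."""
    return u * (tau - (1 if u < 0 else 0))

def cqr_slope(xs, ys, taus, betas):
    """Composite quantile regression for a single slope by grid search.
    Each quantile level gets its own intercept, taken as the empirical
    tau-quantile of the residuals; all levels share the slope beta."""
    def loss(beta):
        r = sorted(y - beta * x for x, y in zip(xs, ys))
        total = 0.0
        for tau in taus:
            b = r[max(0, math.ceil(len(r) * tau) - 1)]  # empirical tau-quantile
            total += sum(pinball(ri - b, tau) for ri in r)
        return total
    return min(betas, key=loss)
```

On noiseless data y = 2x + 1 the composite loss is zero at the true slope, since every quantile level fits the same residual; with noisy data, pooling levels is what buys the efficiency the abstract describes.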
Phytoremediation: realistic estimation of modern efficiency and future possibility
International Nuclear Information System (INIS)
Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural cleaning of the soil has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. General requirements and the future potential of phytoremediation biotechnology have been formulated. (author)
Improving Woody Biomass Estimation Efficiency Using Double Sampling
B. Scott Shouse; Lhotka, John M.; Songlin Fei; Parrott, David L.
2012-01-01
Although double sampling has been shown to be an effective method to estimate timber volume in forest inventories, only a limited body of research has tested the effectiveness of double sampling on forest biomass estimation. From forest biomass inventories collected over 9,683 ha using systematic point sampling, we examined how a double sampling scheme would have affected precision and efficiency in these biomass inventories. Our results indicated that double sample methods would have yielded...
Technical Efficiency Estimation of Rice Production in South Korea
Mohammed, Rezgar; Saghaian, Sayed
2014-01-01
This paper uses a stochastic frontier production function to estimate the technical efficiency of rice production in South Korea. Data from eight provinces were taken between 1993 and 2012. The purpose of this study is to determine whether the agricultural policy made by the Korean government achieved high technical efficiency in rice production and also to identify the variables that could decrease technical inefficiency in rice production. The study showed there is a possibility to ...
Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism
Directory of Open Access Journals (Sweden)
Custer Adrian V
2005-08-01
Full Text Available Abstract Background The time-varying flows of biomass and energy in tsetse (Glossina) can be examined through the construction of a dynamic mass-energy budget specific to these flies, but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because the approach ignores all the biological inefficiencies and costs such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed based on a detailed quantification of the composition of food and body tissue and on a description of the major metabolic pathways in tsetse simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to
Improving Woody Biomass Estimation Efficiency Using Double Sampling
Directory of Open Access Journals (Sweden)
B. Scott Shouse
2012-05-01
Full Text Available Although double sampling has been shown to be an effective method to estimate timber volume in forest inventories, only a limited body of research has tested the effectiveness of double sampling on forest biomass estimation. From forest biomass inventories collected over 9,683 ha using systematic point sampling, we examined how a double sampling scheme would have affected precision and efficiency in these biomass inventories. Our results indicated that double sample methods would have yielded biomass estimations with similar precision as systematic point sampling when the small sample was ≥ 20% of the large sample. When the small to large sample time ratio was 3:1, relative efficiency (a combined measure of time and precision was highest when the small sample was a 30% subsample of the large sample. At a 30% double sample intensity, there was a < 3% deviation from the original percent margin of error and almost half the required time. Results suggest that double sampling can be an efficient tool for natural resource managers to estimate forest biomass.
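A double-sampling ratio estimator of mean biomass can be compared with the small-sample mean in a toy simulation; the population values, covariate relation, and sample sizes below are invented for illustration and are not the study's data:

```python
import random

def double_sampling_sim(reps=500, N=2000, n_large=200, frac=0.3, seed=3):
    """Toy double-sampling (ratio) estimator of mean biomass: a cheap
    covariate x (say, basal area) is measured on a large sample, and
    biomass y only on a subsample; compared against using the costly
    small sample alone."""
    rng = random.Random(seed)
    pop_x = [rng.uniform(5, 40) for _ in range(N)]
    pop_y = [3.0 * x + rng.gauss(0, 6) for x in pop_x]  # invented relation
    Y_bar = sum(pop_y) / N
    n_small = int(n_large * frac)
    mse_ds = mse_small = 0.0
    for _ in range(reps):
        large = rng.sample(range(N), n_large)  # cheap measurements
        small = large[:n_small]                # costly subsample
        x_large = sum(pop_x[i] for i in large) / n_large
        x_small = sum(pop_x[i] for i in small) / n_small
        y_small = sum(pop_y[i] for i in small) / n_small
        y_ds = (y_small / x_small) * x_large   # double-sampling ratio estimate
        mse_ds += (y_ds - Y_bar) ** 2
        mse_small += (y_small - Y_bar) ** 2
    return mse_ds / reps, mse_small / reps
```

With a strongly correlated cheap covariate, the double-sampling estimator's mean squared error falls well below that of the small sample alone, which is the precision-per-cost gain the article quantifies.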
Deductive derivation and turing-computerization of semiparametric efficient estimation.
Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-12-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save substantial human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous-time regression model with dependent disturbances given by a general square-integrable semimartingale with unknown distribution. An example of such noise is a non-Gaussian Ornstein-Uhlenbeck process with a Lévy subordinator, which is used to model financial Black-Scholes-type markets with jumps. An adaptive model selection procedure based on weighted least squares estimates is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived and the robust efficiency of the model selection procedure is shown.
Recent estimates of energy efficiency potential in the USA
Energy Technology Data Exchange (ETDEWEB)
Sreedharan, P. [Energy and Environmental Economics E3, 101 Montgomery Street, 16th Floor, San Francisco, CA 94104 (United States)
2013-08-15
Understanding the potential for reducing energy demand through increased end-use energy efficiency can inform energy and climate policy decisions. However, if potential estimates are vastly different, they engender controversial debates, clouding the usefulness of energy efficiency in shaping a clean energy future. A substantive question thus arises: is there a general consensus on the potential estimates? To answer this question, this paper reviews recent studies of US national and regional energy efficiency potential in buildings and industry. Although these studies are based on differing assumptions, methods, and data, they suggest technically possible reductions of circa 25-40% in electricity demand and circa 30% in natural gas demand in 2020, and economic reductions of circa 10-25% in electricity demand and circa 20% in natural gas demand in 2020. These estimates imply effects on US electricity demand from 2009 to 2020 that range from turning demand growth negative to reducing it to a growth rate of circa 0.3%/year (compared with circa 1% baseline growth).
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n³) operations and O(n²) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
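The "set the quadratic form equal to its expected value" device in this abstract rests on the identity E[zᵀAz] = tr(AΣ) for z ~ N(0, Σ). A minimal Monte Carlo check of that identity, with a toy covariance and an arbitrary sparse surrogate matrix A (both invented here, far simpler than the paper's sparse inverse Cholesky construction), looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariance for n = 4 "spatial" observations (exponential decay with distance)
n = 4
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma = np.exp(-0.5 * dist)

# Any sparse surrogate A for the inverse covariance (here: a tridiagonal guess)
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.7), 1)
     + np.diag(np.full(n - 1, -0.7), -1))

# Monte Carlo check of E[z' A z] = tr(A Sigma): equating a quadratic form to
# this expectation is what makes the resulting estimating equation unbiased.
L = np.linalg.cholesky(Sigma)
samples = L @ rng.standard_normal((n, 200_000))         # z ~ N(0, Sigma)
quad = np.einsum('ij,ji->i', samples.T @ A, samples)    # z' A z per draw
print(quad.mean(), np.trace(A @ Sigma))
```

The identity holds for any A, which is why the sparse replacement costs statistical efficiency but not unbiasedness.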
Kernel density estimation of a multidimensional efficiency profile
Poluektov, Anton
2014-01-01
Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach is proposed to correct for such effects by using an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for describing the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\Lambda_b^0\to D^0p\pi$ decay.
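The boundary smearing that motivates the correction above is easy to reproduce. The sketch below (a plain one-dimensional Gaussian KDE, not the paper's corrected multidimensional estimator) shows the estimate tracking the true density in the interior but collapsing to roughly half the true value at a hard boundary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample from a density with a hard boundary at x = 0 (unit exponential),
# where a plain kernel estimate smears probability mass across the edge.
data = rng.exponential(scale=1.0, size=5000)

def gaussian_kde(x, sample, bandwidth):
    """Plain Gaussian kernel density estimate evaluated at the points x."""
    diffs = (x[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (
        sample.size * bandwidth * np.sqrt(2.0 * np.pi))

x = np.linspace(0.0, 4.0, 81)
est = gaussian_kde(x, data, bandwidth=0.2)

# In the interior the estimate tracks the true density exp(-x); at the
# boundary it falls well short of the true value 1.0, the effect the
# approximate-density correction in the abstract is designed to repair.
i2 = np.argmin(np.abs(x - 2.0))
print(f"at x=2: {est[i2]:.3f} vs true {np.exp(-2.0):.3f}; at x=0: {est[0]:.3f} vs 1.0")
```

The correction idea is to divide out a smooth approximate density before smoothing, so the kernel only has to describe the (mild) residual shape.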
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
Concurrent estimation of efficiency, effectiveness and returns to scale
Khodakarami, Mohsen; Shabani, Amir; Farzipoor Saen, Reza
2016-04-01
In recent years, data envelopment analysis (DEA) has been widely used to assess both efficiency and effectiveness. Accurate measurement of overall performance is a product of concurrent consideration of these measures. There are several well-known methods for assessing both efficiency and effectiveness, but they suffer from a number of issues: a non-linearity problem, paradoxical improvement solutions, evaluation of efficiency and effectiveness in two independent environments (i.e., dividing an operating unit into two autonomous departments for performance evaluation), and problems associated with determining economies of scale. To overcome these issues, this paper aims to develop a series of linear DEA methods to estimate the efficiency, effectiveness, and returns to scale of decision-making units (DMUs) simultaneously. This paper considers the departments of a DMU as a united entity in order to recommend consistent improvements. We first present a model under the constant returns to scale (CRS) assumption and examine its relationship with one of the existing network DEA models. We then extend the model under the variable returns to scale (VRS) condition, and again discuss its relationship with one of the existing network DEA models. Next, we introduce a new integrated two-stage additive model. Finally, an in-depth analysis of returns to scale is provided. A case study demonstrates the applicability of the proposed models.
Efficient distance-including integral screening in linear-scaling Møller-Plesset perturbation theory
Maurer, Simon A.; Lambrecht, Daniel S.; Kussmann, Jörg; Ochsenfeld, Christian
2013-01-01
Efficient estimates for the preselection of two-electron integrals in atomic-orbital-based Møller-Plesset perturbation theory (AO-MP2) are presented, which allow for evaluating the AO-MP2 energy with computational effort that scales linearly with molecular size for systems with a significant HOMO-LUMO gap. The estimates are based on our recently introduced QQR approach [S. A. Maurer, D. S. Lambrecht, D. Flaig, and C. Ochsenfeld, J. Chem. Phys. 136, 144107 (2012), 10.1063/1.3693908], which exploits the asymptotic decay of the integral values with increasing bra-ket separation as deduced from the multipole expansion and combines this decay behavior with the common Schwarz bound into a tight and simple estimate. We demonstrate on a diverse selection of benchmark systems that our AO-MP2 method in combination with the QQR-type estimates produces reliable results for systems with both localized and delocalized electronic structure, while in the latter case the screening essentially reverts to the common Schwarz screening. For systems with localized electronic structure, our AO-MP2 method shows an early onset of linear scaling, as demonstrated on DNA systems. The favorable scaling behavior makes it possible to compute systems with more than 1000 atoms and 10,000 basis functions on a single core that are clearly not accessible with conventional MP2 methods. Furthermore, our AO-MP2 method is particularly suited for parallelization, and we present benchmark calculations on a protein-DNA repair complex comprising 2025 atoms and 20,371 basis functions.
Efficient estimation of orthophoto images using visibility restriction
Miura, Hiroyuki; Chikatsu, Hirofumi
2015-05-01
Orthophoto images generated from aerial photographs are used in river management, road design, and various other fields, since an orthophoto visualizes land use together with positional information. However, image distortion often occurs in the orthorectification process. This distortion is currently assessed manually by an evaluator, which takes a great deal of time, so from the viewpoint of process efficiency it should be estimated automatically. With this motivation, this paper focuses on the angle V formed between the view vector at the exposure point and the normal vector at the center point of a patch area. To evaluate the relation between image distortion and the formed angle V, DMC images acquired at 2,000 m altitude were used, and the formed angle V for 10 m × 10 m patches was computed as a visibility restriction. It was confirmed that image distortion occurred for patches whose formed angle V exceeded 69 degrees. It is therefore concluded that efficient orthophoto evaluation can be performed using the formed angle V as a visibility restriction.
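The formed angle V is simply the angle between the patch-to-camera direction and the patch surface normal. A small sketch (not the authors' code; the overhead camera geometry and the 75-degree slope are invented examples) computes it and applies the 69-degree threshold reported in the abstract:

```python
import numpy as np

def formed_angle_deg(to_camera, normal):
    """Angle V between the patch-to-camera view direction and the patch
    normal, in degrees (illustrative helper, not the authors' implementation)."""
    v = to_camera / np.linalg.norm(to_camera)
    n = normal / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(v @ n, -1.0, 1.0)))

to_camera = np.array([0.0, 0.0, 1.0])          # camera directly overhead
flat = np.array([0.0, 0.0, 1.0])               # flat terrain patch
steep = np.array([np.sin(np.radians(75.0)), 0.0, np.cos(np.radians(75.0))])  # 75 deg slope

for name, nrm in (("flat", flat), ("steep", steep)):
    v_angle = formed_angle_deg(to_camera, nrm)
    print(name, round(v_angle, 1), "distortion expected" if v_angle > 69.0 else "ok")
```

The flat patch gives V = 0 and passes; the 75-degree slope exceeds the threshold and would be flagged as likely distorted.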
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and the individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first-cost and operating-cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the LCC model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
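The sensitivity to the discount rate that this report examines can be seen in a few lines: the present value of a fixed stream of annual energy-cost savings shrinks substantially as the commercial discount rate rises. All figures below are invented for illustration, not drawn from a DOE analysis.

```python
# Present value of a stream of annual energy-cost savings under a commercial
# discount rate, the quantity the LCC analysis discounts. Numbers are invented.
def npv(annual_saving, rate, years):
    return sum(annual_saving / (1.0 + rate) ** t for t in range(1, years + 1))

saving = 120.0   # $/year operating-cost reduction (hypothetical)
life = 15        # equipment lifetime in years (hypothetical)
for r in (0.03, 0.07):   # two candidate commercial discount rates
    print(f"rate {r:.0%}: PV of savings = ${npv(saving, r, life):,.0f}")
```

Moving from a 3% to a 7% rate cuts the present value of the same savings stream by roughly a quarter, which can flip the sign of an LCC comparison against a fixed first-cost increase.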
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion...... simulation study comparing an efficient and a non-efficient estimating function....
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Directory of Open Access Journals (Sweden)
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation on an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation on a Xilinx Kintex FPGA.
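The cross-correlation principle behind the estimator is straightforward to sketch in floating point (the paper's contribution is the channel handling and the fixed-point FPGA implementation, which this toy omits). Sequence length, offset, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Zadoff-Chu preamble of odd length N and root u: constant amplitude and a
# sharp autocorrelation peak, which is what makes it a good timing reference.
N, u = 63, 1
k = np.arange(N)
zc = np.exp(-1j * np.pi * u * k * (k + 1) / N)

# Received frame: complex noise with the preamble embedded at an unknown offset.
offset = 137
frame = 0.3 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))
frame[offset:offset + N] += zc

# Symbol timing estimate: position of the cross-correlation peak.
# (np.correlate conjugates its second argument, as a matched filter requires.)
corr = np.abs(np.correlate(frame, zc, mode='valid'))
print("estimated symbol timing offset:", int(np.argmax(corr)))
```

Because the Zadoff-Chu sequence has constant amplitude and near-ideal autocorrelation, the peak stands well above both the noise floor and the partial-overlap sidelobes.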
Efficient Estimation of Smooth Distributions From Coarsely Grouped Data
DEFF Research Database (Denmark)
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-01-01
Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information...... in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most...... applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm...
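A minimal numerical sketch of the ungrouping idea described above. Note this uses an EM-with-smoothing iteration as a simplified stand-in for the penalized composite link model of the abstract (the known composition matrix is the same ingredient; the penalized likelihood is replaced by an explicit smoothing pass), and the bin counts are invented:

```python
import numpy as np

# Counts in 5 coarse bins, each covering 4 fine cells (composition is known).
y = np.array([10.0, 60.0, 100.0, 50.0, 15.0])
K, width = y.size, 4
J = K * width
C = np.zeros((K, J))
for i in range(K):
    C[i, i * width:(i + 1) * width] = 1.0   # bin i sums cells 4i..4i+3

# EM-with-smoothing: alternate a Richardson-Lucy style EM step for the
# Poisson composite link model with a light smoothing pass on the cells.
gamma = np.full(J, y.sum() / J)             # start from a uniform profile
for _ in range(200):
    mu = C @ gamma
    gamma = gamma * (C.T @ (y / mu)) / C.sum(axis=0)   # EM step (preserves total)
    sm = gamma.copy()                                  # 3-point moving average
    sm[1:-1] = (gamma[:-2] + gamma[1:-1] + gamma[2:]) / 3.0
    gamma = sm
mu = C @ gamma
gamma = gamma * (C.T @ (y / mu)) / C.sum(axis=0)       # finish on an EM step

print("ungrouped total:", gamma.sum(), " observed total:", y.sum())
```

The EM step exactly preserves the observed total count, while the smoothing pass plays the role of the difference penalty: it spreads each bin's mass into a smooth profile across its fine cells.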
An efficient algebraic approach to observability analysis in state estimation
Energy Technology Data Exchange (ETDEWEB)
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
Mossel, Elchanan
2010-01-01
We propose a simple and efficient Bayesian model of iterative learning on social networks. This model is efficient in two senses: the process both results in an optimal belief, and can be carried out with modest computational resources for large networks. This result extends Condorcet's Jury Theorem to general social networks, while preserving rationality and computational feasibility. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement of S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard dev...
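The convergence of estimates on a connected network can be illustrated with a much simpler update than the paper's full Bayesian one: each agent repeatedly averages its estimate with its neighbors'. This iterative-averaging stand-in (graph, seed, and true state below are invented) still shows the two key phenomena, consensus and aggregation of private signals:

```python
import numpy as np

rng = np.random.default_rng(3)

S = 5.0  # true state of the world (e.g., the future price of oil)
# A small connected social network as an adjacency list (a 6-cycle plus chords)
neighbors = {0: [1, 3, 5], 1: [0, 2], 2: [1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [0, 4]}
n = len(neighbors)

est = S + rng.standard_normal(n)    # private measurements S_v ~ N(S, 1)

# Iterative averaging: each round, every agent moves to the mean of its own
# and its neighbors' current estimates (a simplified stand-in for the
# paper's Bayesian update, not the authors' algorithm).
for _ in range(300):
    est = np.array([(est[v] + est[nb].sum()) / (1 + len(nb))
                    for v, nb in neighbors.items()])

print("consensus spread:", est.max() - est.min())
```

On a connected graph the spread contracts geometrically, and the consensus value pools the six noisy private measurements, so it is typically much closer to S than any single one.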
The estimation of energy efficiency for hybrid refrigeration system
International Nuclear Information System (INIS)
Highlights: ► We present the experimental setup and the model of the hybrid cooling system. ► We examine impact of the operating parameters of the hybrid cooling system on the energy efficiency indicators. ► A comparison of the final and the primary energy use for a combination of the cooling systems is carried out. ► We explain the relationship between the COP and PER values for the analysed cooling systems. -- Abstract: The concept of the air blast-cryogenic freezing method (ABCF) is based on an innovative hybrid refrigeration system with one common cooling space. The hybrid cooling system consists of a vapor compression refrigeration system and a cryogenic refrigeration system. The prototype experimental setup for this method on the laboratory scale is discussed. The application of the results of experimental investigations and the theoretical–empirical model makes it possible to calculate the cooling capacity as well as the final and primary energy use in the hybrid system. The energetic analysis has been carried out for the operating modes of the refrigerating systems for the required temperatures inside the cooling chamber of −5 °C, −10 °C and −15 °C. For the estimation of the energy efficiency the coefficient of performance COP and the primary energy ratio PER for the hybrid refrigeration system are proposed. A comparison of these coefficients for the vapor compression refrigeration and the cryogenic refrigeration system has also been presented.
ESTIMATION OF EFFICIENCY PARTNERSHIP LARGE AND SMALL BUSINESS
Directory of Open Access Journals (Sweden)
Олег Васильевич Чабанюк
2014-05-01
Full Text Available In this article, based on the identification of key factors and their components, an algorithm is developed of consistent, logically connected stages for the transition from a traditional enterprise to an innovation-type enterprise built on intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: based on expert determination of the importance of the model parameters that ensure the effectiveness of intrapreneurship, and using methods of qualimetric modeling of expert estimates, an "intrapreneurship efficiency" score is calculated. According to the author, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically achievable only in the second or third year of an intrapreneurial structure's existence. The proposed method was tested in practice and can be used for establishing intrapreneurship in large and medium-sized enterprises as one method of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
Efficient mental workload estimation using task-independent EEG features
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands and one based on ERPs, both including a spatial filtering step (CSP and CCA, respectively), FLDA classification, and 10-fold cross-validation. Approach. To get closer to a real-life implementation, spectral markers were extracted from a short window (i.e., towards reactive systems) that did not include any motor activity, and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e., close to the ones required by dead man’s vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e., 2 or 6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and at the end of the session. Main results. Both chains performed significantly better than random; however, the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that an efficient and temporally stable workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
International Nuclear Information System (INIS)
Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore......, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenar- ios, where the SNR may be different across channels, as opposed to state...
Estimation of the Asian telecommunication technical efficiencies with panel data
Institute of Scientific and Technical Information of China (English)
YANG Yu-yong; JIA Huai-jing
2007-01-01
This article uses panel data and a stochastic frontier analysis (SFA) model to analyze and compare the technical efficiencies of the telecommunication industry in 28 Asian countries from 1994 to 2003. The technical efficiencies of the Asian countries were found to have increased steadily over the past decade. The high-income countries have the highest technical efficiency; however, income is not the only factor that affects technical efficiency.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Institute of Scientific and Technical Information of China (English)
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions in functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in the different functions. Two simulated examples show that the procedure is effective.
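The local linear technique used in the first step above can be sketched for a single coefficient function: regress y on x and x·(u − u₀) with kernel weights in the smoothing variable u, and read the coefficient of x off as the estimate of a(u₀). The data-generating model, bandwidth, and sample size below are invented, and this is only the first-step estimator, not the one-step back-fitting refinement:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated functional-coefficient data: y = a(u) * x + noise, a(u) = sin(pi u)
nobs = 800
u = rng.uniform(0.0, 1.0, nobs)       # smoothing variable
x = rng.standard_normal(nobs)         # regressor
y = np.sin(np.pi * u) * x + 0.1 * rng.standard_normal(nobs)

def local_linear_coef(u0, u, x, y, h):
    """Local linear estimate of the coefficient function a(.) at u0:
    weighted least squares of y on (x, x*(u-u0)) with a Gaussian kernel in u."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    X = np.column_stack([x, x * (u - u0)])
    beta, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
    return beta[0]

a_hat = local_linear_coef(0.5, u, x, y, h=0.1)
print(f"a(0.5) estimate: {a_hat:.3f}  (true value 1.0)")
```

The linear term x·(u − u₀) absorbs the local slope of a(·), which is what removes the first-order bias relative to a local constant fit.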
Efficient Estimation of Nonlinear Finite Population Parameters Using Nonparametrics
Goga, Camelia; Ruiz-Gazen, Anne
2012-01-01
Currently, the high-precision estimation of nonlinear parameters such as Gini indices, low-income proportions or other measures of inequality is particularly crucial. In the present paper, we propose a general class of estimators for such parameters that take into account univariate auxiliary information assumed to be known for every unit in the population. Through a nonparametric model-assisted approach, we construct a unique system of survey weights that can be used to estimate any nonlinea...
An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks
A. Sandeep Kumar
2012-01-01
Wireless mesh networks (WMNs) have been widely adopted for new-generation wireless networks. The self-organization capability of WMNs reduces the complexity of wireless network deployment and maintenance. Accurate estimation of the bandwidth available at mesh nodes is therefore required by the admission control mechanism that provides QoS guarantees in wireless mesh networks. Existing bandwidth estimation schemes do not give accurate results. Here we propose a bandwidth estimation scheme ...
Efficient estimation of breeding values from dense genomic data
Genomic, phenotypic, and pedigree data can be combined to produce estimated breeding values (EBV) with higher reliability. If coefficient matrix Z includes genotypes for many loci and marker effects (u) are normally distributed with equal variance at each, estimation of u by mixed model equations or...
Efficient and Accurate Robustness Estimation for Large Complex Networks
Wandelt, Sebastian
2016-01-01
Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable yet accurate methods for robustness estimation of large complex networks.
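The "metric-induced robustness estimation" mentioned above can be made concrete with the cheapest metric, degree centrality: remove nodes in degree order and record how the largest connected component shrinks. This toy (a six-node invented graph, plain BFS, no external graph library) is the baseline such algorithms are compared against, not the paper's sub-quadratic method:

```python
from collections import deque

# Metric-induced robustness: remove nodes in order of a chosen metric (here
# degree) and track the size of the largest connected component.
def giant_component(adj, removed):
    best, seen = 0, set(removed)
    for s in adj:
        if s in seen:
            continue
        size, queue = 0, deque([s])
        seen.add(s)
        while queue:                      # BFS over the surviving subgraph
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

# Small star-plus-path toy graph: node 0 is a hub
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0, 5}, 4: {0}, 5: {3}}
order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)  # degree attack

removed = []
profile = [giant_component(adj, removed)]
for v in order[:2]:
    removed.append(v)
    profile.append(giant_component(adj, removed))
print(profile)
```

Removing the hub first collapses the giant component from 6 to 2 nodes, illustrating why attack-order metrics matter and why computing better orders (e.g. betweenness) is worth approximating cheaply.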
Indexes of estimation of efficiency of the use of intellectual resources of industrial enterprises
Directory of Open Access Journals (Sweden)
Audzeichyk Olga
2015-12-01
Full Text Available The article investigates theoretical and practical aspects of the valuation of intellectual resources at industrial enterprises and proposes a method for estimating the efficiency of their use.
Efficient estimation of burst-mode LDA power spectra
DEFF Research Database (Denmark)
Velte, Clara Marika; George, William K
2010-01-01
The estimation of power spectra from LDA data provides signal processing challenges for fluid dynamicists for several reasons. Acquisition is dictated by randomly arriving particles which cause the signal to be highly intermittent. This both creates self-noise and causes the measured velocities to...... increased requirements for good statistical convergence due to the random sampling of the data. In the present work, the theory for estimating burst-mode LDA spectra using residence time weighting is discussed and a practical estimator is derived and applied. A brief discussion on the self-noise in spectra...... and correlations is included, as well as one regarding the statistical convergence of the spectral estimator for random sampling. Further, the basic representation of the burst-mode LDA signal has been revisited due to observations in recent years of particles not following the flow (e.g., particle...
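The residence-time-weighted direct estimator discussed above can be sketched for irregularly sampled data: S(f) = |Σᵢ uᵢ Δtᵢ e^(−2πi f tᵢ)|² / T. The sketch below substitutes inter-arrival times for true residence times (the two differ in practice, which is part of the paper's subject) and uses an invented 5 Hz velocity oscillation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Randomly arriving particle bursts sampling a 5 Hz velocity oscillation
T, f0 = 20.0, 5.0
t = np.sort(rng.uniform(0.0, T, 4000))       # random arrival times
uvel = np.cos(2 * np.pi * f0 * t)            # sampled velocity
dt = np.diff(t, prepend=0.0)                 # inter-arrival proxy for residence time

# Weighted direct spectral estimator: S(f) = |sum u_i dt_i exp(-2 pi i f t_i)|^2 / T
freqs = np.arange(1.0, 10.5, 0.5)
S = np.abs(np.array([(uvel * dt * np.exp(-2j * np.pi * f * t)).sum()
                     for f in freqs])) ** 2 / T

print("spectral peak at", freqs[np.argmax(S)], "Hz")
```

The weighting by Δtᵢ is what removes the bias toward high-velocity samples: fast-moving fluid produces more bursts per unit time, and the time weights compensate for that sampling preference.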
Determination of feed efficiency requires estimates of intake and digestibility of the diet, but these are difficult to measure on pasture. The objective of this research was to determine if plant cuticular alkanes were suitable as markers to estimate intake and diet digestibility of grazing cows wi...
Energy-Efficient Channel Estimation in MIMO Systems
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection, as well as for optimizations such as water filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and the receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes transmission of training symbols, storage of those symbols at the receiver, and channel estimation itself at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
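The training-based channel estimation whose energy cost this paper models can be sketched with the standard least-squares estimator H = Y Xᴴ (X Xᴴ)⁻¹ from known pilots. This is the textbook baseline, not the paper's divide-and-conquer method, and the antenna counts, pilot length, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(9)

# 2x4 MIMO link (2 transmit, 4 receive antennas), flat-fading channel H
nt, nr, npilot = 2, 4, 16
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Training phase: known pilot matrix X (nt x npilot), received Y = H X + noise
X = rng.standard_normal((nt, npilot)) + 1j * rng.standard_normal((nt, npilot))
noise = 0.01 * (rng.standard_normal((nr, npilot)) + 1j * rng.standard_normal((nr, npilot)))
Y = H @ X + noise

# Least-squares channel estimate from the training symbols
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

print("relative estimation error:", np.linalg.norm(H_hat - H) / np.linalg.norm(H))
```

Each extra transmit antenna adds rows to X and each extra receive antenna adds rows to Y, which is the nt·nr scaling that makes MIMO training both more accurate to need and more expensive to run than the SISO case.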
Transverse correlation: An efficient transverse flow estimator - initial results
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Henze, Lasse; Kortbek, Jacob; Jensen, Jørgen Arendt
vascular hemodynamics, the flow angle cannot easily be found as the angle is temporally and spatially variant. Additionally, the precision of traditional methods is severely lowered for high flow angles, and they break down for a purely transverse flow. To overcome these problems we propose a new method for...... estimating the transverse velocity component. The method measures the transverse velocity component by estimating the transit time of the blood between two parallel lines beamformed in receive. The method has been investigated using simulations performed with Field II. Using 15 emissions per estimate, a...... at 45 degrees. The method performs stably down to a signal-to-noise ratio of 0 dB, where a standard deviation of 5.5% and a bias of 1.2% is achieved....
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
(microphone) array signal processing, the periodic signals are estimated from spatio-temporal samples with regard to the direction of arrival (DOA) of the signal of interest. In this paper, we consider the problem of pitch and DOA estimation of quasi-periodic audio signals. In real life scenarios, recorded......Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...
Efficient probabilistic planar robot motion estimation given pairs of images
O. Booij; B. Kröse; Z. Zivkovic
2010-01-01
Estimating the relative pose between two camera positions given image point correspondences is a vital task in most view based SLAM and robot navigation approaches. In order to improve the robustness to noise and false point correspondences it is common to incorporate the constraint that the robot m
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment Plack et al. (2004). According to...
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Efficient Topology Estimation for Large Scale Optical Mapping
Elibol, Armagan; Garcia, Rafael
2013-01-01
Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Directory of Open Access Journals (Sweden)
Virginie F. Ruiz
2002-10-01
Full Text Available This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints. Thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies the Bayes theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
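The FAD recursions themselves cannot be reconstructed from the abstract; for contrast, the conventional scalar extended Kalman filter step that the paper benchmarks against can be sketched. All model functions and numbers below are illustrative assumptions, not the paper's model:

```python
# One step of a scalar extended Kalman filter (the conventional baseline):
# the nonlinear state model f and measurement model h are linearized at the
# current estimate via their derivatives F and H.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    # predict: propagate the state with f and the variance with the Jacobian F
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q
    # update: linearize the measurement model at the prediction
    S = H(x_pred) * P_pred * H(x_pred) + R   # innovation variance
    K = P_pred * H(x_pred) / S               # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H(x_pred)) * P_pred
    return x_new, P_new

# Toy exponential-growth state with a direct (linear) observation.
x, P = ekf_step(1.0, 0.5, 1.2,
                f=lambda x: 1.1 * x, F=lambda x: 1.1,
                h=lambda x: x, H=lambda x: 1.0,
                Q=0.01, R=0.1)
```

The FAD filter avoids exactly this linearization step by propagating exponential-family densities instead.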
Estimation of Nitrogen Fertilizer Use Efficiency in Dryland Agroecosystem
Institute of Scientific and Technical Information of China (English)
LI Shi-qing; LI Sheng-xiu
2001-01-01
A field trial was carried out to study the nitrogen fertilizer recovery by four crops in succession in manurial loess soil in Yangling. The results showed that the nitrogen fertilizer not only had significant effects on the first crop, but also had longer residual effects, even on the fourth crop. The average apparent nitrogen fertilizer recovery by the first crop was 31.7%, while the cumulative nitrogen recovery by the four crops was as high as 62.3%, nearly double the former. It is quite clear that the recovery by the first crop alone is not a reliable estimate of nitrogen fertilizer recovery unless the residual effects of the fertilizer are included.
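The quoted figures can be checked with a line of arithmetic (a sketch; the percentages are the ones reported in the abstract):

```python
# Recovery figures quoted in the abstract.
first_crop = 31.7      # % apparent N recovery by the first crop
cumulative = 62.3      # % cumulative N recovery over four crops
residual = cumulative - first_crop   # share attributable to residual effects
ratio = cumulative / first_crop      # "nearly double the former"
print(round(residual, 1), round(ratio, 2))
```

Roughly half of the total recovery thus comes after the first crop, which is the point the abstract makes about residual effects.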
Ionization efficiency estimations for the SPES surface ion source
Manzolaro, M.; Andrighetto, A.; Meneghetti, G.; Rossignoli, M.; Corradetti, S.; Biasetto, L.; Scarpa, D.; Monetti, A.; Carturan, S.; Maggioni, G.
2013-12-01
Ion sources play a crucial role in ISOL (Isotope Separation On Line) facilities determining, with the target production system, the ion beam types available for experiments. In the framework of the SPES (Selective Production of Exotic Species) INFN (Istituto Nazionale di Fisica Nucleare) project, a preliminary study of the alkali metal isotopes ionization process was performed, by means of a surface ion source prototype. In particular, taking into consideration the specific SPES in-target isotope production, Cs and Rb ion beams were produced, using a dedicated test bench at LNL (Laboratori Nazionali di Legnaro). In this work the ionization efficiency test results for the SPES Ta surface ion source prototype are presented and discussed.
Institute of Scientific and Technical Information of China (English)
SONG Bo-wei; GUAN Yun-feng; ZHANG Wen-jun
2005-01-01
This paper deals with channel estimation for orthogonal frequency-division multiplexing (OFDM) systems with transmit diversity. Space-time coded OFDM systems, which can provide transmit diversity, require accurate channel estimation to improve communication quality. In actual OFDM systems, training sequences are usually used for channel estimation. The authors propose a training-based channel estimation strategy suitable for space-time coded OFDM systems. This novel strategy provides enhanced performance, high spectrum efficiency and relatively low computational complexity.
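The paper's space-time coded scheme is not specified in the abstract; the generic textbook idea behind training-based estimation can be sketched as a per-subcarrier least-squares (LS) estimate, which is an assumption here, not the authors' algorithm:

```python
# LS pilot-based channel estimate per OFDM subcarrier: with a known training
# symbol X[k] on subcarrier k and received Y[k] = H[k]*X[k] + noise, the LS
# estimate is H_hat[k] = Y[k] / X[k].
def ls_channel_estimate(pilots, received):
    return [y / x for x, y in zip(pilots, received)]

# Noise-free example: a two-subcarrier frequency response is recovered exactly.
H = [1 + 1j, 0.5 - 0.2j]
X = [1 + 0j, -1 + 0j]               # known BPSK training symbols
Y = [h * x for h, x in zip(H, X)]   # ideal received pilots
est = ls_channel_estimate(X, Y)
print(est)
```

With noise added, the LS estimate degrades, which is why training sequence design and refinement strategies such as the one proposed matter.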
Efficient human pose estimation from single depth images.
Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew
2013-12-01
We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run at super-real-time rates on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
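The depth pixel comparison feature in this line of work has the well-known form f(x) = d(x + u/d(x)) - d(x + v/d(x)), where the probe offsets u, v are divided by the depth at x so the feature is invariant to the subject's distance from the camera. A minimal sketch, with the depth image simplified to a dict of pixel → depth (an illustrative data structure, not the paper's implementation):

```python
# Depth-invariant pixel comparison feature: two probes offset from pixel x,
# with offsets normalized by the depth at x. Probes that fall on background
# (pixels missing from the map) return a large constant depth.
def depth_feature(depth, x, u, v, large=10_000.0):
    def probe(offset):
        d_x = depth.get(x, large)
        px = (x[0] + int(round(offset[0] / d_x)),
              x[1] + int(round(offset[1] / d_x)))
        return depth.get(px, large)
    return probe(u) - probe(v)

depth_img = {(0, 0): 2.0, (1, 0): 2.0, (0, 1): 3.0}
f = depth_feature(depth_img, (0, 0), (2.0, 0.0), (0.0, 2.0))
print(f)
```

Each such feature is cheap (two lookups and a subtraction), which is what makes the parallel decision forests fast enough for super-real-time operation.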
Estimate of ecological efficiency for thermal power plants in Brazil
International Nuclear Information System (INIS)
Global warming and the consequent climatic changes that will come as a result of the increase of CO2 concentration in the atmosphere have increased the world's concern regarding reduction of these emissions, mainly in developed countries that pollute the most. Electricity generation in thermal power plants, as well as other industrial activities, such as chemical and petrochemical ones, entail the emission of pollutants that are harmful to humans, animals and plants. The emissions of carbon oxides (CO and CO2) and nitrous oxide (N2O) are directly related to the greenhouse effect. The negative effects of sulfur oxides (SO2 and SO3 named SOx) and nitrogen oxides (NOx) are their contribution to the formation of acid rain and their impacts on human health and on the biota in general. This study intends to evaluate the environmental impacts of the atmospheric pollution resulting from the burning of fossil fuels. This study considers the emissions of CO2, SOx, NOx and PM in an integral way, and they are compared to the international air quality standards that are in force using a parameter called ecological efficiency (ε)
Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models
Institute of Scientific and Technical Information of China (English)
Zhangong ZHOU; Rong JIANG; Weimin QIAN
2011-01-01
Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, which combines the nonparametric regression model and the functional-coefficient regression (FCR) model. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These resulting estimators are asymptotically normal, but each of them has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normalities are derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.
Highly Efficient Monte Carlo for Estimating the Unavailability of Markov Dynamic System
Institute of Scientific and Technical Information of China (English)
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. A highly efficient Monte Carlo scheme must therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo for estimating unavailability is presented. Using the free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very rare event simulation.
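The crude simulation scheme, the baseline all the other schemes improve on, can be sketched for a single repairable component: play alternating exponential up/down histories and score the fraction found down at the mission time. The failure and repair rates below are illustrative, not the Con/3/30:F parameters of the paper:

```python
# Crude Monte Carlo estimate of component unavailability at mission time T,
# with exponential failure rate lam (while up) and repair rate mu (while down).
import random

def crude_unavailability(lam, mu, T, n_hist, seed=1):
    random.seed(seed)
    down_count = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            t += random.expovariate(lam if up else mu)
            if t > T:
                break
            up = not up          # state transition: failure or repair
        down_count += (not up)   # score 1 if the history ends in the down state
    return down_count / n_hist

# For lam=0.1, mu=1.0 the analytic steady-state unavailability is
# lam/(lam+mu) ~ 0.0909 once T is large compared with 1/mu.
q = crude_unavailability(0.1, 1.0, 50.0, 20_000)
print(q)
```

For very small unavailabilities almost every history scores zero, which is exactly the variance problem the free-flight and weighted estimators of the paper address.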
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first passage probabilities of high-dimensional nonlinear systems based on asymptotic estimation of low probabilities is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements...
Singbo, Alphonse G.; Lansink, Alfons Oude; Emvalomatis, Grigorios
2015-01-01
This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-a-vis pesticide use to measure allocative efficiency of pesticide use along productive inputs. We employ the data envelopment analysis framework and marginal cost techniques to estimate technic
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Different from existing research that assumes the availability of perfect channel state information (CSI) at the cooperative communication nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are assumed, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) in order to let the destination estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and by accounting for the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
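The shape of such an EE metric can be sketched as achievable rate over total consumed power, with the estimation error folded into the effective SNR and the pilot energy added to the denominator. Every symbol and number below is illustrative, not the paper's exact formulation:

```python
# Sketch of an energy-efficiency (bits per joule) metric with imperfect CSI:
# the channel-estimation error variance degrades the effective SNR, and the
# pilot (training) power is charged against the delivered rate.
import math

def energy_efficiency(bandwidth_hz, snr, est_error_var,
                      data_power, pilot_power, circuit_power):
    effective_snr = snr / (1.0 + est_error_var * snr)  # error acts as extra noise
    rate = bandwidth_hz * math.log2(1.0 + effective_snr)
    total_power = data_power + pilot_power + circuit_power
    return rate / total_power   # bits per joule

perfect = energy_efficiency(1e6, 10.0, 0.0, 1.0, 0.1, 0.5)
imperfect = energy_efficiency(1e6, 10.0, 0.05, 1.0, 0.1, 0.5)
print(perfect > imperfect)
```

A power allocation algorithm of the kind the paper proposes would search over the data/pilot power split to maximize this ratio.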
Efficiency assessment of using satellite data for crop area estimation in Ukraine
Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga
2014-06-01
The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of the study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments for crop area estimation in Ukraine is assessed. The main criteria used for efficiency analysis are as follows: (i) relative efficiency, which shows by what factor the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by what factor the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at oblast level.
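The two criteria reduce to simple ratios; a sketch with made-up numbers, not values from the study:

```python
# Relative efficiency: variance of the ground-only area estimator divided by
# the variance of the satellite-assisted estimator (same sample size).
def relative_efficiency(var_ground_only, var_with_imagery):
    return var_ground_only / var_with_imagery

# Cost-efficiency: survey cost without imagery divided by the cost with
# imagery, for the same target precision of the area estimate.
def cost_efficiency(cost_ground_only, cost_with_imagery):
    return cost_ground_only / cost_with_imagery

print(relative_efficiency(4.0, 1.6))  # imagery divides the variance by 2.5
print(cost_efficiency(100.0, 62.5))   # survey cost reduced 1.6 times
```

A sensor is only worth using when its cost-efficiency ratio stays above 1 once image purchase and processing costs are included, which is the threshold test that ruled out the commercial sensors in the study.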
Marlin, Benjamin
2012-01-01
Standard maximum likelihood estimation cannot be applied to discrete energy-based models in the general case because the computation of exact model probabilities is intractable. Recent research has seen the proposal of several new estimators designed specifically to overcome this intractability, but virtually nothing is known about their theoretical properties. In this paper, we present a generalized estimator that unifies many of the classical and recently proposed estimators. We use results from the standard asymptotic theory for M-estimators to derive a generic expression for the asymptotic covariance matrix of our generalized estimator. We apply these results to study the relative statistical efficiency of classical pseudolikelihood and the recently-proposed ratio matching estimator.
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Directory of Open Access Journals (Sweden)
Archana V
2011-04-01
Full Text Available The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in finite populations using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real life dataset. The ratio estimator using the information on the population C.V. of the auxiliary variable emerges as the best estimator.
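The generic idea behind a ratio-type C.V. estimator can be sketched as follows; this is the textbook ratio-adjustment form, an assumption for illustration, not one of the six estimators actually derived in the paper:

```python
# Ratio-type estimator of a population C.V.: rescale the sample C.V. of the
# study variable y by the known population C.V. of an auxiliary variable x
# divided by its sample C.V.
from statistics import mean, stdev

def cv(values):
    return stdev(values) / mean(values)

def ratio_cv_estimator(y_sample, x_sample, cv_x_population):
    return cv(y_sample) * cv_x_population / cv(x_sample)

y = [10.0, 12.0, 14.0]
x = [5.0, 6.0, 7.0]
# When the auxiliary sample C.V. happens to equal its population value,
# the estimator reduces to the plain sample C.V. of y.
print(ratio_cv_estimator(y, x, cv(x)))
```

The adjustment borrows strength from the auxiliary variable: when the sample under- or over-states the auxiliary C.V., the study-variable C.V. is corrected in the same direction.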
Institute of Scientific and Technical Information of China (English)
Akira Ogawa
1999-01-01
A cyclone dust collector is applied in many industries. The axial flow cyclone in particular has the simplest construction and keeps high reliability in maintenance. On the other hand, the collection efficiency of a cyclone depends not only on the inlet gas velocity but also on the feed particle concentration: the collection efficiency increases with increasing feed particle concentration. Until now, however, the problem of how to estimate the collection efficiency as a function of the feed particle concentration has remained open, except for the investigation by Muschelknautz & Brunner [6]. Therefore, in this paper a method for estimating the collection efficiency of axial flow cyclones is proposed. Its application to geometrically similar cyclones with body diameters D1 = 30, 50, 69 and 99 mm showed good agreement with the experimental collection efficiencies described in detail in the paper by Ogawa & Sugiyama [8].
Estimating the net implicit price of energy efficient building codes on U.S. households
International Nuclear Information System (INIS)
Requiring energy efficiency building codes raises housing prices (or the monthly rental equivalent), but theoretically this effect might be fully offset by reductions in household energy expenditures. Whether there is a full compensating differential or how much households are paying implicitly is an empirical question. This study estimates the net implicit price of energy efficient buildings codes, IECC 2003 through IECC 2006, for American households. Using sample data from the American Community Survey 2007, a heteroskedastic seemingly unrelated estimation approach is used to estimate hedonic price (house rent) and energy expenditure models. The value of energy efficiency building codes is capitalized into housing rents, which are estimated to increase by 23.25 percent with the codes. However, the codes provide households a compensating differential of about a 6.47 percent reduction (about $7.71) in monthly energy expenditure. Results indicate that the mean household net implicit price for these codes is about $140.87 per month in 2006 dollars ($163.19 in 2013 dollars). However, this estimated price is shown to vary significantly by region, energy type and the rent gradient. - Highlights: • House rent increases by 23.25 percent with the energy efficiency codes. • Compensating differential of the codes is 6.47 percent. • Net implicit price of effect of energy efficiency building codes is about $140.87
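The net implicit price in this abstract is simply the rent premium minus the compensating energy savings. A sketch with the quoted percentages (23.25% rent increase, $7.71/month energy savings) and a hypothetical base rent, since the study's underlying rent level is not given here:

```python
# Net implicit monthly price of an energy-efficiency code:
# rent premium (percentage of base rent) minus the monthly energy savings.
def net_implicit_price(base_rent, rent_increase_pct, energy_savings):
    return base_rent * rent_increase_pct / 100.0 - energy_savings

# Hypothetical $600/month base rent, purely for illustration.
price = net_implicit_price(600.0, 23.25, 7.71)
print(round(price, 2))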
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Institute of Scientific and Technical Information of China (English)
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. The model, a generalization of both the partially linear regression model and the varying-coefficient regression model, allows one to explore the possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates and a real dataset is used to illustrate our approach.
Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki
2010-10-15
Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they are still uncertain because they depend sensitively on many factors, such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. 534 data points on mercury removal efficiencies for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. The data series of all APCD types showed Gaussian log-normality. The average removal efficiency with a 95% confidence interval for each APCD was estimated. The FA, WS, and FF with carbon and/or dry sorbent injection systems had 75% to 82% average removal efficiencies. On the other hand, the ESP with/without dry sorbent injection had lower removal efficiencies, of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, did not make more than a 1% difference to the removal efficiency. The injection of activated carbon and carbon-containing fly ash in the FF system made less than a 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003. This resulted in an additional reduction of about 0.86 Mg of emissions in 2003. Applying the methodology of this study to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories. PMID:20713298
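Since the data series are treated as log-normal, the average with a 95% confidence interval is naturally formed on the log scale and transformed back. A sketch of that treatment with made-up efficiency data and the usual 1.96 normal quantile, as an illustration of the statistical method described rather than the study's exact computation:

```python
# Log-normal summary of removal-efficiency samples: geometric mean and a 95%
# confidence interval computed on the log scale, then exponentiated back.
import math
from statistics import mean, stdev

def lognormal_mean_ci(samples, z=1.96):
    logs = [math.log(s) for s in samples]
    m, s = mean(logs), stdev(logs)
    half = z * s / math.sqrt(len(logs))
    return math.exp(m), (math.exp(m - half), math.exp(m + half))

eff = [70.0, 75.0, 80.0, 78.0, 82.0, 74.0]   # made-up removal efficiencies (%)
center, (lo, hi) = lognormal_mean_ci(eff)
print(round(center, 1), round(lo, 1), round(hi, 1))
```

Working on the log scale keeps the back-transformed interval asymmetric and strictly positive, which matches the skew typical of efficiency data.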
Estimating Efficiency Offset between Two Groups of Decision-Making Units
Czech Academy of Sciences Publication Activity Database
Macek, Karel
Prague: Institute of Information Theory and Automation, 2013 - (Guy, T.; Kárný, M.) ISBN 978-80-903834-8-7. [The 3rd International Workshop on Scalable Decision Making: Uncertainty, Imperfection, Deliberation held in conjunction with ECML/PKDD 2013. Prague (CZ), 23.09.2013-23.09.2013] R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords: Data Envelopment Analysis * Local Regression * Efficiency Comparison * Interval Estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/AS/macek-estimating efficiency offset between two groups of decision-making units.pdf
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Directory of Open Access Journals (Sweden)
Nabeel Hussain
2014-04-01
Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. The maximum likelihood estimation technique was used to estimate a Cobb-Douglas frontier production function. The analysis revealed that the estimated mean technical efficiency was 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was found to be 0.93, which shows that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, with age having a positive and education a negative relationship with the output of garlic. This study suggests that, in order to increase the production of garlic by taking advantage of farmers' high efficiency level, the government should invest in research and development for introducing good quality seeds to increase garlic productivity, and should organize training programs to educate farmers about garlic production.
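In a Cobb-Douglas stochastic frontier, ln y_i = b0 + sum_k b_k ln x_ik + v_i - u_i, and a farm's technical efficiency is TE_i = exp(-u_i), so TE = 1 means production on the frontier. A sketch with made-up coefficients, not the estimates from the study:

```python
# Cobb-Douglas frontier (log output) and technical efficiency TE = exp(-u).
import math

def cobb_douglas_frontier(inputs, betas, intercept):
    return intercept + sum(b * math.log(x) for b, x in zip(betas, inputs))

def technical_efficiency(u):
    return math.exp(-u)

# Hypothetical two-input farm: frontier log-output for the given inputs.
ln_y_frontier = cobb_douglas_frontier([2.0, 3.0], [0.4, 0.5], 1.0)
# An inefficiency term u = 0.26 puts a farm at about 77% of frontier output,
# the same order as the mean TE reported in the abstract.
te = technical_efficiency(0.26)
print(round(ln_y_frontier, 3), round(te, 2))
```

The gamma value of 0.93 then says that most of the gap between observed and frontier output is attributable to the inefficiency term u rather than to random noise v.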
An efficient framework for estimation of muscle fiber orientation using ultrasonography
Ling, Shan; CHEN Bin; Zhou, Yongjin; Yang, Wan-Zhang; Zhao, Yu-Qian; Wang, Lei; Zheng, Yong-Ping
2013-01-01
Background Muscle fiber orientation (MFO) is an important parameter related to musculoskeletal functions. The traditional manual method for MFO estimation in sonograms was labor-intensive. The automatic methods proposed in recent years also involved voting procedures which were computationally expensive. Methods In this paper, we proposed a new framework to efficiently estimate MFO in sonograms. We firstly employed Multi-scale Vessel Enhancement Filtering (MVEF) to enhance fascicles in the so...
Directory of Open Access Journals (Sweden)
Sobchak Andrii
2016-02-01
Full Text Available The concept of hyperstability of a cybernetic system is applied to the task of estimating the efficiency of a virtual production enterprise's functioning. The basic factors influencing the efficiency of such an enterprise are determined. The article offers a methodology for synthesizing the static structure of a decision-support system for managers of a virtual enterprise, in particular a procedure for determining the quantity and quality of the equipment producible at a virtual enterprise.
Directory of Open Access Journals (Sweden)
B. Bayram
2006-01-01
Full Text Available Data on body measurements, milk yield and body weight were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relationships between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between EFE and body measurements, indicating that taller, longer, deeper and especially heavier cows were not as efficient as smaller cows.
Estimating welfare changes from efficient pricing in public bus transit in India
Deb, Kaushik; Filippini, Massimo
2011-01-01
Three different and feasible pricing strategies for public bus transport in India are developed in a partial equilibrium framework with the objective of improving economic efficiency and ensuring revenue adequacy, namely average cost pricing, marginal cost pricing, and two-part tariffs. These are assessed not only in terms of gains in economic efficiency, but also in changes in travel demand and consumer surplus. The estimated partial equilibrium price is higher in all three pricing reg...
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Output estimation for a DMU to preserve and improve relative efficiency
Directory of Open Access Journals (Sweden)
Masoud Sanei
2013-10-01
Full Text Available In this paper, we consider the inverse BCC model, which is used to estimate the output levels of Decision Making Units (DMUs) when input levels are changed while the efficiency index of every DMU is maintained. The inverse BCC problem takes the form of a multi-objective nonlinear programming model (MONLP), which is not easy to solve. We therefore propose a linear programming model that gives a Pareto-efficient solution to the inverse BCC problem. We also propose a model for improving the current efficiency value of the DMU under consideration. Numerical examples are used to illustrate the proposed approaches.
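The BCC scores that the inverse model preserves come from a small linear program per DMU. A minimal input-oriented BCC (variable returns to scale) computation can be sketched with `scipy.optimize.linprog`; the three-DMU data set here is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-input, single-output data for three DMUs (invented)
X = np.array([[2.0], [4.0], [3.0]])   # inputs, shape (n_dmu, n_inputs)
Y = np.array([[2.0], [3.0], [1.0]])   # outputs, shape (n_dmu, n_outputs)

def bcc_input_efficiency(k):
    """Input-oriented BCC score of DMU k:
       min theta  s.t.  sum_j lam_j x_j <= theta * x_k,
                        sum_j lam_j y_j >= y_k,
                        sum_j lam_j = 1,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0               # decision vars: [theta, lam]
    A_ub = np.zeros((m + s, 1 + n)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[k]                            # inputs: lam.X - theta*x_k <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                            # outputs: -lam.Y <= -y_k
    b_ub[m:] = -Y[k]
    A_eq = np.zeros((1, 1 + n)); A_eq[0, 1:] = 1.0  # VRS convexity constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [bcc_input_efficiency(k) for k in range(3)]
print([round(s, 3) for s in scores])
```

With these data the first two DMUs lie on the VRS frontier (score 1), while the third could radially contract its input to two-thirds of its level.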
Institute of Scientific and Technical Information of China (English)
CHUNG Warn-ill; CHOI Jun-ho; BAE Hae-young
2004-01-01
Many commercial database systems maintain histograms to summarize the contents of relations and permit efficient estimation of query result sizes and access plan costs. In spatial database systems, most spatial query predicates consist of topological relationships between spatial objects, and it is very important for the spatial query optimizer to estimate the selectivity of those predicates. In this paper, we propose a selectivity estimation scheme for spatial topological predicates based on a multidimensional histogram and a transformation scheme. The proposed scheme applies a two-partition strategy to the transformed object space to generate the spatial histogram and estimates the selectivity of topological predicates based on the topological characteristics of the transformed space. It provides a way of estimating selectivity without excessive memory usage or additional I/Os in most spatial query optimizers.
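The flavor of histogram-based selectivity estimation can be shown in a few lines. This toy version uses a plain 2-D equi-width histogram over point data and a range predicate, not the paper's transformed-space two-partition scheme for topological predicates; buckets partially overlapping the query window are prorated by overlap area (a uniformity assumption).

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, size=(10000, 2))   # synthetic spatial objects (points)

# Build a 2-D equi-width histogram summarizing the relation
H, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=(16, 16),
                           range=[[0, 1], [0, 1]])

def estimate_selectivity(x_lo, x_hi, y_lo, y_hi):
    """Estimate the fraction of objects inside a query window by summing
    bucket counts, prorating buckets that partially overlap the window."""
    total = H.sum()
    acc = 0.0
    for i in range(len(xe) - 1):
        for j in range(len(ye) - 1):
            ox = max(0.0, min(x_hi, xe[i + 1]) - max(x_lo, xe[i]))
            oy = max(0.0, min(y_hi, ye[j + 1]) - max(y_lo, ye[j]))
            frac = (ox / (xe[i + 1] - xe[i])) * (oy / (ye[j + 1] - ye[j]))
            acc += H[i, j] * frac
    return acc / total

est = estimate_selectivity(0.0, 0.5, 0.0, 1.0)   # query covering half the space
```

On uniform data the estimate for a half-space window lands near 0.5; real schemes trade bucket count against memory, which is the balance the abstract highlights.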
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of...
Zhan, Xianyuan; Qian, Xinwu; Ukkusuri, Satish V.
2015-01-01
The era of big data: advances in sensing technologies and the development of large-scale pervasive computing infrastructure. Big data and transportation engineering: reconsidering traditional research problems and making previously infeasible problems feasible. In this work, large-scale taxi data from NYC are used for taxi ridership analysis, link travel time estimation, and analysis of taxi system efficiency.
EFFICIENCY ESTIMATION OF THERMAL POWER PLANT WITHOUT DIVIDING FUEL CONSUMPTION IN PRODUCT TYPES
A. E. Piir; V. B. Kuntysh
2016-01-01
A combined Thermal Power Plant unit is considered as an exergy generator, with exergy supplied to consumers by streams of various power carriers. This makes it possible to avoid dividing equipment and fuel consumption by product type and to propose extremely simple methods for estimating unit efficiency and for calculating the rate of power supplied from the Thermal Power Plant bus bars and collectors.
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first the development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
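The shrinkage idea, pulling a robust but noisy estimator toward a precise but model-dependent one with a data-adaptive weight, can be illustrated generically. This is a schematic sketch of that logic, not the paper's empirical-Bayes estimator; the weight formula and numbers are invented for illustration.

```python
def shrink(theta_robust, theta_model, var_robust):
    """Generic empirical-Bayes-style shrinkage: weight the model-based
    estimate by how small the apparent model error is relative to the
    sampling noise of the robust estimate."""
    d2 = (theta_robust - theta_model) ** 2   # apparent squared model bias
    w = var_robust / (var_robust + d2)       # in (0, 1]; w -> 1 when model fits
    return w * theta_model + (1.0 - w) * theta_robust

# When the two estimates agree, the precise model-based value dominates:
a = shrink(1.02, 1.00, var_robust=0.04)
# When they disagree strongly, the estimator falls back on the robust value:
b = shrink(2.00, 1.00, var_robust=0.04)
```

This captures the adaptivity the abstract describes: under Hardy-Weinberg equilibrium and gene-environment independence the efficient estimator is trusted, and the weight fades as the data contradict those assumptions.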
CLASSIFICATION AND ESTIMATION OF THE EFFICIENCY OF SYSTEMS FOR UNINTERRUPTED ELECTROSUPPLY
Directory of Open Access Journals (Sweden)
Vinnikov A. V.
2015-03-01
Full Text Available In this article we present generalized block diagrams of stationary and transport uninterrupted power supply systems, together with their maintenance and the basic operating modes that provide uninterrupted power to crucial consumers. A classification of uninterrupted power supply systems is given. The basic classification attributes are the systems' assignment to stationary or transport consumers of electric power and the types of basic, reserve and emergency sources and converters of electric power used. Such systems can also be classified by their connection circuits to consumers, the kind of current (direct, alternating, high-frequency), the permissible breaks in supply, the type of switching equipment, and so on. For estimating the efficiency of uninterrupted power supply systems, the following criteria are proposed: power and weight-dimension parameters, reliability parameters, power quality and cost. Analytical expressions for calculating these efficiency estimation parameters are given. The suggested classification, operating modes and basic efficiency criteria will make it possible to raise the efficiency of pre-design work on creating systems with improved customer characteristics using a modern element base.
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
International Nuclear Information System (INIS)
Investing more in renewable energy sources and using energy in a rational and efficient way is vital for the sustainable growth of the world. Energy efficiency (EE) will play an increasingly important role for future generations. The aim of this work is to estimate how much the PNEf (National Plan for Energy Efficiency), launched by the Brazilian government in 2011, will save over the next 5 years by avoiding the construction of additional power plants, as well as the amount of CO2 emissions avoided. The marginal operating cost is computed for medium-term planning of the dispatching of power plants in the hydro-thermal system using Stochastic Dual Dynamic Programming, after incorporating stochastic energy efficiencies into the demand for electricity. We demonstrate that even for a modest improvement in energy efficiency (<1% per year), the savings over the next 5 years range from R$ 237 million in the conservative scenario to R$ 268 million in the optimistic scenario. By comparison, the new Belo Monte hydro-electric plant will cost R$ 26 billion, to be repaid over a 30-year period (i.e. R$ 867 million per year). So in Brazil EE policies are preferable to building a new power plant. - Highlights: • It is preferable to invest in energy efficiency than to construct a big power plant. • An increase of energy efficiency policies would reduce the operating cost in Brazil. • We have a great reduction of CO2eq emissions by means of energy efficiency policies
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and the baseline method Cluster-Kernels in terms of estimation accuracy on complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated to timely capture the dynamic density of synthetic and real-world data. In addition, KDE-Track is used to accurately detect outliers in sensor data and compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
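The grid-plus-interpolation idea can be stripped down to a few lines: maintain kernel density values at fixed grid points as a running mean over the stream, and answer queries at arbitrary locations by linear interpolation. This is only the skeleton of the approach; KDE-Track's actual update rules, resampling and bandwidth handling are more involved, and the bandwidth below is an arbitrary assumption.

```python
import numpy as np

grid = np.linspace(-4, 4, 81)   # fixed evaluation grid
h = 0.3                          # kernel bandwidth (assumed, not tuned)
density = np.zeros_like(grid)
n_seen = 0

def update(x):
    """Fold one streaming observation into the grid densities as a
    running mean of Gaussian kernel contributions: O(|grid|) per point."""
    global n_seen
    k = np.exp(-0.5 * ((grid - x) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    n_seen += 1
    density[:] += (k - density) / n_seen   # incremental running average

def query(x):
    """Density at an arbitrary x by linear interpolation between grid points."""
    return np.interp(x, grid, density)

rng = np.random.default_rng(1)
for x in rng.normal(0.0, 1.0, 5000):
    update(x)
```

After 5000 standard-normal samples, `query(0.0)` lands near the smoothed peak value of about 0.38; per-point cost is linear in the grid size rather than in the number of samples seen, which is the efficiency argument.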
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Technical Efficiency of Shrimp Farming in Andhra Pradesh: Estimation and Implications
Directory of Open Access Journals (Sweden)
I. Sivaraman
2015-04-01
Full Text Available Shrimp farming is a key subsector of Indian aquaculture which has seen remarkable growth in past decades and has tremendous future potential. The present study analyzes the technical efficiency of shrimp farmers in the East Godavari district of Andhra Pradesh using a Stochastic Production Frontier with technical inefficiency effects. The estimated mean technical efficiency of the farmers was 93.06%, meaning the farmers operate at 6.94% below the production frontier. Age, education, experience of the farmers and their membership status in farmers' associations and societies were found to have a significant effect on technical efficiency. The variation in technical efficiency also confirms the differences in the extent of adoption of shrimp farming technology among the farmers. Proper technical training opportunities could help the farmers adopt improved technologies and increase their farm productivity.
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina;
2012-01-01
Coil efficiency, defined as the B1 magnetic field induced at a given point divided by the square root of the supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench and...
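The Q-ratio bookkeeping behind such a perturbing-loop measurement can be sketched with a series-resonant circuit model. The resistor value and Q readings below are invented, and the real procedure involves calibrated inductive coupling of the load; this is only the algebra linking the two Q measurements to the coil's equivalent series resistance.

```python
def coil_resistance_from_q(q_unloaded, q_loaded, r_added):
    """Series-resonant model: Q0 = wL / r_coil and QL = wL / (r_coil + r_added).
    Given the known reflected resistance r_added and both Q readings,
    recover the reactance wL and then the coil's own resistance."""
    wL = r_added / (1.0 / q_loaded - 1.0 / q_unloaded)
    return wL / q_unloaded

# Invented example: wL = 100 ohm, r_coil = 1 ohm, known load reflects 2 ohm
q0 = 100.0 / 1.0          # unloaded quality factor
ql = 100.0 / (1.0 + 2.0)  # quality factor with the known load coupled in
r_coil = coil_resistance_from_q(q0, ql, 2.0)
```

Recovering the coil resistance is the step that lets efficiency (B1 per √P) be inferred without a full nutation experiment, which is why the paper's method is fast.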
International Nuclear Information System (INIS)
In recent years, gamma spectrometry using high-purity germanium (HPGe) detectors has come into widespread use for determining the activity of radioactive samples. However, a decrease in detector efficiency remarkably influences the measured gamma spectra. In this work, we estimated the decrease in efficiency of the GC1518 HPGe detector made by Canberra Industries, Inc. and located at the Center for HCMC Nuclear Techniques. It was found that the detector efficiency decreased by 8% over the 6 years from October 1999 to August 2005. The decrease in efficiency can be explained by an increase in the thickness of the inactive germanium layer, based on Monte Carlo simulation. (author)
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Directory of Open Access Journals (Sweden)
Cristina Mihaela VLAD
2013-12-01
Full Text Available There is at present a concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now oriented to the development of economic models integrated in software interfaces that can improve technical and financial management. The objective of this paper is therefore to present an estimation and evaluation model designed to increase the farmer's ability to measure the costs of production activities by utilizing informatic systems.
Estimating the Effect of Helium and Nitrogen Mixing on Deposition Efficiency in Cold Spray
Ozdemir, Ozan C.; Widener, Christian A.; Helfritch, Dennis; Delfanian, Fereidoon
2016-04-01
Cold spray is a developing technology that is increasingly finding applications for coating of similar and dissimilar metals, repairing geometric tolerance defects to extend expensive part life and additive manufacturing across a variety of industries. Expensive helium is used to accelerate the particles to higher velocities in order to achieve the highest deposit strengths and to spray hard-to-deposit materials. Minimal information is available in the literature studying the effects of He-N2 mixing on coating deposition efficiency, and how He can potentially be conserved by gas mixing. In this study, a one-dimensional simulation method is presented for estimating the deposition efficiency of aluminum coatings, where He-N2 mixture ratios are varied. The simulation estimations are experimentally validated through velocity measurements and single particle impact tests for Al6061.
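The thermodynamic starting point for such a one-dimensional nozzle simulation is the property set of the He-N2 mixture, since the gas sound speed bounds the achievable particle velocity. A sketch using mass-fraction mixing of ideal-gas properties follows; the property values are approximate textbook numbers, and this is not the paper's full particle-acceleration model.

```python
import math

# Approximate ideal-gas properties, J/(kg K)
R_HE, CP_HE = 2077.0, 5193.0   # helium
R_N2, CP_N2 = 296.8, 1040.0    # nitrogen

def mixture_sound_speed(y_he, T):
    """Speed of sound of a He-N2 mixture at temperature T (K) for helium
    mass fraction y_he, via mass-fraction-weighted gas constant and cp."""
    R = y_he * R_HE + (1.0 - y_he) * R_N2
    cp = y_he * CP_HE + (1.0 - y_he) * CP_N2
    gamma = cp / (cp - R)                 # cv = cp - R for an ideal gas
    return math.sqrt(gamma * R * T)

a_n2 = mixture_sound_speed(0.0, 300.0)   # pure nitrogen
a_he = mixture_sound_speed(1.0, 300.0)   # pure helium: far higher sound speed
```

The roughly threefold jump in sound speed from nitrogen to helium is what drives the higher particle impact velocities, and intermediate mixing ratios interpolate between the two, which is the trade-off the deposition-efficiency study quantifies.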
Petushek, Erich J.; Cokely, Edward T.; Ward, Paul; Durocher, John; Wallace, Sean; Myer, Gregory D
2015-01-01
Simple observational assessment of movement quality (e.g., drop vertical jump biomechanics) is an efficient and low cost method for anterior cruciate ligament (ACL) injury screening and prevention. A recently developed test (see www.ACL-IQ.org) has revealed substantial cross-professional/group differences in visual ACL injury risk estimation skill. Specifically, parents, sport coaches, and to some degree sports medicine physicians, would likely benefit from training or the use of decision sup...
Estimating the efficiency of sustainable development by South African mining companies
Oberholzer, Merwe; Prinsloo, Thomas Frederik
2011-01-01
The purpose of the study was to develop a model, using data envelopment analysis (DEA), in order to estimate the relative efficiency of nine South African listed mining companies in their efforts to convert environmental impact into economic and social gains for shareholders and other stakeholders. The environmental impact factors were used as input variables, that is, greenhouse gas emissions, water usage and energy usage, and the gains for shareholders and other stakeholders were used as ou...
EFFICIENT PU MODE DECISION AND MOTION ESTIMATION FOR H.264/AVC TO HEVC TRANSCODER
Zong-Yi Chen; Jiunn-Tsair Fang; Tsai-Ling Liao; Pao-Chi Chang
2014-01-01
H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes fast prediction unit (PU) decision and fast motion estimation. Given the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of motion vectors (MVs) extracted from H.264/AVC can be ...
Meyers, S.; Marnay, C.; Schumacher, K.; Sathaye, J.
2000-01-01
This paper describes a standardized method for establishing a multi-project baseline for a power system. The method provides an approximation of the generating sources that are expected to operate on the margin in the future for a given electricity system. It is most suitable for small-scale electricity generation and electricity efficiency improvement projects. It allows estimation of one or more carbon emissions factors that represent the emissions avoided by projects, striking a bala...
Directory of Open Access Journals (Sweden)
Feklistova Inessa
2016-02-01
Full Text Available The article presents a methodical approach to estimating the strategic management efficiency of enterprises of the region using cluster analysis, realized by means of a specially developed application package. The necessity of its application in the analytical work of the economic services of the region's enterprises is substantiated. It will make it possible to improve the quality of monitoring and to scientifically substantiate strategic administrative decisions.
Ambient vibrations efficiency for building dynamic characteristics estimate and seismic evaluation.
Dunand, François
2005-01-01
Ambient vibrations are low-amplitude mechanical vibrations generated by human and natural activities. Because they force engineering structures into vibration, these vibrations can be used to estimate structural dynamic characteristics. The goal of this study is to compare building dynamic characteristics derived from ambient vibrations with those derived from more energetic excitations (e.g. earthquakes). This study validates the efficiency of this method and shows that ambient vibration results ...
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space
Neelakantan, Arvind; Shankar, Jeevan; Passos, Alexandre; McCallum, Andrew
2015-01-01
There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination a...
International Nuclear Information System (INIS)
The amount of photosynthetically active radiation (PAR) absorbed by green vegetation is an important determinant of photosynthesis and growth. Methods for the estimation of fractional absorption of PAR (fPAR) for areas greater than 1 km2 using satellite data are discussed, and are applied to sites in the Sahel that have a sparse herb layer and tree cover of less than 5%. Using harvest measurements of seasonal net production, net production efficiencies are calculated. Variation in estimates of seasonal PAR absorption (APAR) caused by the atmospheric correction method and the relationship between surface reflectances and fPAR is considered. The use of maximum value composites of satellite NDVI to reduce the effect of the atmosphere is shown to produce inaccurate APAR estimates. In this data set, however, atmospheric correction using average optical depths was found to give good approximations of the fully corrected data. A simulation of canopy radiative transfer using the SAIL model was used to derive a relationship between canopy NDVI and fPAR. Seasonal APAR estimates assuming a 1:1 relationship between fPAR and NDVI overestimated the SAIL modeled results by up to 260%. The use of a modified 1:1 relationship, where fPAR was assumed to be linearly related to NDVI scaled between minimum (soil) and maximum (infinite canopy) values, underestimated the SAIL modeled results by up to 35%. Estimated net production efficiencies (ϵn, dry matter per unit APAR) fell in the range 0.12–1.61 g MJ−1 for above ground production, and in the range 0.16–1.88 g MJ−1 for total production. Sites with lower rainfall had reduced efficiencies, probably caused by physiological constraints on photosynthesis during dry conditions. (author)
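The scaled-NDVI relationship described above can be written in a few lines: fPAR is taken as linear in NDVI rescaled between bare-soil and infinite-canopy endpoints, APAR accumulates fPAR times incident PAR over the season, and production efficiency is production per unit APAR. The NDVI endpoints, PAR values and production figure below are placeholders, not the study's calibrated numbers.

```python
import numpy as np

def fpar_scaled(ndvi, ndvi_soil=0.1, ndvi_inf=0.85):
    """fPAR assumed linear in NDVI scaled between a bare-soil minimum and
    an infinite-canopy maximum (placeholder endpoint values), clipped to [0, 1]."""
    return np.clip((ndvi - ndvi_soil) / (ndvi_inf - ndvi_soil), 0.0, 1.0)

# Seasonal APAR and net production efficiency (synthetic numbers)
ndvi = np.array([0.15, 0.30, 0.55, 0.40])     # one NDVI per compositing period
par = np.array([600.0, 650.0, 620.0, 580.0])  # incident PAR per period, MJ m-2
apar = float((fpar_scaled(ndvi) * par).sum()) # seasonal absorbed PAR, MJ m-2
eps_n = 450.0 / apar   # g dry matter per MJ APAR, for 450 g m-2 seasonal production
```

The choice of endpoints is exactly where the 1:1 versus scaled-NDVI relationships in the abstract diverge, shifting APAR and hence the inferred efficiency.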
Technical and Scale Efficiency in Spanish Urban Transport: Estimating with Data Envelopment Analysis
Directory of Open Access Journals (Sweden)
I. M. García Sánchez
2009-01-01
Full Text Available The paper undertakes a comparative efficiency analysis of public bus transport in Spain using Data Envelopment Analysis. A procedure for efficiency evaluation was established with a view to estimating its technical and scale efficiency. Principal components analysis allowed us to reduce a large number of potential measures of supply- and demand-side and quality outputs to three statistical factors assumed in the analysis of the service. A statistical analysis (Tobit regression) shows that efficiency levels are negatively related to population density and the peak-to-base ratio. Nevertheless, efficiency levels are not related to the form of ownership (public versus private). The results obtained for Spanish public transport show that the average pure technical and scale efficiencies are situated at 94.91 and 52.02%, respectively. The excess of resources is around 6%, and the increase in accessibility of the service, one of the principal components summarizing the large number of output measures, is extremely important as a quality parameter in its performance.
Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision
Directory of Open Access Journals (Sweden)
Xiang Wu
2013-01-01
Full Text Available In pose estimation problems for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, this process is usually time consuming, especially in the outer space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for pose estimation based on vision technology. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on absolute orientation information. Experimental results show that this approach achieves comparable accuracy with SVD-based methods; however, the computational time is greatly reduced due to the use of the absolute orientation method.
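The absolute-orientation step that such iterations rely on, the best-fit rotation between two corresponding point sets, has a closed-form SVD solution. A Kabsch-style sketch on synthetic points (not the paper's full collinearity-error iteration):

```python
import numpy as np

def best_fit_rotation(P, Q):
    """Rotation R minimizing sum ||R p_i - q_i||^2 over centered point sets,
    via SVD of the cross-covariance matrix (absolute orientation / Kabsch)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Synthetic check: rotate random points by a known rotation and recover it
rng = np.random.default_rng(7)
P = rng.normal(size=(20, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T          # row-vector convention: q_i = R_true p_i
R_est = best_fit_rotation(P, Q)
```

Because this step is closed-form, embedding it inside an iteration over the object-space error keeps the per-iteration cost low, which is the source of the speedup the abstract reports.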
Schildbach, Christian; Ong, Duu Sheng; Hartnagel, Hans; Schmidt, Lorenz-Peter
2016-06-01
The ballistic electron wave swing device has previously been presented as a possible candidate for a simple power conversion technique to the THz -domain. This paper gives a simulative estimation of the power conversion efficiency. The harmonic balance simulations use an equivalent circuit model, which is also derived in this work from a mechanical model. To verify the validity of the circuit model, current waveforms are compared to Monte Carlo simulations of identical setups. Model parameters are given for a wide range of device configurations. The device configuration exhibiting the most conforming waveform is used further for determining the best conversion efficiency. The corresponding simulation setup is described. Simulation results implying a conversion efficiency of about 22% are presented.
Estimation of coupling efficiency of optical fiber by far-field method
Kataoka, Keiji
2010-09-01
Coupling efficiency into a single-mode optical fiber can be estimated from the far-field amplitudes of the incident beam and the fiber mode; in this paper we call this calculation the far-field method (FFM). The coupling efficiency by FFM is formulated including the effects of optical aberrations, vignetting of the incident beam, and misalignments of the optical fiber such as defocus, lateral displacement, and angular deviation of the fiber arrangement. As a result, it is shown that the coupling efficiency is proportional to the central intensity of the focused spot, i.e., the Strehl intensity of a virtual beam determined by the incident beam and the mode of the optical fiber. Using the FFM, a typical optical system in which a laser beam is coupled to an optical fiber by a lens of finite numerical aperture (NA) is analyzed for several amplitude distributions of the incident light.
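The core quantity behind any such coupling calculation is the normalized overlap integral between the incident field and the fiber mode. A one-dimensional Gaussian sketch (real fields, simple Riemann sums; not the paper's full aberrated, vignetted FFM formulation) whose numeric result can be checked against the known closed form:

```python
import numpy as np

def coupling_efficiency(e_in, e_mode, dx):
    """eta = |<e_in, e_mode>|^2 / (<e_in, e_in> <e_mode, e_mode>),
    evaluated on a uniform 1-D grid with spacing dx (real fields)."""
    num = ((e_in * e_mode).sum() * dx) ** 2
    den = ((e_in ** 2).sum() * dx) * ((e_mode ** 2).sum() * dx)
    return num / den

x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]
w1, w2 = 5.0, 8.0                 # field radii of beam and fiber mode (invented)
eta = coupling_efficiency(np.exp(-(x / w1) ** 2), np.exp(-(x / w2) ** 2), dx)

# Closed form for two 1-D Gaussian fields: eta = 2 w1 w2 / (w1^2 + w2^2)
eta_exact = 2.0 * w1 * w2 / (w1 ** 2 + w2 ** 2)
```

For matched fields the overlap is exactly 1, and any mismatch (here a pure waist mismatch) reduces it; aberrations and misalignment enter the FFM as phase and amplitude distortions of `e_in` in the same integral.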
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that, for the available data, the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and those modelled with WOFOST is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
Directory of Open Access Journals (Sweden)
Wenhua Han
2014-06-01
Full Text Available In this paper, efficient managing particle swarm optimization (EMPSO) for high-dimension problems is proposed to estimate defect profiles from magnetic flux leakage (MFL) signals. In the proposed EMPSO, a particle-pair model was built in order to strengthen the exchange of information among particles. For more efficient searching across different problem landscapes, a velocity updating scheme comprising three velocity updating models was also proposed. In addition, automatic particle selection for re-initialization was implemented to improve the chances of finding the optimum solution. The optimization results on six benchmark functions show that EMPSO performs well when optimizing 100-D problems. The defect simulation results demonstrate that the inversion technique based on EMPSO outperforms the one based on the self-learning particle swarm optimizer (SLPSO), and the estimated profiles remain close to the desired profiles in the presence of low noise in the MFL signal. The results estimated from a real MFL signal by the EMPSO-based inversion technique also indicate that the algorithm is capable of providing an accurate solution for the defect profile from real signals. Both the simulation and experimental results show that the computing time of the EMPSO-based inversion technique is 20%–30% lower than that of the SLPSO-based technique.
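The abstract above builds on the standard particle swarm update rule. A minimal generic PSO sketch is given below; it deliberately omits the paper's EMPSO-specific features (the particle-pair model, the three-model velocity scheme, and automatic re-initialization), and the swarm size, iteration count, and coefficient values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer (generic sketch)."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.72, 1.49, 1.49                 # common inertia/acceleration weights
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()           # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # cognitive pull toward personal best, social pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# toy objective: 5-D sphere function, minimum 0 at the origin
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=5)
```

In an MFL inversion setting, the objective `f` would instead measure the mismatch between the signal simulated from a candidate defect profile and the observed signal.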
International Nuclear Information System (INIS)
An approach for the efficient estimation of passive safety system functional reliability has been developed and applied to a simplified model of the passive residual heat transport system typical of sodium-cooled fast reactors, to demonstrate the reduction in computational time. The method is based on generating linear approximations to the best-estimate computer code using the technique of automatic reverse differentiation. This technique enables determination of a linear approximation to the code in a few runs, independent of the number of input variables for each response variable. The likely error due to the linear approximation is reduced by augmented sampling through the best-estimate code in the neighborhood of the linear failure surface, in a sub-domain where the linear approximation error is relatively large. The efficiency of this new approach is compared with importance sampling MCS, which uses the linear approximation near the failure region, and with direct Monte Carlo simulation. For the importance sampling MCS, variants employing random sampling with the Box-Muller algorithm and with a Markov chain algorithm are inter-compared. The significance of the results with respect to system reliability is also discussed.
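The core of the importance sampling MCS mentioned above — sampling near the linearized failure surface and reweighting by the density ratio — can be sketched for a hypothetical linear limit state in standard normal space. The limit state, reliability index, and sample size here are illustrative assumptions, not the passive safety system model of the study.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical linearized limit state in standard normal space:
# failure when g(u) = beta - a.u < 0 with |a| = 1, so exactly P_f = Phi(-beta).
beta = 3.0
a = np.array([0.6, 0.8])                    # unit vector toward the design point

rng = np.random.default_rng(1)
n = 20000
u = rng.standard_normal((n, 2)) + beta * a  # sample centred at the design point u* = beta*a
# importance weights: ratio of the nominal to the shifted Gaussian density,
# p(u)/q(u) = exp(-mu.u + |mu|^2/2) with mu = beta*a
w = np.exp(-u @ (beta * a) + 0.5 * beta**2)
pf_is = float(np.mean(w * ((beta - u @ a) < 0)))
pf_exact = norm_cdf(-beta)
```

Because most samples land near the failure surface, the estimator reaches a relative error of a few percent with far fewer code evaluations than direct Monte Carlo would need for a probability of order 10^-3.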
Efficient PU Mode Decision and Motion Estimation for H.264/AVC to HEVC Transcoder
Directory of Open Access Journals (Sweden)
Zong-Yi Chen
2014-04-01
Full Text Available H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes a fast prediction unit (PU) decision and fast motion estimation. Owing to the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of the motion vectors (MVs) extracted from H.264/AVC can be reused to predict the current encoding PU of HEVC. Furthermore, the MVs from H.264/AVC are used to decide the search range of the PU during motion estimation. Simulation results show that the proposed algorithm can save up to 53% of the encoding time while maintaining the rate-distortion (R-D) performance for HEVC.
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Directory of Open Access Journals (Sweden)
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and reliable estimates of the forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. Precision estimates of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, were assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5–6.0 times as many observations for plot sizes of 700–1900 m2 to achieve the same precision as an inventory supported by ALS data.
Efficient estimation of decay parameters in acoustically coupled-spaces using slice sampling.
Jasa, Tomislav; Xiang, Ning
2009-09-01
Room-acoustic energy decay analysis of acoustically coupled-spaces within the Bayesian framework has proven valuable for architectural acoustics applications. This paper describes an efficient algorithm termed slice sampling Monte Carlo (SSMC) for room-acoustic decay parameter estimation within the Bayesian framework. This work combines the SSMC algorithm and a fast search algorithm in order to efficiently determine decay parameters, their uncertainties, and inter-relationships with a minimum amount of required user tuning and interaction. The large variations in the posterior probability density functions over multidimensional parameter spaces imply that an adaptive exploration algorithm such as SSMC can have advantages over the existing importance sampling Monte Carlo and Metropolis-Hastings Markov Chain Monte Carlo algorithms. This paper discusses implementation of the SSMC algorithm, its initialization, and convergence using experimental data measured from acoustically coupled-spaces. PMID:19739741
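The adaptivity the abstract refers to comes from the slice sampling update itself, which adjusts its step to the local width of the distribution. Below is a generic univariate slice sampler with stepping-out and shrinkage, demonstrated on a toy standard normal target rather than a room-acoustic decay posterior; the width parameter and target are illustrative.

```python
import numpy as np

def slice_sample(logp, x0, n, width=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage (generic sketch)."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n)
    for i in range(n):
        # draw an auxiliary level uniformly under the (log) density at x
        log_y = logp(x) + np.log(rng.random())
        # step out an interval [l, r] that brackets the horizontal slice
        l = x - width * rng.random()
        r = l + width
        while logp(l) > log_y:
            l -= width
        while logp(r) > log_y:
            r += width
        # shrinkage: propose uniformly, shrink the interval on rejection
        while True:
            x1 = rng.uniform(l, r)
            if logp(x1) > log_y:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        out[i] = x
    return out

# toy target: standard normal, log-density up to a constant
samples = slice_sample(lambda t: -0.5 * t * t, x0=0.0, n=5000)
```

Unlike Metropolis-Hastings, no proposal scale has to be tuned in advance, which mirrors the "minimum amount of required user tuning" claimed for SSMC.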
Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method
Directory of Open Access Journals (Sweden)
Gheorghe Grigoras
2013-09-01
Full Text Available Estimation of power/energy losses constitutes an important tool for the efficient planning and operation of electric distribution systems, especially in a free energy market environment. For the further development of energy loss reduction plans and for determining the implementation priorities of different measures and investment projects, an analysis of the nature and causes of losses in the system and in its different parts is needed. In the paper, an efficient method for the power flow problem of medium-voltage distribution networks, under conditions of lacking information about the nodal loads, is presented. Using this method, the power/energy losses in the power transformers and lines can be obtained. The test results, obtained for a real 20 kV distribution network from Romania, confirmed the validity of the proposed method.
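For readers unfamiliar with loss calculation in radial feeders, the sketch below shows a minimal backward-sweep I²R loss computation for a hypothetical 4-bus 20 kV feeder. Unlike the paper's method, it assumes the nodal loads are fully known (unity power factor, nominal voltage throughout); the resistances and loads are made-up example values.

```python
import numpy as np

# Hypothetical 4-bus radial 20 kV feeder: bus 0 is the substation;
# branch i connects bus i to bus i+1, with the load attached at bus i+1.
V = 20e3                               # line-to-line voltage, V (nominal, pf = 1 assumed)
r = np.array([0.8, 1.2, 1.5])          # branch resistances, ohm per phase
p_load = np.array([400.0, 300.0, 250.0])   # nodal active loads, kW

# backward sweep: power carried by each branch = sum of all downstream loads
p_branch = np.cumsum(p_load[::-1])[::-1]        # kW -> [950, 550, 250]
i_branch = p_branch * 1e3 / (np.sqrt(3) * V)    # three-phase line current, A
losses_kw = 3 * i_branch**2 * r / 1e3           # per-branch I^2 R losses, kW
total_loss = float(losses_kw.sum())             # ~2.95 kW for these example values
```

In practice the sweep is iterated with voltage updates; the paper's contribution is estimating the unknown nodal loads that feed such a calculation.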
Study of grain alignment efficiency and a distance estimate for small globule CB4
International Nuclear Information System (INIS)
We study the polarization efficiency (defined as the ratio of polarization to extinction) of stars in the background of the small, nearly spherical and isolated Bok globule CB4 to understand the grain alignment process. A decrease in polarization efficiency with an increase in visual extinction is noticed. This suggests that the observed polarization in lines of sight which intercept a Bok globule tends to show dominance of dust grains in the outer layers of the globule. This finding is consistent with the results obtained for other clouds in the past. We determined the distance to the cloud CB4 using near-infrared photometry (2MASS JHKS colors) of moderately obscured stars located at the periphery of the cloud. From the extinction-distance plot, the distance to this cloud is estimated to be (459 ± 85) pc. (paper)
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
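As a point of reference for the estimator described above, here is a plain (non-movement-based) 3-D Gaussian kernel density estimate evaluated on synthetic fixes. The movement-based variant of the paper additionally conditions the kernel on consecutive GPS locations, which is omitted here; the bandwidth and synthetic data are illustrative.

```python
import numpy as np

def kde3d(points, query, bandwidth=1.0):
    """Plain 3-D Gaussian product-kernel density estimate at query locations."""
    # pairwise squared distances between queries and data points, scaled by h
    d = query[:, None, :] - points[None, :, :]          # shape (nq, np, 3)
    sq = np.sum(d * d, axis=2) / bandwidth**2
    # normalizing constant of an isotropic 3-D Gaussian kernel, times n
    norm = (2 * np.pi) ** 1.5 * bandwidth**3 * len(points)
    return np.exp(-0.5 * sq).sum(axis=1) / norm

rng = np.random.default_rng(0)
pts = rng.standard_normal((2000, 3))                # synthetic GPS fixes (x, y, z)
dens = kde3d(pts, np.array([[0.0, 0.0, 0.0]]), bandwidth=0.5)
```

The quadratic cost in the number of fixes, multiplied over a dense 3-D evaluation grid, is exactly why the paper's parallelized implementation matters.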
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing any artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2=0.82, RMSE=29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Directory of Open Access Journals (Sweden)
Robert Aidoo
2012-06-01
Full Text Available The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach. In addition to descriptive statistics, gross margin, net margin and marketing efficiency analyses were performed using the field data. There was a long chain of more than three channels through which yams moved from the producer to the final consumer. Yam marketing was found to be a profitable venture for all the key players in the chain. A net marketing margin of about GH¢15.52 (US$9.13) was obtained when the farmer himself sold 100 tubers of yams in the market rather than at the farm gate. The net marketing margin obtained by wholesalers was estimated at GH¢27.39 per 100 tubers of yam sold, equivalent to about 61% of the gross margin obtained. The net marketing margin for retailers was estimated at GH¢15.37, representing 61% of the gross margin obtained. A net marketing margin of GH¢33.91 was obtained for every 100 tubers of yam transported across Ghana’s borders by cross-border traders. Generally, the study found that the net marketing margin was highest for cross-border yam traders, followed by wholesalers. Yam marketing activities among retailers, wholesalers and cross-border traders were found to be highly efficient, with efficiency ratios in excess of 100%. However, yam marketing among producer-sellers was found to be inefficient, with an efficiency ratio of about 86%. The study recommended policies and strategies to be adopted by central and local government authorities to address key constraints such as poor road networks, limited financial resources, poor storage facilities and high cost of transportation that serve as
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs and...... by reduced outputs, we estimate hyperbolic distance functions that account for reduced technical efficiency both in terms of increased inputs and reduced outputs. We estimate these hyperbolic distance functions as “efficiency effect frontiers” with the Translog functional form and a dynamic...
International Nuclear Information System (INIS)
Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
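A minimal sketch of the kind of feed-forward network the GSR study describes, trained here with plain full-batch gradient descent on a synthetic 1-D regression task, since the MTSAT/GSR data are not available here. The architecture (one hidden layer of 16 tanh units), learning rate, and toy target are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the satellite-to-GSR mapping: learn y = sin(x) from samples
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X)

# one hidden layer of 16 tanh units, small random initial weights
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # network output

_, pred0 = forward(X)
loss0 = float(np.mean((pred0 - y) ** 2))   # MSE before training
for _ in range(5000):
    h, pred = forward(X)
    err = 2 * (pred - y) / len(X)          # dL/dpred for mean squared error
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    gh = err @ W2.T * (1 - h**2)           # backprop through tanh
    gW1 = X.T @ gh
    gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
_, pred = forward(X)
loss = float(np.mean((pred - y) ** 2))     # MSE after training
```

In the study, the inputs would instead be principal components of the MTSAT channels (plus elevation information) and the target the measured daily GSR at the training sites.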
The Use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm
International Nuclear Information System (INIS)
Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to the major nutrients N and P. This study was conducted to locate the most active root zone of oil palm and to apply urea fertilizer to such soils so as to obtain high N-efficiency. A carrier-free KH232PO4 solution was used to determine the active root zone of oil palm by applying 32P around the plant in twenty holes. After the most active root zone had been determined, urea was applied at this zone in one, two and three splits, respectively. To estimate the N-fertilizer efficiency of urea, 15N-labelled ammonium sulphate was used, added at the same amount of 16 g 15N plant-1. This study showed that the most active root zone was located at a distance of 1.5 m from the plant stem and at a soil depth of 5 cm. For urea, the highest N-efficiency was obtained by applying it in two splits. The use of 32P made it possible to distinguish several root zones: 1.5 m - 2.5 m from the plant stem at soil depths of 5 cm and 15 cm. Urea placed at the most active root zone, i.e. at a distance of 1.5 m from the plant stem and a depth of 5 cm, in one, two and three splits respectively, showed different N-efficiencies. The highest N-efficiency of urea was obtained when applying it in two splits at the most active root zone. (author)
Hardie, L C; Armentano, L E; Shaver, R D; VandeHaar, M J; Spurlock, D M; Yao, C; Bertics, S J; Contreras-Govea, F E; Weigel, K A
2015-04-01
Prior to genomic selection on a trait, a reference population needs to be established to link marker genotypes with phenotypes. For costly and difficult-to-measure traits, international collaboration and sharing of data between disciplines may be necessary. Our aim was to characterize the combining of data from nutrition studies carried out under similar climate and management conditions to estimate genetic parameters for feed efficiency. Furthermore, we postulated that data from the experimental cohorts within these studies can be used to estimate the net energy of lactation (NE(L)) densities of diets, which can provide estimates of energy intakes for use in the calculation of the feed efficiency metric, residual feed intake (RFI), and potentially reduce the effect of variation in energy density of diets. Individual feed intakes and corresponding production and body measurements were obtained from 13 Midwestern nutrition experiments. Two measures of RFI were considered, RFI(Mcal) and RFI(kg), which involved the regression of NE(L) intake (Mcal/d) or dry matter intake (DMI; kg/d) on 3 expenditures: milk energy, energy gained or lost in body weight change, and energy for maintenance. In total, 677 records from 600 lactating cows between 50 and 275 d in milk were used. Cows were divided into 46 cohorts based on dietary or nondietary treatments as dictated by the nutrition experiments. The realized NE(L) densities of the diets (Mcal/kg of DMI) were estimated for each cohort by totaling the average daily energy used in the 3 expenditures for cohort members and dividing by the cohort's total average daily DMI. The NE(L) intake for each cow was then calculated by multiplying her DMI by her cohort's realized energy density. Mean energy density was 1.58 Mcal/kg. Heritability estimates for RFI(kg) and RFI(Mcal) in a single-trait animal model did not differ, at 0.04 for both measures. Information about realized energy density could be useful in standardizing intake data from
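The RFI definition used above — the residual from regressing energy intake on the three energy expenditures — can be sketched on synthetic data as follows. The cow-level numbers below are made up for illustration; only the structure of the calculation follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600   # number of cows (matching the abstract's count, data synthetic)

# hypothetical per-cow energy expenditures (Mcal/d)
milk_e = rng.normal(30.0, 5.0, n)      # energy secreted in milk
gain_e = rng.normal(2.0, 1.0, n)       # energy gained/lost in body-weight change
maint_e = rng.normal(18.0, 2.0, n)     # maintenance energy
# observed NE(L) intake = expenditures plus an individual efficiency deviation
nel_intake = milk_e + gain_e + maint_e + rng.normal(0.0, 1.5, n)

# RFI(Mcal): residual from the least-squares regression of intake
# on the three expenditures (intercept included)
X = np.column_stack([np.ones(n), milk_e, gain_e, maint_e])
coef, *_ = np.linalg.lstsq(X, nel_intake, rcond=None)
rfi = nel_intake - X @ coef            # negative RFI = more efficient than predicted
```

RFI(kg) follows the same pattern with dry matter intake as the dependent variable; by construction the residuals average to zero across the population.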
Efficient quantum state-estimation and feedback on trapped ions using unsharp measurement
Uys, Hermann; Burd, Shaun; Choudhary, Sujit; Goyal, Sandeep; Konrad, Thomas
2013-05-01
Parameter estimation and closed-loop feedback control is ubiquitous in every branch of classical science and engineering. Similar control of quantum systems is usually impossible due to two difficulties. Firstly, quantum phenomena are often short lived due to decoherence, and secondly, attempts to estimate the state of a quantum system through projective measurement strongly disrupt the dynamics. One alternative is to use unsharp measurements, which are less invasive but lead to less information gain about the system. A sequence of unsharp measurements, however, carried out in the presence of stronger dynamics, promises real-time state monitoring and control via feedback. Such measurements can be realised by periodically entangling an auxiliary quantum system with the target quantum system, and then carrying out projective measurements on the auxiliary system only. In this talk we discuss an efficient method of estimating both the state of a two-level system and the strength of its coupling to a drive field using unsharp measurement. We then model closed-loop feedback control of the two-level dynamics, and explore the level of control over the parameter regime of the model. Finally, we summarize the prospects for implementing the scheme using trapped ions. This work was partially funded by the South African National Research Foundation.
Xu, Huihui; Jiang, Mingyan
2015-07-01
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
Directory of Open Access Journals (Sweden)
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Providing the required efficiency indexes of a radioelectronic complex system at the stage of behavior algorithm design is a relevant task today. Several methods are used for solving this task, and an intercomparison of them is required. Main part. For the behavior algorithm of a radioelectronic complex system, four mathematical models were built by two known methods (the space-of-states method and the algorithmic-algebras method) and by the new scheme-of-paths method. A scheme of paths is a compact representation of the behavior of the radioelectronic complex system and is formed easily and directly from the flowchart of the behavior algorithm. Efficiency indexes of the tested behavior algorithm - the probability and mean time of successful performance - were obtained. An intercomparison of the estimated results was carried out. Conclusion. The model of the behavior algorithm constructed using the scheme-of-paths method gives commensurate values of the efficiency indexes in comparison with the mathematical models of the same behavior algorithm obtained by the space-of-states and algorithmic-algebras methods.
Marigodov, V. K.
2011-01-01
The possibility of using linguistic diagnostics to estimate the efficiency and noise immunity of a radio communication system is shown. Membership functions for one of the system parameters are built on the basis of direct expert questioning.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted scenarios. To reduce the computational cost, data streams are often studied in forms of condensed representation, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied for visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York city. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
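The core KDE-Track idea — maintain the PDF only at fixed resampling points, update it per arriving item, and interpolate in between — can be sketched as below. This is a simplified version: the thesis's adaptive resampling (varying point density with PDF curvature) and any forgetting of old data are omitted, and the grid and bandwidth are illustrative choices.

```python
import numpy as np

class OnlineKDE:
    """Sketch of an online KDE maintained at fixed resampling points."""

    def __init__(self, grid, bandwidth=0.3):
        self.grid = np.asarray(grid)         # resampling points
        self.h = bandwidth
        self.density = np.zeros_like(self.grid)
        self.n = 0

    def update(self, x):
        # running average of Gaussian kernels evaluated at the resampling points
        k = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) \
            / (self.h * np.sqrt(2 * np.pi))
        self.n += 1
        self.density += (k - self.density) / self.n

    def pdf(self, x):
        # linear interpolation between resampling points
        return np.interp(x, self.grid, self.density)

est = OnlineKDE(np.linspace(-4.0, 4.0, 81))
rng = np.random.default_rng(3)
for x in rng.standard_normal(5000):          # synthetic stream
    est.update(x)
```

Each update costs O(m) for m resampling points, independent of how many stream items have been seen, which is what makes the estimator suitable for unbounded streams.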
International Nuclear Information System (INIS)
The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential sector in the EU-27 member states for the period 1996 to 2009. The estimates for the energy efficiency confirm that the EU residential sector indeed holds a relatively high potential for energy savings from reduced inefficiency. Therefore, despite the common objective to decrease ‘wasteful’ energy consumption, considerable variation in energy efficiency between the EU member states is established. Furthermore, an attempt is made to evaluate the impact of energy-efficiency measures undertaken in the EU residential sector by introducing an additional set of variables into the model and the results suggest that financial incentives and energy performance standards play an important role in promoting energy efficiency improvements, whereas informative measures do not have a significant impact. - Highlights: • The level of energy efficiency of the EU residential sector is estimated. • Considerable potential for energy savings from reduced inefficiency is established. • The impact of introduced energy-efficiency policy measures is also evaluated. • Financial incentives are found to promote energy efficiency improvements. • Energy performance standards also play an important role
Mohammed Abo-Zahhad; Sabah M. Ahmed; Ahmed Zakaria
2012-01-01
This paper presents an efficient electrocardiogram (ECG) signals compression technique based on QRS detection, estimation, and 2D DWT coefficients thresholding. Firstly, the original ECG signal is preprocessed by detecting QRS complex, then the difference between the preprocessed ECG signal and the estimated QRS-complex waveform is estimated. 2D approaches utilize the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. The error signal is cut a...
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Flammia, Steven T; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has only a few non-zero eigenvalues, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We exhibit two complementary ways of making this intuition precise. On the one hand, we show that the sample complexity decreases with the rank of the density operator. In other words, fewer copies of the state need to be prepared in order to estimate a low-rank density matrix. On the other hand---and maybe more surprisingly---we prove that unknown low-rank states may be reconstructed using an incomplete set of measurement settings. The method does not require any a priori assumptions about the unknown state, uses only simple Pauli measurements, and can be efficiently and unconditionally certified. Our results extend earlier work on compressed tomography, building on ideas from compressed sensing and matrix completion. Instrumental to the improved analysis are new error bounds for compressed tomography, based on the ...
Indian Academy of Sciences (India)
Nicolle V Sydney; Emygdio La Monteiro-Filho
2011-03-01
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving the manual wear of teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for growth layer reading. Growth layers were evidenced in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth, there was a significant decrease in growth layer group thickness, thus hindering the counting of layers. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear and decalcification technique. Analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the use of the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of the calf, young and adult classes, which is important for general population studies.
Mökkönen, Harri; Jónsson, Hannes
2016-01-01
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates i...
An Efficient Algorithm for Contact Angle Estimation in Molecular Dynamics Simulations
Directory of Open Access Journals (Sweden)
Sumith YD
2015-01-01
Full Text Available It is important to find the contact angle of a liquid to understand its wetting properties, capillarity and surface interaction energy with a surface. Estimating the contact angle from Non-Equilibrium Molecular Dynamics (NEMD), where changes in contact angle must be tracked over a period of time, is challenging compared to estimation from a single image in an experimental measurement. Often such molecular simulations involve a finite number of molecules above a metallic or non-metallic substrate, coupled to a thermostat. Identifying the profile of the droplet formed during this time is difficult and computationally expensive if processed as an image. In this paper a new algorithm is explained which can efficiently calculate the time-dependent contact angle from a NEMD simulation simply by processing the molecular coordinates. The algorithm implements several simple yet accurate mathematical methods, in particular to remove vapor molecules and noisy data, thereby calculating the contact angle with greater accuracy. To further demonstrate the capability of the algorithm, a simulation study is reported which compares the influence of different thermostats on the contact angle in Molecular Dynamics (MD) simulations of water over a platinum surface.
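The coordinate-based idea described in the abstract can be illustrated with a minimal sketch (an independent toy, not the paper's actual algorithm): bin the droplet's radial extent by height, reject sparse vapor bins, fit a circle to the edge profile, and read the contact angle off the tangent at the surface. The bin count, percentile cutoff, and axisymmetric circle fit are illustrative assumptions.

```python
import numpy as np

def contact_angle(xyz, z_surface=0.0, nbins=20):
    """Estimate the contact angle (degrees) of an axisymmetric droplet
    directly from molecular coordinates, without rendering an image.

    For each height bin, the droplet edge is taken as a high percentile of the
    radial distance (crudely rejecting stray vapor molecules); a circle
    centered on the droplet axis is then fitted to the (r, z) edge profile.
    """
    z = xyz[:, 2] - z_surface
    r = np.hypot(xyz[:, 0] - xyz[:, 0].mean(), xyz[:, 1] - xyz[:, 1].mean())
    edges = np.linspace(0.0, z.max(), nbins + 1)
    zs, rs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (z >= lo) & (z < hi)
        if m.sum() > 10:                        # skip sparse (vapor) bins
            zs.append(0.5 * (lo + hi))
            rs.append(np.percentile(r[m], 95))  # droplet edge in this slab
    zs, rs = np.array(zs), np.array(rs)
    # Fit r^2 + (z - h)^2 = R^2, i.e. r^2 + z^2 = 2*h*z + (R^2 - h^2)
    A = np.column_stack([2.0 * zs, np.ones_like(zs)])
    h, c = np.linalg.lstsq(A, rs**2 + zs**2, rcond=None)[0]
    R = np.sqrt(c + h**2)
    # Tangent of the fitted circle at z = 0 gives cos(theta) = -h / R
    return np.degrees(np.arccos(np.clip(-h / R, -1.0, 1.0)))
```

A hemispherical cap of molecules should return roughly 90 degrees; a time-dependent angle follows by applying the function frame by frame.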
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
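The smoothing-then-thresholding global update can be sketched in a few lines (the paper supplies Matlab code; this is an independent Python illustration with a toy box-filter smoother, not the authors' code): alternate a label update, obtained by smoothing each label's data fidelity and taking a pointwise argmin, with re-estimation of each region's mean.

```python
import numpy as np

def box_smooth(u, iters=4):
    """Cheap neighbor-averaging smoother (a stand-in for the paper's kernel)."""
    for _ in range(iters):
        u = (u + np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 5.0
    return u

def multilabel_chan_vese(img, n_labels=2, n_iter=10):
    """Joint multi-label partition + region-parameter estimation, Chan-Vese
    style: (1) global label update by smoothing the per-label fidelity and
    taking an argmin (the 'thresholding' step); (2) re-estimate region means."""
    means = np.quantile(img, np.linspace(0.1, 0.9, n_labels))
    labels = np.zeros(img.shape, dtype=int)
    for _ in range(n_iter):
        cost = np.stack([box_smooth((img - m) ** 2) for m in means])
        labels = cost.argmin(axis=0)
        means = np.array([img[labels == k].mean() if (labels == k).any()
                          else means[k] for k in range(n_labels)])
    return labels, means
```

Because the label update is global (no local curve evolution), the result does not get trapped by fine image details, which mirrors the robustness claim above.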
Massimo Filippini; Hunt, Lester C.; Jelena Zoric
2013-01-01
The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential secto...
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
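The bootstrap component at the heart of the uncertainty estimation can be illustrated in a few lines (this toy omits the GPU pipeline, sinograms, and reconstruction entirely; resampling events with replacement is the core idea). The region tags and the SUVR-like ratio statistic are invented for the illustration.

```python
import numpy as np

def bootstrap_uncertainty(events, statistic, n_boot=200, seed=0):
    """Estimate the uncertainty of any statistic of list-mode data by
    resampling events with replacement and recomputing the statistic."""
    rng = np.random.default_rng(seed)
    n = len(events)
    reps = np.array([statistic(events[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return reps.mean(), reps.std(ddof=1)

# Toy list-mode stream: each event is tagged with the region it hit
# (0 = reference region, 1 = target region); the statistic is an
# SUVR-like target/reference count ratio.
events = np.repeat([0, 1], [8000, 2000])
ratio = lambda ev: np.count_nonzero(ev == 1) / np.count_nonzero(ev == 0)
mean_ratio, sd_ratio = bootstrap_uncertainty(events, ratio)
```

The spread across realisations estimates the statistic's uncertainty without any analytic error propagation, which is why it applies to arbitrary components of the image generation chain.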
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
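The linear-map variant of implicit sampling mentioned above can be sketched generically (an analytic toy posterior, not the TOUGH2 pilot-point setup): draw samples from the Gaussian approximation at the MAP point and reweight them by the mismatch between the quadratic model and the true negative log-posterior.

```python
import numpy as np

def implicit_sampling_linear_map(neg_log_post, mu, H, n_samples, seed=0):
    """Implicit sampling with a linear map: place samples in the
    high-probability region around the MAP point mu using the Hessian H of
    the negative log-posterior, then weight by the mismatch between the
    Gaussian approximation and the true posterior."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(H)
    F0 = neg_log_post(np.asarray(mu, float))
    xs = np.empty((n_samples, len(mu)))
    logw = np.empty(n_samples)
    for i in range(n_samples):
        xi = rng.standard_normal(len(mu))
        xs[i] = mu + np.linalg.solve(L.T, xi)   # map reference Gaussian sample
        logw[i] = F0 + 0.5 * xi @ xi - neg_log_post(xs[i])
    logw -= logw.max()
    w = np.exp(logw)
    return xs, w / w.sum()
```

For an exactly Gaussian posterior the weights are uniform; the further the posterior departs from its quadratic model around the MAP, the more uneven the weights become, which is also where a locally built surrogate (as discussed above) remains usable.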
International Nuclear Information System (INIS)
In this report, an original probabilistic model aimed at assessing the efficiency of a particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through-wall cracks in the residual-stress-dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to modelling the uncertainties connected to the non-destructive examination technique (e.g., measurement errors, non-detection probability). First- and second-order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. The model also accounts for scatter in residual and operational stresses due to random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs. load data, the non-destructive examination records of crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL)
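As a minimal illustration of the FORM machinery referenced here (a generic textbook example, not the steam-generator crack model): the Hasofer-Lind/Rackwitz-Fiessler iteration locates the most probable failure point of a limit state in standard normal space, and the failure probability follows from the reliability index beta. The limit state below is invented.

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form(g, grad_g, u0, n_iter=50):
    """First Order Reliability Method: HL-RF iteration for the design point
    of the limit state g(u) = 0 (failure when g < 0) in standard normal
    space; Pf is approximated as Phi(-beta)."""
    u = np.asarray(u0, float)
    for _ in range(n_iter):
        gu, dg = g(u), grad_g(u)
        u = (dg @ u - gu) / (dg @ dg) * dg   # HL-RF update
    beta = np.linalg.norm(u)                 # reliability index
    return beta, phi(-beta)

# Toy limit state: capacity of 3 standard deviations against one normal load
g = lambda u: 3.0 - u[0]
grad_g = lambda u: np.array([-1.0, 0.0])
beta, pf = form(g, grad_g, np.zeros(2))
```

SORM refines this estimate with the curvature of the limit state at the design point; for the linear toy case above, FORM is exact.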
An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation
Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai
2012-01-01
Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, which accumulates additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also to estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
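The unscented transformation used above to handle input uncertainty deterministically can be sketched in its standard generic form (this is the plain transform with the kappa parameterization, not the full prognostics algorithm): a small, fixed set of sigma points carries a mean and covariance through a nonlinear map.

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear map f using 2n+1 sigma
    points; deterministic, unlike Monte Carlo propagation."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # columns scale the spread
    pts = ([mean] + [mean + S[:, i] for i in range(n)]
                  + [mean - S[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in pts])
    ym = w @ ys                                  # transformed mean
    yc = sum(wi * np.outer(y - ym, y - ym) for wi, y in zip(w, ys))
    return ym, yc
```

Only 2n+1 function evaluations are needed, which is the source of the computational savings mentioned above; for a linear map the transform is exact.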
Maruta, Kazuki; Iwakuni, Tatsuhiko; Ohta, Atsushi; Arai, Takuto; Shirato, Yushi; Kurosaki, Satoshi; Iizuka, Masataka
2016-01-01
Drastic improvements in transmission rate and system capacity are required for 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high-order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve robustness against channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of rigorous first-eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door to practical millimeter wave communication with enhanced system capacity. PMID: 27399715
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Directory of Open Access Journals (Sweden)
Marcelo E. Cassiolato
2002-06-01
Full Text Available Water-soluble plant organic compounds have been proposed as efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts in neutralizing soil acidity. Plant samples were dried at 65ºC for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to the soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity and the volume of NaOH solution spent to neutralize the H+ ions of the plant extracts were correlated with the effect of the plant extract on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts in neutralizing soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.
International Nuclear Information System (INIS)
Maize, a high-yielding crop, is very important for countries like Pakistan, where it is the third cereal crop after wheat and rice. Maize accounts for 4.8 percent of the total cropped area and 4.82 percent of the value of agricultural production. It is grown all over the country, but the major areas are Sahiwal, Okara and Faisalabad. Chiniot is one of the distinct agroecological domains of central Punjab for maize cultivation, which is why this district was selected for the study, in which the technical efficiency of hybrid maize farmers was estimated. Primary data from 120 farmers, 40 from each of the three tehsils of Chiniot, were collected in 2011. Causes of some farmers obtaining lower yields than others while using the same input bundle were estimated. The managerial factors causing inefficiency of production were also measured. The average technical efficiency was estimated to be 91 percent, while it was 94.8, 92.7 and 90.8 percent for large, medium and small farmers, respectively. A stochastic frontier production model was used to measure technical efficiency. The statistical software Frontier 4.1 was used to analyse the data and generate inferences, because the efficiency estimates are produced as a direct output of the package. It was concluded that efficiency can be enhanced by addressing the inefficiency arising from environmental variables, farmers' personal characteristics and farming conditions. (author)
Brusset, Xavier
2014-01-01
We study the pricing problem between two firms when the manufacturer’s willingness to pay (wtp) for the supplier’s good is not known by the latter. We demonstrate that it is in the interest of the manufacturer to hide this information from the supplier. The precision of the information available to the supplier modifies the rent distribution. The risk of opportunistic behaviour entails a loss of efficiency in the supply chain. The model is extended to the case of a supplier submitting offers ...
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
International Nuclear Information System (INIS)
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
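The flavor of the approach can be sketched with an independent toy re-implementation (not the authors' code): a damped, L-M-style iteration in which the step equations use the Poisson MLE gradient and expected curvature in place of the least-squares ones. The numerical Jacobian and damping schedule here are illustrative choices.

```python
import numpy as np

def fit_poisson_mle(t, counts, model, p0, n_iter=100):
    """Levenberg-Marquardt-style fit of a counting histogram using the
    Poisson MLE objective 2*sum(m - d*ln m) instead of least squares."""
    p = np.asarray(p0, dtype=float)
    lam = 1e-3

    def objective(q):
        m = model(t, q)
        if np.any(m <= 0):
            return np.inf
        return 2.0 * np.sum(m - counts * np.log(m))

    for _ in range(n_iter):
        m = model(t, p)
        J = np.empty((len(t), len(p)))            # numerical model Jacobian
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (model(t, p + dp) - m) / dp[j]
        grad = 2.0 * J.T @ (1.0 - counts / m)     # MLE gradient
        H = 2.0 * J.T @ ((counts / m**2)[:, None] * J)  # expected curvature
        for _ in range(20):                       # damped step, backtracking
            step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -grad)
            if objective(p + step) < objective(p):
                p, lam = p + step, max(lam / 10.0, 1e-12)
                break
            lam = min(lam * 10.0, 1e12)
    return p
```

With few counts per bin, this fit avoids the amplitude bias that a least-squares fit of the same histogram would show.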
Marlin, Benjamin; De Freitas, Nando
2012-01-01
Standard maximum likelihood estimation cannot be applied to discrete energy-based models in the general case because the computation of exact model probabilities is intractable. Recent research has seen the proposal of several new estimators designed specifically to overcome this intractability, but virtually nothing is known about their theoretical properties. In this paper, we present a generalized estimator that unifies many of the classical and recently proposed estimators. We use results...
Efficient estimation of the robustness region of biological models with oscillatory behavior.
Directory of Open Access Journals (Sweden)
Mochamad Apri
Full Text Available Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. In particular, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time-consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well-understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find good agreement with the existing knowledge about this model. When we apply the new method to the high-dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was realized until now.
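The core question, how far a parameter can move before oscillations die, can be illustrated with a one-parameter toy (not the paper's method, which handles many parameters and irregular regions): simulate, detect sustained oscillation in the trailing part of the trajectory, and bisect between an oscillating and a non-oscillating parameter value. The Hopf normal form is used because its limit cycle exists exactly for mu > 0.

```python
import numpy as np

def oscillates(f, x0, p, T=150.0, dt=0.05, tail=0.25, tol=0.05):
    """RK4-integrate dx/dt = f(x, p); declare sustained oscillation if the
    peak-to-peak variation over the trailing fraction of the run exceeds tol."""
    x = np.asarray(x0, float)
    n = int(T / dt)
    traj = np.empty(n)
    for i in range(n):
        k1 = f(x, p)
        k2 = f(x + 0.5 * dt * k1, p)
        k3 = f(x + 0.5 * dt * k2, p)
        k4 = f(x + dt * k3, p)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = x[0]
    tail_part = traj[int((1.0 - tail) * n):]
    return tail_part.max() - tail_part.min() > tol

def robustness_edge(f, x0, p_osc, p_no_osc, n_bisect=12):
    """Bisect between an oscillating and a non-oscillating parameter value
    to locate the boundary of a one-dimensional robustness region."""
    for _ in range(n_bisect):
        mid = 0.5 * (p_osc + p_no_osc)
        if oscillates(f, x0, mid):
            p_osc = mid
        else:
            p_no_osc = mid
    return 0.5 * (p_osc + p_no_osc)

# Hopf normal form: a stable limit cycle of radius sqrt(mu) exists for mu > 0
hopf = lambda x, mu: np.array([mu * x[0] - x[1] - x[0] * (x[0]**2 + x[1]**2),
                               x[0] + mu * x[1] - x[1] * (x[0]**2 + x[1]**2)])
edge = robustness_edge(hopf, [0.05, 0.0], 0.5, -0.5)
```

The paper's contribution is doing this kind of boundary search efficiently in many dimensions at once, where the region is not a simple interval or box.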
Estimating Forward Pricing Function: How Efficient is Indian Stock Index Futures Market?
Prasad Bhattacharaya; Harminder Singh
2006-01-01
This paper uses Indian stock futures data to explore unbiased expectations and efficient market hypothesis. Having experienced voluminous transactions within a short time span after its establishment, the Indian stock futures market provides an unparalleled case for exploring these issues involving expectation and efficiency. Besides analyzing market efficiency between cash and futures prices using cointegration and error correction frameworks, the efficiency hypothesis is also investigated a...
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
Directory of Open Access Journals (Sweden)
Roliana Ibrahim
2012-09-01
Full Text Available Development effort is an undeniable part of project management which considerably influences the success of a project. Inaccurate and unreliable estimation of effort can easily lead to the failure of a project. Due to its particular characteristics, accurate estimation of effort in software projects is a vital management activity that must be carefully done to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of the estimates is not satisfactory, and attempts to improve the performance of estimation methods continue. Prior research conducted in this area has focused on numerical and quantitative approaches, and there are few research works that investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate the situation of an organization in terms of effort estimation. The proposed framework includes various indicators which cover the critical issues in the field of software development effort estimation. Since the capabilities and shortcomings of organizations for effort estimation are not the same, the proposed indicators can lead to a systematic approach in which the strengths and weaknesses of organizations in the field of effort estimation are discovered.
Horton, G.E.; Dubreuil, T.L.; Letcher, B.H.
2007-01-01
Our goal was to understand movement and its interaction with survival for populations of stream salmonids at long-term study sites in the northeastern United States by employing passive integrated transponder (PIT) tags and associated technology. Although our PIT tag antenna arrays spanned the stream channel (at most flows) and were continuously operated, we are aware that aspects of fish behavior, environmental characteristics, and electronic limitations influenced our ability to detect 100% of the emigration from our stream site. Therefore, we required antenna efficiency estimates to adjust observed emigration rates. We obtained such estimates by testing a full-scale physical model of our PIT tag antenna array in a laboratory setting. From the physical model, we developed a statistical model that we used to predict efficiency in the field. The factors most important for predicting efficiency were external radio frequency signal and tag type. For most sampling intervals, there was concordance between the predicted and observed efficiencies, which allowed us to estimate the true emigration rate for our field populations of tagged salmonids. One caveat is that the model's utility may depend on its ability to characterize external radio frequency signals accurately. Another important consideration is the trade-off between the volume of data necessary to model efficiency accurately and the difficulty of storing and manipulating large amounts of data.
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson...
Institute of Scientific and Technical Information of China (English)
FAN JianQing; ZHOU Yong; CAI JianWen; CHEN Min
2009-01-01
Multivariate failure time data arise frequently in survival analysis. A commonly used technique is the working independence estimator for marginal hazard models. Two natural questions are how to improve the efficiency of the working independence estimator and how to identify the situations under which such an estimator has high statistical efficiency. In this paper, three weighted estimators are proposed based on three different optimality criteria in terms of the asymptotic covariance of weighted estimators. Simplified closed-form solutions are found, which always outperform the working independence estimator. We also prove that the working independence estimator has high statistical efficiency when the asymptotic covariance of the derivatives of the partial log-likelihood functions is nearly exchangeable or diagonal. Simulations are conducted to compare the performance of the weighted estimators and the working independence estimator. A data set from the Busselton population health surveys is analyzed using the proposed estimators.
International Nuclear Information System (INIS)
The effective delayed neutron fraction, βeff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently the Monte Carlo (MC) kinetics parameter estimation methods by using the adjoint flux calculated in the MC forward simulations have been developed and successfully applied for reactor analyses. However these adjoint estimation methods based on the cycle-by-cycle genealogical table require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The algorithm of the new method is derived and its effectiveness is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and critical facilities. (author)
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Efficient focusing scheme for transverse velocity estimation using cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
The blood velocity can be estimated by cross-correlation of received RF signals, but only the velocity component along the beam direction is found. A previous paper showed that the complete velocity vector can be estimated if received signals are focused along lines parallel to the direction of the flow. Here a weakly focused transmit field was used along with a simple delay-sum beamformer. A modified method for performing the focusing by employing a special calculation of the delays is introduced, so that a focused emission can be used. The velocity estimation was studied through extensive...
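A minimal numerical sketch of the underlying cross-correlation principle (not the focused-beam scheme itself) may help: the echo from a scatterer moving at axial velocity v shifts by t_s = 2·v·T_prf/c between emissions, so the lag of the cross-correlation peak recovers v. All parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

fs = 100e6        # RF sampling rate [Hz] (assumed)
f0 = 5e6          # pulse centre frequency [Hz] (assumed)
c = 1540.0        # speed of sound [m/s]
T_prf = 1e-4      # pulse repetition period [s] (assumed)
v_true = 0.5      # axial scatterer velocity [m/s]

t = np.arange(0, 4e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

# Between two emissions the scatterer moves v*T_prf, shifting the echo
# by t_s = 2*v*T_prf/c samples*(1/fs) in the second received line.
n_shift = int(round(2 * v_true * T_prf / c * fs))
line1 = np.zeros(4096)
line2 = np.zeros(4096)
line1[1000:1000 + pulse.size] = pulse
line2[1000 + n_shift:1000 + n_shift + pulse.size] = pulse

# Lag of the cross-correlation peak gives the echo shift, hence velocity.
xc = np.correlate(line2, line1, mode="full")
lag = np.argmax(xc) - (line1.size - 1)
v_est = lag / fs * c / (2 * T_prf)
```

Only the axial component is recovered here; the paper's contribution is to apply the same estimator along lines beamformed parallel to the flow so that the transverse component becomes observable.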
Updated estimation of energy efficiencies of U.S. petroleum refineries.
Energy Technology Data Exchange (ETDEWEB)
Palou-Rivera, I.; Wang, M. Q. (Energy Systems)
2010-12-08
Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels including gasoline and diesel. Energy use in petroleum refineries is a key component of well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product-specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
Using Data Envelopment Analysis approach to estimate the health production efficiencies in China
Institute of Scientific and Technical Information of China (English)
ZHANG Ning; HU Angang; ZHENG Jinghai
2007-01-01
By using the Data Envelopment Analysis approach, we treat the health production system of a given province as a Decision Making Unit (DMU), identify its inputs and outputs, evaluate its technical efficiency in 1982, 1990 and 2000 respectively, and further analyze the relationship between efficiency scores and social-environmental variables. This paper reports several interesting findings. Firstly, the provinces on the frontier differ from year to year, but the provinces far from the frontier remain the same; the average efficiency of health production made significant progress from 1982 to 2000. Secondly, all provinces in China can be divided into six categories in terms of health production outcome and efficiency, and each category has a specific approach to improving health production efficiency. Thirdly, significant differences in health production efficiency were found among the eastern, middle and western regions of China, and between the eastern and middle regions. Finally, there is a significant positive relationship between population density and health production efficiency, but a negative (though not very significant) relationship between the proportion of public health expenditure in total expense and efficiency; this may result from an inappropriate tendency of public expenditure. The relationship between the ability to pay for health care services and efficiency in urban areas is opposite to that in rural areas. One possible reason is the very different income and public-service treatment of rural and urban residents. Therefore, it is necessary to adjust health policies and service provisions so that they are specifically designed for different population groups.
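For readers unfamiliar with DEA, a minimal input-oriented CCR envelopment model can be sketched with a linear-programming solver. The provinces, inputs and outputs below are invented toy data, not the study's.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 provinces (DMUs), 2 inputs (e.g. health spending, beds per
# 1000 people), 1 output (e.g. a life-expectancy index). Illustrative only.
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0], [2.5, 3.5]])
Y = np.array([[70.0], [72.0], [68.0], [75.0], [71.0]])
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(k):
    # Input-oriented CCR envelopment form for DMU k:
    #   min theta  s.t.  X'lam <= theta * x_k,  Y'lam >= y_k,  lam >= 0
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_ub = np.block([[-X[k][:, None], X.T],        # X'lam - theta*x_k <= 0
                     [np.zeros((s, 1)), -Y.T]])    # -Y'lam <= -y_k
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [ccr_efficiency(k) for k in range(n)]    # 1.0 = on the frontier
```

DMUs with a score of 1 lie on the efficient frontier; scores below 1 give the proportional input contraction that a best-practice peer combination would achieve.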
The use of 32P and 15N to estimate fertilizer efficiency in oil palm
International Nuclear Information System (INIS)
Improving efficiency of use of fertilizers has attracted a great deal of interest on oil-palm estates because of increasing input costs. It is assumed that higher efficiency of use of fertilizers for estate crops, including oil palm, would result in significant savings and less environmental pollution. One way to enhance efficiency of use of fertilizers by oil palm is to apply them where the most active roots are located. Previous work has indicated the possibility of determining the most active roots of tea and cinchona by using 32P. In this experiment, 32P was again used, to determine the locations of the most active roots of oil palm trees
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
Li, Ping
2008-01-01
Compressed Counting (CC) was recently proposed for approximating the $\\alpha$th frequency moments of data streams, for $0 < \\alpha \\le 2$. Previous studies used the standard algorithm based on {\\em symmetric stable random projections} to approximate the $\\alpha$th frequency moments and the entropy. Based on maximally-skewed stable random projections, Compressed Counting (CC) dramatically improves symmetric stable random projections, especially when $\\alpha\\approx 1$. This study applies CC to estimate the R\\'enyi entropy, the Tsallis entropy, and the Shannon entropy. Our experiments on some Web crawl data demonstrate significant improvements over previous studies. When estimating the frequency moments, the R\\'enyi entropy, and the Tsallis entropy, the improvements of CC, in terms of accuracy, appear to approach "infinity" as $\\alpha\\to1$. When estimating Shannon entropy using R\\'enyi entropy or Tsallis entropy, the improvements of CC, in terms of accuracy, are roughly 20- to 50-fold. When estimating the Shannon entropy fro...
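The Shannon-from-Rényi idea the abstract exploits can be illustrated directly, using exact frequency moments rather than CC's stable random projections: the Rényi entropy converges to the Shannon entropy as $\alpha \to 1$, so an accurate moment estimate near $\alpha = 1$ yields an accurate entropy estimate. The toy distribution below is an assumption for illustration.

```python
import numpy as np

# Illustration of the alpha -> 1 limit (with exact moments, not a sketch):
# Renyi entropy H_a = log(sum p_i^a) / (1 - a) tends to Shannon entropy.
counts = np.array([50, 30, 10, 5, 3, 2], dtype=float)
p = counts / counts.sum()

def renyi(alpha):
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

shannon = -(p * np.log(p)).sum()
approx = renyi(1.0 + 1e-4)   # Renyi entropy just above alpha = 1
```

CC's contribution is that its maximally-skewed projections keep the moment estimates accurate precisely in this $\alpha \approx 1$ regime, where symmetric projections degrade.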
Ryabova, A. V.; Stratonnikov, Aleksandr A.; Loshchenov, V. B.
2006-06-01
A fast and highly informative method is presented for estimating the photodynamic activity of photosensitisers. The method makes it possible to determine the rate of photodegradation in erythrocyte-containing biological media in nearly in vivo conditions, estimate the degree of irreversible binding of oxygen dissolved in the medium during laser irradiation in the presence of photosensitisers, and determine the nature of degradation of photosensitisers exposed to light (photobleaching).
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Robert Aidoo; Fred Nimoh; John-Eudes Andivi Bakang; Kwasi Ohene-Yankyera; Simon Cudjoe Fialor; James Osei Mensah; Robert Clement Abaidoo
2012-01-01
The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency a...
Hutcheson, J P; Johnson, D E; Gerken, C L; Morgan, J B; Tatum, J D
1997-10-01
Six sets of four genetically identical Brangus steers (n = 24; mean BW 409 kg) were used to determine the effect of different anabolic implants on visceral organ mass, chemical body composition, estimated tissue deposition, and energetic efficiency. Steers within a clone set were randomly assigned to one of the following implant treatments: C, no implant; E, estrogenic; A, androgenic; or AE, androgenic + estrogenic. Steers were slaughtered 112 d after implanting; visceral organs were weighed and final body composition determined by mechanical grinding and chemical analysis of the empty body. Mass of the empty gastrointestinal tract (GIT) was reduced approximately 9% (P .10) the efficiency of ME utilization. In general, estrogenic implants decreased GIT, androgenic implants increased liver, and all implants increased hide mass. Steers implanted with an AE combination had additive effects on protein deposition compared with either implant alone. The NEg requirements for body gain are estimated to be reduced 19% by estrogenic or combination implants. PMID:9331863
Directory of Open Access Journals (Sweden)
Dina Miftahutdinova
2015-02-01
Full Text Available Purpose: to estimate the efficiency of the author's training programme used during the preparatory period by members of the Ukrainian women's rowing team in preparation for the Olympic Games in London. Materials and Methods: 10 elite sportswomen, members of the Ukrainian rowing team, participated in the research. Standard tests and the Concept-2 rowing ergometer were used to assess general and special physical preparedness. Results: by the end of the preparatory period, a significant improvement in the general and special physical fitness of the surveyed athletes was observed, and their deviation from the model performance dropped to 5–7%. Conclusions: the results testify to the high efficiency of the author's training programme for the sportswomen of the Ukrainian rowing team, who became Olympic champions in London.
Chaves, A S; Nascimento, M L; Tullio, R R; Rosa, A N; Alencar, M M; Lanna, D P
2015-10-01
The objective of this study was to examine the relationship of efficiency indices with performance, heart rate, oxygen consumption, blood parameters, and estimated heat production (EHP) in Nellore steers. Eighteen steers were individually lot-fed diets of 2.7 Mcal ME/kg DM for 84 d. Estimated heat production was determined using oxygen pulse (OP) methodology, in which heart rate (HR) was monitored for 4 consecutive days. Oxygen pulse was obtained by simultaneously measuring HR and oxygen consumption during a 10- to 15-min period. Efficiency traits studied were feed efficiency (G:F) and residual feed intake (RFI) obtained by regression of DMI in relation to ADG and midtest metabolic BW (RFI). Alternatively, RFI was also obtained based on equations reported by the NRC (1996) to estimate individual requirements and DMI (RFI calculated by the NRC [1996] equation [RFI]). The slope of the regression equation and its significance was used to evaluate the effect of efficiency indices (RFI, RFI, or G:F) on the traits studied. A mixed model was used considering RFI, RFI, or G:F and pen type as fixed effects and initial age as a covariate. For the HR and EHP variables, day was included as a random effect. There was no relationship between efficiency indices and back fat depth measured by ultrasound or daily HR and EHP (P > 0.05). Because G:F is obtained in relation to BW, the slope of G:F was positive and significant ( RFI and RFI ( RFI. Oxygen consumption per beat was not related to G:F; however, it was lower for RFI- and RFI-efficient steers, and consequently, oxygen volume (mL·min⁻¹·kg⁻¹) and OP (μL O₂·beat⁻¹·kg⁻¹) were also lower ( RFI and RFI (P > 0.05); however, G:F-efficient steers showed lower hematocrit and hemoglobin concentrations (P < 0.05). Differences in EHP between efficient and inefficient animals were not directly detected. Nevertheless, differences in oxygen consumption and OP were detected, indicating that the OP methodology may be useful to predict growth efficiency. PMID
Valid and efficient manual estimates of intracranial volume from magnetic resonance images
International Nuclear Information System (INIS)
Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
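The estimation procedure can be sketched on synthetic data where the true volume is known in closed form. Here cross-sectional areas of a sphere stand in for manually segmented intracranial areas; they are subsampled at 10 mm spacing and integrated with a cubic spline, as in the study's best-performing configuration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic "intracranial areas": cross-sections of a sphere of radius
# 80 mm, so the true volume is 4/3 * pi * r^3 exactly.
r = 80.0
z = np.arange(-r, r + 1, 1.0)                  # slice positions, 1 mm apart
area = np.pi * np.clip(r**2 - z**2, 0.0, None)
true_volume = 4.0 / 3.0 * np.pi * r**3

# Keep only every 10th slice (10 mm linear spacing), then integrate a
# cubic spline through the sparse areas to estimate the volume.
z_sub, a_sub = z[::10], area[::10]
spline = CubicSpline(z_sub, a_sub)
est_volume = spline.integrate(z_sub[0], z_sub[-1])
```

Real intracranial area profiles are less smooth than a sphere's, which is why the study reports validity degrading as the spacing grows beyond 15 mm.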
D-optimal and D-efficient equivalent-estimation second-order split-plot designs
2010-01-01
Industrial experiments often involve factors which are hard to change or costly to manipulate and thus make it impossible to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical agricultural...
Barr, J. G.; Engel, V.; Fuentes, J. D.; Fuller, D. O.; Kwon, H.
2012-11-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increases in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information
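A minimal light-use-efficiency calculation in the spirit of the model (GPP = ε · fPAR · PAR, with fPAR proxied by EVI and ε down-regulated 5% per 10 ppt of salinity, as the abstract reports) might look like the following; ε_max and all input values are invented for illustration.

```python
# Minimal LUE sketch; eps_max, evi, par and salinity are assumed values.
eps_max = 1.2      # maximal light use efficiency, g C per MJ APAR (assumed)
evi = 0.55         # MODIS EVI used as an fPAR proxy (assumed)
par = 10.0         # daily PAR, MJ m-2 d-1 (assumed)
salinity = 30.0    # porewater salinity, ppt (assumed)

# 5% reduction in light use efficiency per 10 ppt of salinity.
eps = eps_max * (1.0 - 0.05 * salinity / 10.0)
gpp = eps * evi * par   # gross primary productivity, g C m-2 d-1
```

The paper's actual model additionally makes ε decline with daily PAR and fits its parameters in a Bayesian framework; this fragment only shows the structure of the scaling.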
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
MACHARIA, Harrison; Goos, Peter
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical ...
Directory of Open Access Journals (Sweden)
Bulkina N.V.
2011-03-01
Full Text Available Results are presented of an estimation of the efficiency of general eradication therapy and local therapy in patients with inflammatory periodontal diseases against the background of chronic gastritis. The authors note a positive effect of using the "Asepta" gum balm as pathogenetic therapy, which makes it possible to achieve normalization of the level of oral hygiene and stable remission of periodontal disease against the background of gastrointestinal pathology.
Directory of Open Access Journals (Sweden)
D. O. Fuller
2012-11-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
Directory of Open Access Journals (Sweden)
Pin-Chih Wang
2014-09-01
Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, the Netherlands, the United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.
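The IPAT decomposition used in the study is a multiplicative identity that is easy to verify numerically: impact (here, material consumption) equals population times affluence (GDP per capita) times technology (material intensity of GDP). The figures below are invented and do not reproduce the Taiwanese data.

```python
# Illustrative IPAT-style decomposition; all figures are made up.
P0, gdp0, dmc0 = 22.0e6, 300e9, 250e6      # base year: population, GDP, DMC
P1, gdp1, dmc1 = 23.0e6, 480e9, 310e6      # end year

affluence0, affluence1 = gdp0 / P0, gdp1 / P1      # GDP per capita
intensity0, intensity1 = dmc0 / gdp0, dmc1 / gdp1  # material intensity

# Multiplicative identity: I1/I0 = (P1/P0) * (A1/A0) * (T1/T0)
ratio = (P1 / P0) * (affluence1 / affluence0) * (intensity1 / intensity0)
```

Decoupling shows up as the intensity ratio (T1/T0) falling below 1 even while population and affluence grow.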
Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation
Directory of Open Access Journals (Sweden)
M. Agatonović
2012-12-01
Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
International Nuclear Information System (INIS)
Collection of hot electrons generated by the efficient absorption of light in metallic nanostructures in contact with semiconductor substrates can provide a basis for the construction of solar energy-conversion devices. Herein, we evaluate theoretically the energy-conversion efficiency of systems that rely on internal photoemission processes at metal-semiconductor Schottky-barrier diodes. In this theory, the current-voltage characteristics are given by the internal photoemission yield as well as by the thermionic dark current over a varied-energy barrier height. The Fowler model, in all cases, predicts solar energy-conversion efficiencies of <1% for such systems. However, relaxation of the assumptions regarding constraints on the escape cone and momentum conservation at the interface yields solar energy-conversion efficiencies as high as 1%–10%, under some assumed (albeit optimistic) operating conditions. Under these conditions, the energy-conversion efficiency is mainly limited by the thermionic dark current, the distribution of hot electron energies, and hot-electron momentum considerations.
Energy Technology Data Exchange (ETDEWEB)
Leenheer, Andrew J.; Narang, Prineha; Atwater, Harry A., E-mail: haa@caltech.edu [Thomas J. Watson Laboratories of Applied Physics, California Institute of Technology, Pasadena, California 91125 (United States); Joint Center for Artificial Photosynthesis, Pasadena, California 91125 (United States); Lewis, Nathan S. [Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125 (United States); Joint Center for Artificial Photosynthesis, Pasadena, California 91125 (United States)
2014-04-07
Collection of hot electrons generated by the efficient absorption of light in metallic nanostructures in contact with semiconductor substrates can provide a basis for the construction of solar energy-conversion devices. Herein, we evaluate theoretically the energy-conversion efficiency of systems that rely on internal photoemission processes at metal-semiconductor Schottky-barrier diodes. In this theory, the current-voltage characteristics are given by the internal photoemission yield as well as by the thermionic dark current over a varied-energy barrier height. The Fowler model, in all cases, predicts solar energy-conversion efficiencies of <1% for such systems. However, relaxation of the assumptions regarding constraints on the escape cone and momentum conservation at the interface yields solar energy-conversion efficiencies as high as 1%–10%, under some assumed (albeit optimistic) operating conditions. Under these conditions, the energy-conversion efficiency is mainly limited by the thermionic dark current, the distribution of hot electron energies, and hot-electron momentum considerations.
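A back-of-envelope version of the Fowler-model estimate can be sketched as follows. The barrier height, Fermi energy, yield normalisation, and the use of a 5800 K blackbody in place of the real solar spectrum are all simplifying assumptions; the point is only that folding the quadratic-above-threshold Fowler yield against a solar-like spectrum lands in the sub-percent regime the abstract reports.

```python
import numpy as np
from scipy.constants import e, k

phi_B = 0.8        # Schottky barrier height [eV] (assumed)
E_F = 5.5          # metal Fermi energy [eV] (assumed, Au-like)
T_sun = 5800.0     # effective solar temperature [K]

E = np.linspace(0.05, 4.0, 4000)               # photon energy [eV]
dE = E[1] - E[0]
# Blackbody photon flux per unit energy (arbitrary normalisation).
photon_flux = E**2 / np.expm1(E * e / (k * T_sun))

# Fowler internal-photoemission yield: Y ~ (E - phi_B)^2 / (4 E_F E).
Y = np.where(E > phi_B, (E - phi_B) ** 2 / (4.0 * E_F * E), 0.0)

incident_power = np.sum(photon_flux * E) * dE
# Each emitted hot electron delivers at most phi_B of electrical energy.
collected_power = np.sum(photon_flux * Y * phi_B) * dE
eta_bound = collected_power / incident_power   # crude efficiency bound
```

This neglects the thermionic dark current entirely, so it is an upper bound on the Fowler-limited efficiency rather than an operating-point estimate.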
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
International Nuclear Information System (INIS)
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations
Energy Technology Data Exchange (ETDEWEB)
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.
2010-04-14
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al. (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include ~$18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy
Institute of Scientific and Technical Information of China (English)
QU Annie; XUE Lan
2009-01-01
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase the efficiency of estimation. However, for multivariate survival data this is extremely challenging, since the martingale residual is involved and often intractable. Fan et al. have made a significant contribution by giving a closed-form formula for the optimal weights of the estimating functions such that the asymptotic variance of the estimator is minimized. Since minimizing the variance matrix is not an easy task, several strategies are proposed, such as minimizing the total variance. The most feasible one is to use the diagonal matrix entries as the weighting scheme. We congratulate them on this important work. In the following we discuss the implementation of their method and relate our work to theirs.
Binary logistic regression to estimate household income efficiency (South Darfur rural areas, Sudan)
Directory of Open Access Journals (Sweden)
Sofian A. A. Saad
2016-03-01
Full Text Available The main objective of this study is to identify the main factors that affect the efficiency of household income in the Darfur region. The statistical technique of binary logistic regression was used to test whether five binary explanatory variables have a significant effect on the response variable (income efficiency); a sample of 136 household heads was gathered from the relevant population. The outcomes of the study showed that the level of household expenditure has a significant effect on the efficiency of income, and the size of the household also has a significant effect on the response variable; the remaining explanatory variables (household head's education level, size of household head own agricultural, and number of students at school) showed no significant effects.
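The fitting step can be sketched with a hand-rolled logistic regression on simulated data. The covariates and coefficients below are invented for illustration and only mimic the two significant factors the study reports (expenditure level and household size); the fit uses iteratively reweighted least squares, i.e. Newton's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey of 136 household heads: binary income-efficiency
# indicator explained by expenditure level and household size.
n = 136
expenditure = rng.normal(0.0, 1.0, n)
hh_size = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), expenditure, hh_size])
logits = 0.3 + 1.2 * expenditure - 0.8 * hh_size   # invented coefficients
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

# Iteratively reweighted least squares for the logistic model.
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
```

With this construction the fitted coefficients recover the signs of the simulated effects: positive for expenditure, negative for household size.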
Case study in higher school: problems of application and efficiency estimation
Directory of Open Access Journals (Sweden)
Ekimova V.I.
2015-03-01
Full Text Available Case study takes a leading position in training specialists at higher schools in the majority of foreign countries and is regarded as one of the most efficient ways of teaching students how to solve typical professional tasks. The article reviews the general principles and approaches to the organization of case study sessions for students. The most interesting and informative resources, as well as the most promising formats of case studies, are presented in the article. The review compiles findings concerning the educational efficiency and developmental potential of this method. The article outlines the prospects for extending the areas of case study application in the educational process of higher schools.
Farr, Benjamin; Luijten, Erik
2013-01-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, using it only for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements of up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference, with MCMC one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong non-linear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
Directory of Open Access Journals (Sweden)
Northcutt Sally L
2010-04-01
Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms, and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations, without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations, when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
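A genomic relationship matrix of this kind is commonly built with VanRaden's first method; the sketch below assumes 0/1/2 genotype coding and illustrative dimensions, and is not necessarily the exact construction used in the study:

```python
import numpy as np

def genomic_relationship(M):
    """VanRaden method-1 genomic relationship matrix from 0/1/2 genotypes.

    M: (n_animals, n_snps) matrix counting copies of one allele per SNP.
    """
    p = M.mean(axis=0) / 2.0                       # allele frequency per SNP
    Z = M - 2.0 * p                                # centre genotypes
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))  # scale to pedigree analogue

rng = np.random.default_rng(1)
M = rng.binomial(2, 0.3, size=(6, 500)).astype(float)  # 6 animals, 500 toy SNPs
G = genomic_relationship(M)
print(G.shape)  # (6, 6)
```

The resulting matrix is symmetric with diagonal entries near 1 for non-inbred animals, which is what lets it substitute for the pedigree-based numerator relationship matrix in the mixed model.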
Directory of Open Access Journals (Sweden)
Toly Chen
2014-08-01
Full Text Available Cycle time management plays an important role in improving the performance of a wafer fabrication factory. It starts from the estimation of the cycle time of each job in the factory. Although this topic has been widely investigated, several issues still need to be addressed, such as how to classify jobs suitable for the same estimation mechanism into the same group. In most existing methods, jobs are classified according to their attributes. However, differences between the attributes of two jobs may not be reflected in their cycle times. The bi-objective nature of classification and regression trees (CART) makes them especially suitable for tackling this problem. However, in CART, the cycle times of the jobs of a branch are all estimated with the same value, which is far from accurate. For these reasons, this study proposes a joint use of principal component analysis (PCA), CART, and back-propagation networks (BPN), in which PCA is applied to construct a series of linear combinations of the original variables to form new variables that are as unrelated to each other as possible. According to the new variables, jobs are classified using CART before estimating their cycle times with BPNs. A real case was used to evaluate the effectiveness of the proposed methodology. The experimental results supported the superiority of the proposed methodology over some existing methods. In addition, the managerial implications of the proposed methodology are discussed with an example.
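The three-stage pipeline can be sketched with scikit-learn stand-ins (an MLPRegressor in place of the BPN); the data, dimensions, and hyperparameters are illustrative, not those of the wafer-fabrication case:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
F = rng.normal(size=(400, 3))                              # latent job factors
X = np.hstack([F, F]) + 0.05 * rng.normal(size=(400, 6))   # correlated "attributes"
y = 3.0 * F[:, 0] + np.sin(F[:, 1]) + 0.1 * rng.normal(size=400)  # "cycle time"

# Step 1 - PCA: new variables as unrelated to each other as possible.
Z = PCA(n_components=4).fit_transform(X)

# Step 2 - CART: classify jobs into groups (the tree's leaves).
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(Z, y)
leaf = tree.apply(Z)

# Step 3 - one network per group replaces the constant leaf estimate.
nets = {lf: MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
        .fit(Z[leaf == lf], y[leaf == lf]) for lf in np.unique(leaf)}
pred = np.array([nets[lf].predict(z[None])[0] for lf, z in zip(leaf, Z)])
mse = float(np.mean((pred - y) ** 2))
```

The tree contributes only the grouping; each group's network replaces the single constant estimate a plain CART would assign to its leaf.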
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed, in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
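A minimal 1-D sketch of the integer-pixel correlation search followed by the parabola peak-finder (synthetic Gaussian scenes, not solar data) may make the procedure concrete:

```python
import numpy as np

def parabolic_subpixel_shift(ref, img):
    """Integer shift from the cross-correlation peak, refined to sub-pixel
    precision by fitting a parabola through the peak and its two neighbours."""
    n = len(ref)
    corr = np.correlate(img, ref, mode="full")   # 1-D cross-correlation
    k = int(np.argmax(corr))
    shift = k - (n - 1)                          # integer-pixel shift
    if 0 < k < len(corr) - 1:                    # 3-point parabola refinement
        ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
        denom = ym - 2 * y0 + yp
        if denom != 0:
            shift += 0.5 * (ym - yp) / denom     # vertex of the fitted parabola
    return shift

x = np.arange(64, dtype=float)
ref = np.exp(-((x - 30.0) ** 2) / 18.0)          # synthetic "scene" blob
img = np.exp(-((x - 32.4) ** 2) / 18.0)          # same blob shifted by 2.4 px
print(round(parabolic_subpixel_shift(ref, img), 1))  # ≈ 2.4
```

Because the parabola only approximates the true correlation peak shape, a small bias towards the integer pixel remains; this residual is the systematic (pixel-locking) error studied above.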
An estimation of the Ukrainian Polissya efficiency for the radioactive level in plant products
International Nuclear Information System (INIS)
It was shown that addition of sapropel to a soil in various doses reduces radiocaesium transfer to the yield of beet root and lupine, by 4 and 5 times respectively, in comparison with the control. The sapropel has a positive aftereffect. The efficiency of the sapropel is higher on soils depleted in mineral nutrient elements and humus
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
DEFF Research Database (Denmark)
Madsen, U.; Aubertin, G.; Breum, N. O.; Fontaine, J. R.; Nielsen, Peter V.
Numerical modelling of direct capture efficiency of a local exhaust is used to compare the tracer gas technique of a proposed CEN standard against a more consistent approach based on an imaginary control box. It is concluded that the tracer gas technique is useful for field applications....
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
Energy Technology Data Exchange (ETDEWEB)
Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties in the tagging efficiency of pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole-brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The DeltaM map was calculated by averaging the subtraction of tag/control pairs in pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable DeltaM value. Taking the DeltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, within 10% of the value from 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
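The calibration itself reduces to a ratio of mean gray-matter CBF values; with made-up numbers (not the study's measurements):

```python
import numpy as np

# Hypothetical gray-matter CBF values (ml/100g/min) from the two sequences.
cbf_pcasl = np.array([38.0, 42.5, 40.1, 36.8])   # pCASL, computed assuming alpha = 1
cbf_fair = np.array([45.2, 49.8, 47.5, 44.0])    # FAIR-QUIPSSII reference

alpha = cbf_pcasl.mean() / cbf_fair.mean()       # estimated tagging efficiency
cbf_corrected = cbf_pcasl / alpha                # recalibrated pCASL CBF
print(round(float(alpha), 2))                    # ≈ 0.84
```

Dividing the pCASL values by the estimated efficiency brings their mean into agreement with the reference scan, which is the point of the calibration.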
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
Park, Timothy A.; Loomis, John B.
1992-01-01
This paper empirically tested the three conditions identified by McConnell for equivalence of the linear utility difference model and the valuation function approach to dichotomous choice contingent valuation. Using a contingent valuation survey for deer hunting in California, two of the three conditions were violated. Even though the models are not simple linear transforms of each other for this survey, estimates of mean willingness to pay and their associated 95% confidence intervals around...
Efficient Quantum State Estimation by Continuous Weak Measurement and Dynamical Control
Smith, Greg A; Silberfarb, Andrew; Deutsch, Ivan H.; Jessen, Poul S.
2006-01-01
We demonstrate a fast, robust and non-destructive protocol for quantum state estimation based on continuous weak measurement in the presence of a controlled dynamical evolution. Our experiment uses optically probed atomic spins as a testbed, and successfully reconstructs a range of trial states with fidelities of ~90%. The procedure holds promise as a practical diagnostic tool for the study of complex quantum dynamics, the testing of quantum hardware, and as a starting point for new types of ...
Kovtun, Yu V; Skibenko, A I; Yuferov, V B
2012-01-01
The processes of injection of a sputtered-and-ionized working material into the pulsed reflex discharge plasma have been considered at the initial stage of dense gas-metal plasma formation. A calculation model has been proposed to estimate the parameters of the sputtering mechanism for the required working material to be injected into the discharge. The data obtained are in good agreement with experimental results.
Popkov V.M.; Fomkin R.N.; Blyumberg B.I.
2013-01-01
Research objective: To study the role of prognostic factors in estimating the risk of development of recurrent prostate cancer after treatment by high-intensity focused ultrasound (HIFU). Objects and Research Methods: The research included 102 patients with biopsy-confirmed localized prostate cancer, treated at the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by ...
Egizio, Victoria B.; Eddy, Michael; Robinson, Matthew; Jennings, J. Richard
2010-01-01
Researchers are interested in respiratory sinus arrhythmia (RSA) as an index of cardiac vagal activity. Yet, debate exists about how to account for respiratory influences on quantitative indices of RSA. Ritz and colleagues (2001) developed a within-individual correction procedure by which the effects of respiration on RSA may be estimated using regression models. We replicated their procedure substituting a spectral high-frequency measure of RSA for a time-domain statistic and a respiratory b...
Efficient parameter estimation in 2D transport models based on an adjoint formalism
International Nuclear Information System (INIS)
An adjoint based optimization procedure is elaborated to estimate transport coefficients for plasma edge models based on a limited set of known profiles at different locations. It is shown that a set of adjoint equations can accurately determine all sensitivities towards transport coefficients at once. A proof of principle is provided on a simple geometry. The methodology is subsequently applied to assess whether a simple edge model can be tuned toward full B2-EIRENE profiles for a JET-configuration. (paper)
The efficiency of different estimation methods of hydro-physical limits
Emma María Martínez; Tomas Serafín Cuesta; Javier José Cancela
2012-01-01
The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitabi...
An Efficient Deconvolution Algorithm for Estimating Oxygen Consumption During Muscle Activities
Dash, Ranjan K.; Somersalo, Erkki; Cabrera, Marco E; Calvetti, Daniela
2007-01-01
The reconstruction of an unknown input function from noisy measurements in a biological system is an ill-posed inverse problem. Any computational algorithm for its solution must use some kind of regularization technique to neutralize the disastrous effects of amplified noise components on the computed solution. In this paper, following a hierarchical Bayesian statistical inversion approach, we seek estimates for the input function and regularization parameter (hyperparameter) that maximize th...
Tamošiūnas, M.; Jakovels, D.; Lihačovs, A.; Kilikevičius, A.; Baltušnikas, J.; Kadikis, R.; Šatkauskas, S.
2014-10-01
Electroporation and ultrasound-induced sonoporation have been shown to induce plasmid DNA transfection in the mouse tibialis cranialis muscle. This offers new prospects for gene therapy and cancer treatment. However, numerous experimental data are still needed to deliver a plausible explanation of the mechanisms governing DNA electro- or sono-transfection, as well as to provide updates on transfection protocols that increase transfection efficiency. In this study we aimed to apply non-invasive optical diagnostic methods for the real-time evaluation of GFP transfection levels at reduced costs for experimental apparatus and animal consumption. Our experimental set-up allowed monitoring of GFP levels in live mouse tibialis cranialis muscle and provided the parameters for determining DNA transfection efficiency.
Estimation of Power Efficiency of Combined Heat Pumping Stations in Heat Power Supply Systems
I. I. Matsko
2010-01-01
The paper considers realization of the advantages of heat pumping technologies in heat power generation for heat supply needs, on the basis of combining electric-drive heat pumping units with water-heating boilers as part of a combined heat pumping station. The possibility of saving non-renewable energy resources by using combined heat pumping stations instead of water-heating boiler houses is shown in the paper. The calculation methodology for power efficiency for introduction of combine...
McGuire, Kimberly; de Croon, Guido; de Wagter, Christophe; Remes, Bart; Tuyls, Karl; Kappen, Hilbert
2016-01-01
Autonomous flight of pocket drones is challenging due to the severe limitations on on-board energy, sensing, and processing power. However, tiny drones have great potential as their small size allows maneuvering through narrow spaces while their small weight provides significant safety advantages. This paper presents a computationally efficient algorithm for determining optical flow, which can be run on an STM32F4 microprocessor (168 MHz) of a 4 gram stereo-camera. The optical flow algorithm ...
SiC modular multilevel converters: sub-module voltage ripple analysis and efficiency estimations
Perez Basante, Angel; Pou Félix, Josep; Ceballos Recio, Salvador; Gil De Muro, Asier; Pujana, Ainhoa; Ibañez, Pedro
2014-01-01
Two important technical challenges associated with the Modular Multilevel Converter (MMC) are the reduction of the voltage ripple of the Sub-Module (SM) capacitors and the reduction of the converter losses. This paper conducts a study focused on these two topics. Firstly, the effect of a circulating current with a predefined second harmonic on the SM voltage ripple is assessed. Secondly, an efficiency study for two different MMCs, one with silicon (Si) and the other with silicon carbide (SiC)...
A. Rosati; DeJong, T M
2003-01-01
It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, ‘daily’ photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthes...
Efficient estimation in the bivariate normal copula model: normal margins are least favourable
Klaassen, Chris A. J.; Wellner, Jon A.
1997-01-01
Consider semi-parametric bivariate copula models in which the family of copula functions is parametrized by a Euclidean parameter θ of interest and in which the two unknown marginal distributions are the (infinite-dimensional) nuisance parameters. The efficient score for θ can be characterized in terms of the solutions of two coupled Sturm-Liouville equations. When the family of copula functions corresponds to the normal distributions with mean 0, variance 1 and correlation θ, the solution of ...
Simoncini, David; Zhang, Kam Y. J.
2013-01-01
Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex e...
Massive MIMO Systems With Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits
Bjornson, Emil; Hoydis, Jakob; Kountouris, Marios; Debbah, Merouane
2014-01-01
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little inter-user interference. Since these results rely on asymptotics, it is...
International Nuclear Information System (INIS)
This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in “best practice” for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some “catching up” of poor performing plants over time. - Highlights: • A non-public dataset of U.S. auto manufacturing plants is compiled. • A stochastic frontier with a gamma distribution is applied to plant level data. • Electricity and fuel use are modeled separately. • Comparison to prior analysis reveals a shift in the frontier and “catching up”. • Results are used by ENERGY STAR to award energy efficiency plant certifications
Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin R.; Henderson, Stephen M.
2016-09-01
Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m2 quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In the remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing in greater detail the 3D structure, photogrammetry has potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.
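Step 3 can be approximated by a much simpler stand-in for the 'sector-slice' algorithm: bin the point cloud into 5 mm vertical slices and estimate one diameter per slice from the mean radial distance to the slice centroid (synthetic single-stem data below; the published algorithm is more elaborate):

```python
import numpy as np

def stem_diameters(points, dz=0.005):
    """Per-slice diameter estimates for a single-stem point cloud.

    points: (n, 3) array of x, y, z coordinates in metres.
    Returns a list of (slice_base_z, diameter) pairs, one per dz-thick bin.
    """
    z = points[:, 2]
    out = []
    for z0 in np.arange(z.min(), z.max(), dz):
        sl = points[(z >= z0) & (z < z0 + dz)]
        if len(sl) < 10:                 # skip sparsely sampled slices
            continue
        xy = sl[:, :2]
        centre = xy.mean(axis=0)
        r = np.linalg.norm(xy - centre, axis=1).mean()  # mean radial distance
        out.append((float(z0), 2.0 * float(r)))
    return out

# Synthetic pneumatophore: a 1 cm diameter cylinder, 0.1 m tall.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 4000)
zs = rng.uniform(0, 0.1, 4000)
pts = np.column_stack([0.005 * np.cos(theta), 0.005 * np.sin(theta), zs])
d = float(np.mean([dia for _, dia in stem_diameters(pts)]))
print(round(d, 3))  # ≈ 0.01 m
```

A real implementation would also have to separate multiple stems per slice and reject outlier points, which is where the sector-based fitting of the paper comes in.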
Kato, M.; Hachisu, I.
1999-01-01
We have calculated the mass accumulation efficiency during helium shell flashes to examine whether or not a carbon-oxygen white dwarf (C+O WD) grows up to the Chandrasekhar mass limit to ignite a Type Ia supernova explosion. It has been frequently argued that luminous super-soft X-ray sources and symbiotic stars are progenitors of SNe Ia. In such systems, a C+O WD accretes hydrogen-rich matter from a companion and burns hydrogen steadily on its surface. The WD develops a helium layer undernea...
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin; Noé, Frank
2013-04-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may improve the convergence of expectation values of equilibrium probabilities and expectation values of stationary quantities significantly. Unfortunately the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo sampling method for reversible transition matrices with fixed stationary distribution are given. The sampling approach is applied to a toy example as well as to simulations of the MR121-GSGS-W peptide, and is demonstrated to converge much more rapidly than a previous approach of Noé [J. Chem. Phys. 128, 244103 (2008), 10.1063/1.2916718].
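The role of the fixed stationary distribution can be illustrated with the classical Metropolis construction, which produces one reversible transition matrix with a prescribed stationary vector (the sampling method in the paper explores the whole set of such matrices; this is only the simplest member):

```python
import numpy as np

def metropolis_chain(pi):
    """Reversible transition matrix with stationary distribution pi,
    built with the Metropolis rule over a uniform proposal."""
    n = len(pi)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # Propose j uniformly, accept with probability min(1, pi_j/pi_i).
                T[i, j] = min(1.0, pi[j] / pi[i]) / n
        T[i, i] = 1.0 - T[i].sum()          # rejected proposals stay at i
    return T

pi = np.array([0.5, 0.3, 0.2])
T = metropolis_chain(pi)
print(bool(np.allclose(pi @ T, pi)))        # True: pi is stationary
```

Detailed balance holds by construction, since pi_i T_ij = min(pi_i, pi_j)/n is symmetric in i and j; estimators constrained to matrices of this reversible class are what the estimation methods above exploit.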
An efficient Bandwidth Demand Estimation for Delay Reduction in IEEE 802.16j MMR WiMAX Networks
Directory of Open Access Journals (Sweden)
Fath Elrahman Ismael
2010-01-01
Full Text Available IEEE 802.16j MMR WiMAX networks allow the number of hops between the user and the MMR-BS to be more than two. The standard bandwidth request procedure in WiMAX networks introduces much delay to the user data and acknowledgement of TCP packets, which affects the performance and throughput of the network. In this paper, we propose a new scheduling scheme to reduce the bandwidth request delay in MMR networks. In this scheme, the MMR-BS allocates bandwidth to its direct subordinate RSs without a bandwidth request, using the Grey prediction algorithm to estimate the required bandwidth of each of its subordinate RSs. Using this architecture, the access RS can allocate its subordinate MSs the required bandwidth without notification to the MMR-BS. Our scheduling architecture with efficient bandwidth demand estimation is able to reduce delay significantly.
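The Grey prediction referred to is typically the GM(1,1) model; a compact sketch on made-up per-frame bandwidth figures (not values from the paper):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model: fit on a short history x0 and forecast demand."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters

    def x1_hat(k):                                       # fitted accumulated value
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    return [float(x1_hat(n + i) - x1_hat(n + i - 1)) for i in range(steps)]

# Hypothetical per-frame bandwidth demand of a relay station (kbps):
history = [120.0, 130.0, 142.0, 155.0, 169.0]
forecast = gm11_forecast(history)[0]   # next-frame demand estimate
```

The model fits an exponential trend to the accumulated series, which suits the short, smoothly varying demand histories an MMR-BS would observe from each subordinate RS.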
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
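The context-tree weighting technique mentioned above mixes, over tree nodes, sequential probability estimates; the classic per-node estimate is the Krichevsky-Trofimov (KT) estimator. A minimal sketch of that building block (not the authors' full CABAC modification):

```python
def kt_probability(zeros, ones):
    """Krichevsky-Trofimov sequential estimate used at each node of a
    context tree: P(next bit = 1) after observing the given counts."""
    return (ones + 0.5) / (zeros + ones + 1.0)

# before any observation the estimate is uninformative ...
assert kt_probability(0, 0) == 0.5
# ... and it adapts toward the observed bias as counts accumulate
assert kt_probability(2, 8) == 8.5 / 11.0
```

A more accurate probability fed to the binary arithmetic coder translates directly into the bitrate savings reported above, at the price of maintaining the context tree on the decoder side.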
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision inherently embedded in expert-provided information. To solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert
Directory of Open Access Journals (Sweden)
Korniyenko S.V.
2011-12-01
Full Text Available One of the priority directions in modern building is ensuring the energy efficiency of buildings and structures. This problem can be addressed by improving architectural, structural, and technical decisions. Of particular interest is the estimation of the influence of the temperature and moisture regime of enclosing structures on the thermal performance and energy efficiency of buildings. The analysis of the data available in the literature has shown the absence of effective methods for calculating the temperature and moisture regime in edge zones of enclosing structures, which complicates the solution of this problem. The purpose of the given work is an estimation of the influence of edge zones on the thermal performance and energy efficiency of buildings. A design procedure for the energy parameters of a building over the heating period has been developed and implemented in a computer program. The given technique allows calculation of the energy consumption for heating, hot water supply, and electricity. Heating energy consumption includes conduction heat losses through the building envelope taking edge zones into account, ventilation and air-leakage (infiltration) heat losses, internal household heat emissions, and heat gains from solar radiation. An example shows that accounting for edge zones increases conduction heat losses through the building envelope by 37%, the consumption of thermal energy for heating by 32%, and the combined consumption of thermal and electric energy by 13%. Consequently, the temperature and moisture regime in edge zones of enclosing structures has an essential impact on building power consumption. Improvement of the structural design reduces transmission heat losses through the building envelope by 29%, the consumption of thermal energy for heating by 25%, and the combined consumption of thermal and electric energy by 10%. Thus, improvement of edge zones of enclosing structures has a high potential for energy efficiency.
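The 37% edge-zone effect can be illustrated with a toy seasonal conduction-loss calculation. All numbers below (U-value, area, degree-hours) are hypothetical; the paper's actual procedure also covers ventilation, infiltration, internal emissions, and solar gains.

```python
def transmission_loss_kwh(u_value, area_m2, degree_hours, edge_factor=1.0):
    """Seasonal conduction loss through an envelope element:
    Q = U * A * (degree-hours), optionally scaled by an edge-zone
    factor accounting for thermal bridges.  Result in kWh."""
    return u_value * area_m2 * degree_hours * edge_factor / 1000.0  # Wh -> kWh

# hypothetical envelope: U = 0.3 W/(m2*K), 400 m2, 80 000 K*h per season
base = transmission_loss_kwh(0.3, 400.0, 80_000)
with_edges = transmission_loss_kwh(0.3, 400.0, 80_000, edge_factor=1.37)
assert round(base) == 9600
assert abs((with_edges - base) / base - 0.37) < 1e-9
```

An edge factor of 1.37 reproduces the 37% increase quoted above; the paper derives that factor from the two-dimensional temperature and moisture field in the edge zones rather than assuming it.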
Estimation of spray system efficiency under loss-of-coolant severe accident conditions
International Nuclear Information System (INIS)
The results of an analysis of a pressurizer surge line double-ended break accident with failure of the ECCS at the Armenian NPP are presented. Based on the analysis results, the efficiency of the spray system in decreasing confinement pressure and the amount of radioactive material released is assessed. Hydrogen behavior in the confinement is analyzed, and the occurrence of conditions for possible hydrogen burning in the confinement is assessed as well. The likelihood of the accident is in the range of 10^-7; however, for accident analysis purposes such accidents need to be taken into account. The analysis shows that the main contributor to release reduction is the spray system availability factor. Unavailability of the spray system could lead to an increase of the radioactive release by a factor of 8.
Estimation of Power Efficiency of Combined Heat Pumping Stations in Heat Power Supply Systems
Directory of Open Access Journals (Sweden)
I. I. Matsko
2014-07-01
Full Text Available The paper considers how the advantages of heat pumping technologies can be realized in heat generation for heat supply needs by combining electric-drive heat pumping units with water-heating boilers as part of a combined heat pumping station. The possibility of saving non-renewable energy resources by using combined heat pumping stations instead of water-heating boiler houses is shown. A methodology for calculating the power efficiency of introducing combined heat pumping stations has been developed. The calculations take into account seasonal heat demand depending on the heating system temperature schedule, the temperature of the low-potential heat source, and regional weather parameters.
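A back-of-the-envelope primary-energy comparison between a fuel-fired boiler and an electric-drive heat pump shows when the combined station saves non-renewable fuel. The efficiencies below are assumed for illustration only, not taken from the paper.

```python
def fuel_energy_saved(heat_demand_kwh, cop, boiler_eff=0.9, grid_eff=0.4):
    """Primary (fuel) energy saved by covering a heat demand with an
    electric heat pump of the given COP instead of a boiler, where
    grid_eff is the fuel-to-electricity efficiency of the grid."""
    boiler_fuel = heat_demand_kwh / boiler_eff
    hp_fuel = heat_demand_kwh / cop / grid_eff
    return boiler_fuel - hp_fuel

# with COP 3.5 the heat pump saves primary energy ...
assert fuel_energy_saved(10_000, cop=3.5) > 0
# ... but below the break-even COP of boiler_eff/grid_eff = 2.25 it does not
assert fuel_energy_saved(10_000, cop=2.0) < 0
```

This break-even condition is why the station combines both technologies: the boiler covers the coldest periods, when the effective COP drops.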
An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation
Directory of Open Access Journals (Sweden)
Mingwei Shen
2014-09-01
Full Text Available In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that can discriminate between moving targets and clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results.
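Finding the sparsest coefficients that explain the data in each Doppler bin is a sparse recovery problem. Orthogonal matching pursuit is one standard solver for such problems; the sketch below is an illustrative stand-in, as the abstract does not specify the exact solver used.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse solution
    of y = Phi x by repeatedly picking the atom best matching the
    residual and re-fitting on the selected support."""
    support, r = [], y.astype(float).copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))   # best atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef                       # new residual
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
Phi, _ = np.linalg.qr(rng.standard_normal((32, 32)))  # orthonormal dictionary
x_true = np.zeros(32)
x_true[[5, 20]] = [2.0, -1.0]                          # 2-sparse spectrum
x_hat = omp(Phi, Phi @ x_true, k=2)
assert np.allclose(x_hat, x_true)
```

In the STAP setting, the columns of the dictionary would be angle steering vectors and the recovered support localises clutter and target energy in the angle domain.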
Directory of Open Access Journals (Sweden)
Latyshev N.V.
2012-03-01
Full Text Available Purpose of the work: to experimentally verify the efficiency of a method for developing the special endurance of athletes using control-trainer devices. The experiment involved 24 athletes aged 16-17 years. Significant differences were found between the groups of athletes on indices in tests of special physical preparation (heat round hands and passage-way in feet), in a test of special endurance (on all test indices except the number of exercises executed in the first period), and during work on the control-trainer device (work on the trainer for 60 seconds and work on the trainer for 3×120 seconds).
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. The low-rank filters are demonstrated to reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.
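For reference, the step the low-rank filters accelerate is the standard Kalman measurement update. A minimal dense-matrix sketch is below; the SEKF/SFKF variants keep the covariance in factored form P = L Lᵀ with r ≪ n columns, bringing the per-step cost down from roughly O(n²) to O(n r).

```python
import numpy as np

def kf_update(x, P, H, R, y):
    """Standard Kalman measurement update for state estimate x with
    covariance P, observation operator H, noise covariance R, data y."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (y - H @ x)          # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x_new, P_new

# observe only the first of two state components (toy example)
x = np.zeros(2); P = np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[1.0]])
x_new, P_new = kf_update(x, P, H, R, np.array([2.0]))
assert np.allclose(x_new, [1.0, 0.0])
assert np.allclose(P_new, [[0.5, 0.0], [0.0, 1.0]])
```

In a discretized aquifer model n is the number of grid cells, which is what makes the full P intractable and the low-rank factorization attractive.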
Institute of Scientific and Technical Information of China (English)
F.Y.Wu; Y.H.Zhou; F.Tong; R.Kastner
2013-01-01
Underwater acoustic channels are recognized as one of the most difficult propagation media due to considerable challenges such as multipath, ambient noise, and time-frequency selective fading. The exploitation of the sparsity of underwater acoustic channels provides a potential way to improve the performance of underwater acoustic channel estimation. Compared with the classic l0 and l1 norm constraint LMS algorithms, the p-norm-like (lp) constraint LMS algorithm proposed in our previous investigation exhibits better sparsity exploitation performance in the presence of channel variations, as it adapts to the sparseness through tuning of the p parameter. However, the decimal exponential calculation associated with the p-norm-like constraint LMS algorithm poses considerable limitations in practical application. In this paper, a simplified variant of the p-norm-like constraint LMS is proposed that employs the Newton iteration method to approximate the decimal exponential calculation. Numerical simulations and experimental results obtained in physical shallow water channels demonstrate the effectiveness of the proposed method compared to traditional norm constraint LMS algorithms.
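As a simplified illustration of sparsity-aware adaptive filtering, the sketch below implements the zero-attracting (l1) LMS, a simpler relative of the p-norm-like constraint discussed above: a sign-based shrinkage term pulls small taps toward zero. The channel and parameters are hypothetical.

```python
import numpy as np

def za_lms(x, d, taps, mu=0.01, rho=1e-4):
    """Zero-attracting (l1-regularised) LMS channel estimator."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # regressor [x_n ... x_{n-taps+1}]
        e = d[n] - w @ u                     # a-priori estimation error
        w += mu * e * u - rho * np.sign(w)   # LMS step plus zero attractor
    return w

rng = np.random.default_rng(0)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]                      # sparse test channel (hypothetical)
x = rng.standard_normal(4000)                # transmitted probe signal
d = np.convolve(x, h)[:len(x)]               # noise-free channel output
w = za_lms(x, d, taps=16)
assert abs(w[2] - 1.0) < 0.1 and abs(w[9] + 0.5) < 0.1
```

The lp variant replaces the fixed sign-attractor with a p-dependent shrinkage, whose decimal exponential is what the paper approximates by Newton iteration.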
A simplified model of natural and mechanical removal to estimate cleanup equipment efficiency
Energy Technology Data Exchange (ETDEWEB)
Lehr, W. [National Oceanic and Atmospheric Administration, Seattle, WA (United States)
2001-07-01
Oil spill response organizations rely on modelling to make decisions in offshore response operations. Models are used to test different cleanup strategies and to measure the expected cost of cleanup and the reduction in environmental impact. The oil spill response community has traditionally used the concept of a worst-case scenario in developing contingency plans for spill response. However, there are many drawbacks to this approach. The Hazardous Materials Response Division of the National Oceanic and Atmospheric Administration, in cooperation with the U.S. Navy Supervisor of Salvage and Diving, has developed a Trajectory Analysis Planner (TAP) which gives planners the tool to try out different cleanup strategies and equipment configurations based upon historical wind and current conditions instead of worst-case scenarios. The spill trajectory model is a classic example in oil spill modelling of using advanced non-linear three-dimensional hydrodynamical sub-models to estimate surface currents under conditions where oceanographic initial conditions are not accurately known and forecasts of wind stress are unreliable. In order to get better answers, it is often necessary to refine input values rather than increase the sophistication of the hydrodynamics. This paper describes another spill example where the level of complexity of the algorithms needs to be evaluated with regard to the reliability of the input, the sensitivity of the answers to input and model parameters, and the comparative reliability of other algorithms in the model. 9 refs., 1 fig.
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Directory of Open Access Journals (Sweden)
Jeyarajan Thiyagalingam
2011-01-01
Full Text Available Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution and dimensionality of images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time experience. We also show how architectural peculiarities of these devices can best be exploited to the benefit of the algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.
Efficient architecture for global elimination algorithm for H.264 motion estimation
Indian Academy of Sciences (India)
P Muralidhar; C B Ramarao
2016-01-01
This paper presents a fast block-matching motion estimation algorithm and its architecture. The proposed architecture is based on the Global Elimination (GE) algorithm, which uses pixel averaging to reduce the complexity of motion search while keeping performance close to that of full search. GE uses a preprocessing stage which can skip unnecessary Sum of Absolute Differences (SAD) calculations by comparing the minimum SAD with a sub-sampled SAD (SSAD). In the second stage, SAD is computed at roughly matched candidate positions. The GE algorithm uses fixed sub-block sizes and shapes to compute SSAD values in the preprocessing stage. The complexity of the GE algorithm is further reduced by adaptively changing the sub-block sizes depending on the macroblock features. In this paper, an adaptive Global Elimination algorithm has been implemented which reduces the computational complexity of the motion estimation algorithm and thus results in low power dissipation. The proposed architecture achieves 60% fewer computations compared to the existing full search architecture and 50% higher throughput compared to the existing fixed Global Elimination architecture.
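The two-stage idea (a cheap sub-sampled SAD over block averages screens all candidates, and the full SAD is computed only for the best survivors) can be sketched as follows. This uses fixed 4×4 sub-blocks, as in the non-adaptive GE; frame sizes and the candidate shortlist length are hypothetical.

```python
import numpy as np

def global_elimination(cur, ref, by, bx, search, block=16, sub=4):
    """Two-stage block matching: SSAD screening, then full SAD on the
    top candidates.  Returns the motion vector (dy, dx)."""
    blk = cur[by:by + block, bx:bx + block]
    t = block // sub
    means = lambda b: b.reshape(sub, t, sub, t).mean(axis=(1, 3))
    blk_means = means(blk)
    cands = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - block and 0 <= x <= ref.shape[1] - block:
                ssad = np.abs(blk_means - means(ref[y:y + block, x:x + block])).sum()
                cands.append((ssad, dy, dx))
    cands.sort()                                   # stage 1: rank by cheap SSAD
    full_sad = lambda dy, dx: np.abs(
        blk - ref[by + dy:by + dy + block, bx + dx:bx + dx + block]).sum()
    best = min(cands[:8], key=lambda c: full_sad(c[1], c[2]))  # stage 2
    return best[1], best[2]

rng = np.random.default_rng(1)
ref = rng.integers(0, 255, (64, 64)).astype(float)
cur = np.roll(ref, (3, -2), axis=(0, 1))  # current frame shifted by (3, -2)
mv = global_elimination(cur, ref, 24, 24, search=4)
assert mv == (-3, 2)
```

The adaptive variant in the paper replaces the fixed 4×4 averaging grid with sub-block sizes chosen from macroblock features, which is what the hardware architecture exploits.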
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to real-time applications due to the variability of eye images; hence, to date, such methods require manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned or fixed parameter methods. Importantly, the method is convenient and robust for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future. PMID: 24691198
Directory of Open Access Journals (Sweden)
Popkov V.M.
2013-03-01
Full Text Available Research objective: to study the role of prognostic factors in estimating the risk of recurrent prostate cancer after treatment by high-intensity focused ultrasound (HIFU). Objects and research methods: the research included 102 patients with localized prostate cancer revealed morphologically by biopsy, treated in the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by the HIFU method were performed. The general group of patients (n=102) was subdivided by random distribution into two samples: a group of patients with no recurrent tumor and a group of patients with a recurrent tumor revealed by morphological examination of biopsy material of residual prostate tissue after HIFU. A computer program was used to study the prognostic signs of outcome in patients with prostate cancer. Results: the risk of development of recurrent prostate cancer grew with raised PSA level and PSA density. An index of positive biopsy columns <0.2 was associated with recurrence of prostate cancer in 17% of cases, while an index of 0.5 and higher was associated with recurrence in 59% of cases. A tendency toward obvious growth in the number of relapses was revealed with a raised Gleason score in the presence of perineural invasion. Cases of recurrent prostate cancer were predominant in patients with lymphovascular invasion. In conclusion, the main prognostic signs of recurrent prostate cancer development may include: PSA, PSA density, Gleason score, lymphovascular invasion, and perineural invasion.
International Nuclear Information System (INIS)
A method is proposed for estimating the potential efficiency which can be achieved in an initially unbalanced multijunction solar cell by the mutual convergence of photogenerated currents: extracting current from a relatively narrow-band-gap cell and adding it to a relatively wide-gap cell. It is already known that the properties facilitating relative convergence are inherent to such objects as bound excitons, quantum dots, donor-acceptor pairs, and others located in relatively wide-gap cells. In effect, the proposed method reduces to the problem of obtaining the required light current-voltage (I–V) characteristic that corresponds to the equality of all photogenerated short-circuit currents. Two methods for obtaining the required light I–V characteristic are used. The first is selection of the spectral composition of the radiation incident on the multijunction solar cell from an illuminator. The second is a double shift of the dark I–V characteristic: a current shift Jg (the common set photogenerated current) and a voltage shift (−JgRs), where Rs is the series resistance. For the light and dark I–V characteristics, a general analytical expression is derived which considers the effect of so-called luminescence coupling in multijunction solar cells. The experimental I–V characteristics are compared with the calculated ones for a three-junction InGaP/GaAs/Ge solar cell with Rs = 0.019 Ω cm2 and a maximum actual efficiency of 36.9%. Its maximum potential efficiency is estimated as 41.2%.
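The 'double shift' construction can be written directly: shift the dark curve by the common photogenerated current Jg and by the series-resistance voltage drop −Jg·Rs. Only Rs = 0.019 Ω cm² is taken from the text; the ideal-diode dark curve below is assumed for illustration.

```python
import numpy as np

def light_iv_from_dark(v_dark, j_dark, jg, rs):
    """Double shift of the dark I-V characteristic: current shift by Jg
    and voltage shift by -Jg*Rs (Rs = series resistance)."""
    return v_dark - jg * rs, j_dark - jg

# hypothetical ideal-diode dark curve (j0 and thermal voltage assumed)
v = np.linspace(0.0, 0.8, 81)                   # volts
j_dark = 1e-9 * (np.exp(v / 0.025) - 1.0)       # A/cm^2
v_light, j_light = light_iv_from_dark(v, j_dark, jg=0.03, rs=0.019)

# with this sign convention, the zero crossing of the shifted current
# plays the role of the open-circuit voltage
voc = v_light[np.argmin(np.abs(j_light))]
assert abs(voc - 0.43) < 0.02
```

Equalising the Jg values of the subcells, by spectral selection or by the current redistribution described above, is what moves the cell toward its potential efficiency.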
Qiu, Bingwen; Feng, Min; Tang, Zhenghong
2016-05-01
This study proposes a simple smoother based on the continuous wavelet transform (SCWT) that requires no local adjustments, and evaluates its performance in phenological estimation together with other commonly applied techniques. The noise reduction methods compared included the Savitzky-Golay filter (SG), the Double Logistic function (DL), the Asymmetric Gaussian function (AG), the Whittaker Smoother (WS), and Harmonic Analysis of Time-Series (HANTS). They were evaluated based on fidelity and smoothness, and on their efficiency in deriving phenological parameters through the inflexion point-based method with the 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) in 2013 in China. The following conclusions were drawn: (1) The SG method exhibited strong fidelity, but weak smoothness and spatial continuity. (2) The HANTS method had very robust smoothness but weak fidelity. (3) The AG and DL methods performed weakly for vegetation with more than one growth cycle (i.e., multiple crops). (4) The WS and SCWT smoothers outperformed the others on combined considerations of fidelity and smoothness, with consistent phenological patterns (correlation coefficients greater than 0.8 except for evergreen broadleaf forests, 0.68). (5) Compared with the WS method, the SCWT smoother was capable of preserving real local minima and maxima with fewer inflexions. (6) Large discrepancies were found in the phenological dates estimated with the SG and HANTS methods, particularly in evergreen forests and multiple cropping regions (absolute mean deviation rates of 6.2-17.5 days and correlation coefficients less than 0.34 for estimated start dates).
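Of the compared smoothers, the Whittaker Smoother is compact enough to sketch: it minimises the fidelity term |y − z|² plus λ times the roughness |Dz|², where D is a difference operator, by solving (I + λ DᵀD) z = y. The series below is synthetic, standing in for an EVI2 time series.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker smoother: solve (I + lam * D'D) z = y, where D is the
    d-th order difference matrix.  lam trades fidelity for smoothness."""
    n = len(y)
    D = np.diff(np.eye(n), d, axis=0)        # (n-d) x n difference operator
    z = np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(y, float))
    return z

t = np.linspace(0, 1, 50)
noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(2).standard_normal(50)
z = whittaker_smooth(noisy, lam=5.0)

# smoothing strictly reduces roughness (sum of squared 2nd differences)
rough = lambda v: (np.diff(v, 2) ** 2).sum()
assert rough(z) < rough(noisy)
```

For long series a banded solver would replace the dense one; the fidelity/smoothness trade-off controlled by λ is exactly the axis on which the paper compares the methods.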
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
International Nuclear Information System (INIS)
We estimate the environmental efficiency, reduction potential, and marginal abatement cost of carbon dioxide (CO2) emissions from coal-fired power plants in China using a novel plant-level dataset derived from the first and second waves of the National Economic Survey, implemented in 2004 and 2008, respectively. The results indicate that there are large opportunities for CO2 emissions reduction in China's coal-fired power plants. If all power plants had operated fully efficiently, China's CO2 emissions in 2004 and 2008 could have been reduced by 52% and 70%, respectively, accompanied by an expansion in electricity output; in other words, opportunities for a 'double dividend' exist. In 2004, the average marginal abatement cost of CO2 emissions for China's power plants was approximately 955 Yuan/ton, whereas in 2008 the cost increased to 1142 Yuan/ton. The empirical analyses show that subsidies from the government can reduce environmental inefficiency, but the subsidies significantly increase the shadow price of the power plants. Older and larger power plants have lower environmental efficiency and marginal CO2 abatement cost. The ratio of coal consumption negatively affects the environmental efficiency of power plants. -- Highlights: •A novel plant-level dataset derived from the National Economic Survey in China is used. •There are large opportunities for CO2 emissions reduction in China's coal-fired power plants. •Subsidies can reduce environmental inefficiency but increase the shadow price.
International Nuclear Information System (INIS)
Mathematical methods are being increasingly employed in the efficiency calibration of gamma based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for use of modeling passive gamma measurements. This is a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
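A single-pass threshold-and-count estimator of the kind discussed can be sketched for exponentially distributed noise power samples (the squared magnitude of complex Gaussian noise). The threshold fixes the limited dynamic range: counting the fraction q of samples below T and inverting the exponential CDF gives the noise power. The parameters below are illustrative, not HRMS system values.

```python
import numpy as np

def noise_power_threshold_count(power, thresh):
    """Single-pass threshold-and-count noise power estimate.  For
    exponentially distributed power samples, P(sample < T) equals
    1 - exp(-T / sigma2), so the below-threshold fraction q yields
    sigma2 = -T / log(1 - q).  Like an order statistic, the count is
    insensitive to strong outliers (signals) above the threshold."""
    q = np.count_nonzero(power < thresh) / len(power)
    return -thresh / np.log1p(-q)

rng = np.random.default_rng(3)
sigma2 = 2.0
samples = rng.exponential(sigma2, 100_000)   # simulated noise power bins
est = noise_power_threshold_count(samples, thresh=1.5)
assert abs(est - sigma2) / sigma2 < 0.05
```

Several such counters with staggered thresholds, run in parallel, extend the usable dynamic range, which is the hardware arrangement described above.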
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
Bauer, Frank
2012-01-01
We introduce a new method to efficiently approximate the number of infections resulting from a given initially-infected node in a network of susceptible individuals, based on counting the number of possible infection paths of various lengths to each other node in the network. We analytically study the properties of our method, in particular demonstrating its different forms for SIS and SIR disease spreading (e.g., under the SIR model our method counts self-avoiding walks). In comparison to existing methods for inferring the spreading efficiency of different nodes in the network (based on degree, k-shell decomposition analysis, and different centrality measures), our method directly considers the spreading process and, as such, is unique in providing estimates of actual numbers of infections. Crucially, in simulating infections on various real-world networks with the SIR model, we show that our walks-based method improves the inference of the effectiveness of nodes over a wide range of infection rates compared...
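The path-counting idea can be sketched in its simplest, SIS-style form: weight every walk of length up to L from the seed node by the per-hop transmission probability β and sum the weights (the SIR form of the paper would restrict this to self-avoiding walks). This is an illustrative sketch, not the paper's exact estimator.

```python
import numpy as np

def spreading_score(A, source, beta, max_len=3):
    """Approximate spreading efficiency of `source`: total weight of
    infection walks of length <= max_len, each hop weighted by the
    transmission probability beta.  Walks may revisit nodes (SIS-style)."""
    walk = np.eye(A.shape[0])[source]    # indicator vector of the seed
    total = 0.0
    for _ in range(max_len):
        walk = beta * (walk @ A)         # extend every walk by one hop
        total += walk.sum()
    return total

# a star graph: the hub (node 0) out-scores any leaf as an infection seed
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1.0
hub = spreading_score(star, 0, beta=0.1)
leaf = spreading_score(star, 1, beta=0.1)
assert hub > leaf
```

Because the score is built directly from the transmission dynamics, it ranks seeds by expected infections rather than by a purely structural proxy such as degree or k-shell index.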
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment, and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the Fisher information matrix determinant (D-optimality), expressed in conventional Bolza form. Since under such identification-experiment conditions only D-suboptimality can be achieved, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
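The D-optimality criterion above maximises the determinant of the Fisher information matrix, and D-efficiency compares a constrained (plant-friendly) design against a reference design. A hedged sketch for a hypothetical two-parameter FIR model y_t = a*u_t + b*u_{t-1} + e_t; the model structure and the two candidate inputs are illustrative, not the paper's plant:

```python
import numpy as np

def fisher_info(u, sigma2=1.0):
    """Fisher information matrix for y_t = a*u_t + b*u_{t-1} + e_t."""
    phi = np.column_stack([u[1:], u[:-1]])     # regressor vector at each step
    return phi.T @ phi / sigma2

def d_efficiency(M, M_ref):
    """D-efficiency of design M relative to reference M_ref, p parameters."""
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_ref)) ** (1.0 / p)

rng = np.random.default_rng(1)
u_prbs = np.sign(rng.standard_normal(200))     # highly exciting binary input
u_smooth = np.sin(0.01 * np.arange(200))       # plant-friendly, less informative
eff = d_efficiency(fisher_info(u_smooth), fisher_info(u_prbs))
```

The friendly slow sinusoid yields a D-efficiency well below 1 relative to the exciting binary signal, which is exactly the information-vs-friendliness trade-off the study quantifies.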
Nobuhiko Fuwa; Christopher Edmonds; Pabitra Banik
2005-01-01
We focus on the impact of failing to control for differences in land types defined along toposequence on estimates of farm technical efficiency for small-scale rice farms in eastern India. In contrast with the existing literature, we find that those farms may be considerably more technically efficient than they appear from more aggregated analysis without such control. Farms planted with modern rice varieties are technically efficient. Furthermore, farms planted with traditional rice varietie...
Madenjian, Charles P.; Rediske, Richard R.; O'Keefe, James P.; David, Solomon R.
2014-01-01
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination. PMID:25226430
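The net trophic transfer efficiency γ described above is, in essence, a mass balance: the PCB congener mass gained by the fish divided by the mass ingested with food. A sketch with hypothetical concentrations and weights (the study's actual calculation also accounts for the fish sacrificed at the start and tank-level bookkeeping):

```python
def net_transfer_efficiency(c_fish_end, w_end, c_fish_start, w_start,
                            c_prey, food_eaten):
    """gamma = PCB mass gained by the fish / PCB mass ingested with food.

    Concentrations in ng/g; fish weights and food consumed in g.
    """
    gained = c_fish_end * w_end - c_fish_start * w_start   # ng retained
    ingested = c_prey * food_eaten                          # ng consumed
    return gained / ingested

# Hypothetical tank: congener at 5 -> 12 ng/g as the fish grows 600 -> 900 g,
# eating 1500 g of prey containing 20 ng/g of the congener.
gamma = net_transfer_efficiency(12.0, 900.0, 5.0, 600.0, 20.0, 1500.0)
```

With these illustrative numbers the fish retains 26% of the ingested congener; in the experiment, γ is computed per congener and per tank, and the eight tanks give a standard error for the mean.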
Directory of Open Access Journals (Sweden)
Bazhenov Viktor Ivanovich
2015-09-01
Full Text Available The starting stage of tender procedures in Russia involving foreign suppliers makes it advisable to develop economical methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluation based on Present Value (PV) determination. This enables an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). The air-blower station of a wastewater treatment plant (WWTP) was selected for economic analysis as a significant energy consumer. The technical variants compared are blower types: (1) multistage without control, (2) multistage with VFD control, (3) single-stage with double vane control. The result of the LCC estimation shows the last variant to be the most attractive, or cost-effective, for investment, with savings of 17.2% (versus variant 1) and 21.0% (versus variant 2) under the adopted duty conditions and the evaluations of capital costs (Cic + Cin) together with the related annual expenditures (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which led to the adopted energy consumptions of 2158 kW·h (variant 1), 1743-2201 kW·h (variant 2) and 1058-1951 kW·h (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. The example given concerns energy-efficient solutions, but the approach also covers cases of resource savings, such as all types of fuel. The conclusion endorses using the LCC indicator jointly with the method of determining discounted cash flows, which satisfies the investor's need for an interest source in technical and economic comparisons.
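The LCC comparison above rests on discounting annual expenditures to present value: LCC = initial cost + Σ C_t / (1 + r)^t over the project lifetime. A minimal sketch assuming a constant annual cost and a constant real discount rate; the figures are illustrative, not the article's:

```python
def life_cycle_cost(capital, annual_cost, years, discount_rate):
    """LCC = initial (capital + installation) cost plus the present value
    of annual expenditures (energy, operation, maintenance)."""
    pv_annual = sum(annual_cost / (1.0 + discount_rate) ** t
                    for t in range(1, years + 1))
    return capital + pv_annual

# Hypothetical blower: 100,000 capital, 10,000/year for 25 years at a 5%
# real discount rate (the horizon and rate match the article's assumptions).
lcc = life_cycle_cost(100_000, 10_000, 25, 0.05)
```

Comparing variants then reduces to comparing their LCC values; the variant with the lowest LCC is the cost-effective choice even if its capital cost is higher.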
Scartazza, Andrea; Vaccari, Francesco Primo; Bertolini, Teresa; Di Tommasi, Paul; Lauteri, Marco; Miglietta, Franco; Brugnoli, Enrico
2014-10-01
Water-use efficiency (WUE), thought to be a relevant trait for productivity and adaptation to water-limited environments, was estimated for three different ecosystems on the Mediterranean island of Pianosa: Mediterranean macchia (SMM), transition (STR) and abandoned agricultural (SAA) ecosystems, representing a successional series. Three independent approaches were used to study WUE: eddy covariance measurements, C isotope composition of ecosystem respired CO2, and C isotope discrimination (Δ) of leaf material (dry matter and soluble sugars). Seasonal variations in C-water relations and energy fluxes, compared in SMM and in SAA, were primarily dependent on the specific composition of each plant community. WUE of gross primary productivity was higher in SMM than in SAA at the beginning of the dry season. Both structural and fast-turnover leaf material were, on average, more enriched in 13C in SMM than in SAA, indicating relatively higher stomatal control and WUE for the long-lived macchia species. This pattern corresponded to 13C-enriched respired CO2 in SMM compared to the other ecosystems. Conversely, most of the annual herbaceous SAA species (terophytes) showed a drought-escaping strategy, with relatively high stomatal conductance and low WUE. An ecosystem-integrated Δ value was weighted for each ecosystem on the abundance of different life forms, classified according to Raunkiaer's system. Agreement was found between ecosystem WUE calculated using eddy covariance and that estimated using integrated Δ approaches. Comparing the isotopic methods, Δ of leaf soluble sugars provided the most reliable proxy for short-term changes in photosynthetic discrimination and associated shifts in integrated canopy-level WUE along the successional series. PMID:25085444
Directory of Open Access Journals (Sweden)
Y. Tramblay
2011-01-01
Full Text Available A good knowledge of rainfall is essential for hydrological operational purposes such as flood forecasting. The objective of this paper was to analyze, on a relatively large sample of flood events, how rainfall-runoff modeling using an event-based model can be sensitive to the use of spatial rainfall compared to mean areal rainfall over the watershed. This comparison was based not only on the model's efficiency in reproducing the flood events but also on the estimation of the initial conditions by the model, using different rainfall inputs. The initial conditions of soil moisture are indeed a key factor for flood modeling in the Mediterranean region. In order to provide a soil moisture index that could be related to the initial condition of the model, the soil moisture output of the Safran-Isba-Modcou (SIM) model developed by Météo-France was used. This study was done in the Gardon catchment (545 km²) in southern France, using uniform or spatial rainfall data derived from rain gauges and radar for 16 flood events. The event-based model considered combines the SCS runoff production model and the Lag and Route routing model. Results show that spatial rainfall increases the efficiency of the model. The advantage of using spatial rainfall is marked for some of the largest flood events. In addition, the relationship between the model's initial condition and the external predictor of soil moisture provided by the SIM model is better when using spatial rainfall, in particular when using spatial radar data, with R² values increasing from 0.61 to 0.72.
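The SCS production model named above is commonly implemented as the Curve Number relation Q = (P - Ia)² / (P - Ia + S) with Ia = 0.2·S. The sketch below assumes this standard form; the paper's event-based model couples it with Lag and Route routing, which is not shown:

```python
def scs_runoff(p_mm, cn):
    """SCS Curve Number direct runoff depth (mm) for event rainfall p_mm.

    cn : curve number (0-100), higher means more runoff-prone.
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 100 mm storm on a catchment with CN = 80
q = scs_runoff(100.0, 80.0)
```

Feeding this production function with spatial rather than uniform rainfall changes P cell by cell, which is precisely where the sensitivity studied in the paper enters.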
International Nuclear Information System (INIS)
Two independent methods of estimating gross ecosystem production (GEP) were compared over a period of 2 years at monthly integrals for a mixed forest of conifers and deciduous hardwoods at Harvard Forest in central Massachusetts. Continuous eddy flux measurements of net ecosystem exchange (NEE) provided one estimate of GEP by taking day to night temperature differences into account to estimate autotrophic and heterotrophic respiration. GEP was also estimated with a quantum efficiency model based on measurements of maximum quantum efficiency (Qmax), seasonal variation in canopy phenology and chlorophyll content, incident PAR, and the constraints of freezing temperatures and vapour pressure deficits on stomatal conductance. Quantum efficiency model estimates of GEP and those derived from eddy flux measurements compared well at monthly integrals over two consecutive years (R² = 0.98). Remotely sensed data were acquired seasonally with an ultralight aircraft to provide a means of scaling the leaf area and leaf pigmentation changes that affected the light absorption of photosynthetically active radiation to larger areas. A linear correlation between chlorophyll concentrations in the upper canopy leaves of four hardwood species and their quantum efficiencies (R² = 0.99) suggested that seasonal changes in quantum efficiency for the entire canopy can be quantified with remotely sensed indices of chlorophyll. Analysis of video data collected from the ultralight aircraft indicated that the fraction of conifer cover varied from < 7% near the instrument tower to about 25% for a larger sized area. At 25% conifer cover, the quantum efficiency model predicted an increase in the estimate of annual GEP of < 5% because unfavourable environmental conditions limited conifer photosynthesis in much of the non-growing season when hardwoods lacked leaves.
Directory of Open Access Journals (Sweden)
Raymond K. DZIWORNU
2014-11-01
Full Text Available This paper applied the stochastic profit frontier model to estimate the economic efficiency of 199 small-scale commercial broiler producers in the Greater Accra Region of Ghana. Farm-level data were obtained from the producers through a multi-stage sampling technique. Results indicate that broiler producers are not fully economically efficient. The mean economic efficiency was 69 percent, implying that opportunities exist for broiler producers to increase their economic efficiency level through better use of available resources. Age of producer, extension contact, market age of broiler and credit access were found to significantly influence economic efficiency in broiler production. Policy measures directed at these factors to enhance the economic efficiency of broiler producers are recommended.
Ainong Li; Jinhu Bian; Guangbin Lei; Chengquan Huang
2012-01-01
Maximal light use efficiency (LUE) is an important ecological index of an essential vegetation attribute, and a key parameter of LUE-based models for estimating large-scale vegetation productivity by remote sensing technology. However, although maximal LUE is currently used in different models, its value remains a subject of extensive controversy. This paper takes the Zoige Plateau in China as a case area to develop a new approach for estimating the maximal LUEs for different vegetation. Based on an existing land cove...
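LUE-based productivity models of the kind referred to above typically take the multiplicative form GPP = PAR × FPAR × ε_max × environmental down-regulation scalars. A generic sketch; the function form, parameter names and values are illustrative assumptions, not the paper's specific formulation:

```python
def gpp_lue(par, fpar, lue_max, t_scalar, w_scalar):
    """LUE-based gross primary productivity.

    par      : incident photosynthetically active radiation (MJ m-2)
    fpar     : fraction of PAR absorbed by the canopy (0-1)
    lue_max  : maximal light use efficiency (g C MJ-1), the contested parameter
    t_scalar : temperature down-regulation scalar (0-1)
    w_scalar : water-stress down-regulation scalar (0-1)
    """
    return par * fpar * lue_max * t_scalar * w_scalar

# Illustrative daily values for a grassland pixel
g = gpp_lue(par=10.0, fpar=0.6, lue_max=0.55, t_scalar=0.9, w_scalar=0.8)
```

Because GPP scales linearly with lue_max, any bias in the assumed maximal LUE propagates directly into the productivity estimate, which is why per-vegetation calibration (the paper's aim) matters.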
Institute of Scientific and Technical Information of China (English)
KUK Anthony
2009-01-01
The survival analysis literature has always lagged behind the categorical data literature in developing methods to analyze clustered or multivariate data. While estimators based on working correlation matrices, optimal weighting, composite likelihood and various variants have been proposed in the categorical data literature, the working independence estimator is still very much the prevalent estimator in multivariate survival data analysis.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
Schnitzer, Mireille E; Lok, Judith J; Bosch, Ronald J
2016-01-01
In longitudinal data arising from observational or experimental studies, dependent subject drop-out is a common occurrence. If the goal is estimation of the parameters of a marginal complete-data model for the outcome, biased inference will result from fitting the model of interest with only uncensored subjects. For example, investigators are interested in estimating a prognostic model for clinical events in HIV-positive patients, under the counterfactual scenario in which everyone remained on ART (when in reality, only a subset had). Inverse probability of censoring weighting (IPCW) is a popular method that relies on correct estimation of the probability of censoring to produce consistent estimation, but is an inefficient estimator in its standard form. We introduce sequentially augmented regression (SAR), an adaptation of the Bang and Robins (2005. Doubly robust estimation in missing data and causal inference models. Biometrics 61, 962-972.) method to estimate a complete-data prediction model, adjusting for longitudinal missing at random censoring. In addition, we propose a closely related non-parametric approach using targeted maximum likelihood estimation (TMLE; van der Laan and Rubin, 2006. Targeted maximum likelihood learning. The International Journal of Biostatistics 2 (1), Article 11). We compare IPCW, SAR, and TMLE (implemented parametrically and with Super Learner) through simulation and the above-mentioned case study. PMID:26224070
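IPCW, the baseline method discussed above, reweights uncensored subjects by the inverse of their probability of remaining uncensored. A minimal single-time-point sketch with synthetic data in which drop-out depends on a covariate (missing at random); the data-generating numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.binomial(1, 0.5, n)               # baseline covariate
y = x + rng.normal(0.0, 0.1, n)           # outcome depends on x; E[Y] = 0.5
p_obs = np.where(x == 1, 0.9, 0.3)        # drop-out probability depends on x
observed = rng.random(n) < p_obs          # who remains uncensored

# Naive complete-case mean: biased, because x = 1 subjects are over-represented
naive = y[observed].mean()

# IPCW: weight each observed subject by 1 / P(uncensored | x), then average
w = 1.0 / p_obs[observed]
ipcw = np.sum(w * y[observed]) / np.sum(w)
```

The complete-case mean lands near 0.75 while the IPCW estimate recovers the true 0.5; SAR and TMLE in the paper target the same estimand with better efficiency and robustness.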
Bengel, F M; Permanetter, B; Ungerer, M; Nekolla, S; Schwaiger, M
2000-03-01
The clearance kinetics of carbon-11 acetate, assessed by positron emission tomography (PET), can be combined with measurements of ventricular function for non-invasive estimation of myocardial oxygen consumption and efficiency. In the present study, this approach was applied to gain further insights into alterations in the failing heart by comparison with results obtained in normals. We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with 11C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for 11C-acetate derived from monoexponential fitting. In DCM patients, left ventricular ejection fraction was 19%+/-10% and end-diastolic volume was 92+/-28 ml/m2 (vs 64%+/-7% and 55+/-8 ml/m2 in normals, P<0.001). Myocardial oxidative metabolism, reflected by k(mono), was significantly lower compared with that in normals (0.040+/-0.011/min vs 0.060+/-0.015/min; P<0.003). The SWI (1674+/-761 vs 4736+/-895 mmHg x ml/m2; P<0.001) and the WMI as an estimate of efficiency (2.98+/-1.30 vs 6.20+/-2.25 x 10^6 mmHg x ml/m2; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
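The two indices defined above are simple arithmetic; a sketch with illustrative input values in the range reported for the normals (the specific numbers are not from the study):

```python
def stroke_work_index(sbp, stroke_volume, bsa):
    """SWI = systolic blood pressure x stroke volume / body surface area.

    sbp in mmHg, stroke_volume in ml, bsa in m2 -> SWI in mmHg·ml/m2.
    """
    return sbp * stroke_volume / bsa

def work_metabolic_index(swi, heart_rate, k_mono):
    """WMI = SWI x heart rate / k(mono), where k(mono) is the 11C-acetate
    washout constant (per minute) from monoexponential fitting."""
    return swi * heart_rate / k_mono

# Illustrative healthy subject: SBP 120 mmHg, SV 70 ml, BSA 1.8 m2,
# HR 70/min, k(mono) 0.060/min (close to the normals' mean washout)
swi = stroke_work_index(sbp=120.0, stroke_volume=70.0, bsa=1.8)
wmi = work_metabolic_index(swi, heart_rate=70.0, k_mono=0.060)
```

Because k(mono) tracks oxidative metabolism, WMI rises when mechanical work increases for the same oxygen use, which is why it serves as an efficiency estimate.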
International Nuclear Information System (INIS)
We present the neutron-detection efficiencies of a Gd-coated single-gap resistive plate chamber (RPC) and a LiF-coated double-gap RPC. The experiments were performed by using indirect neutrons provided by the MC50 cyclotron at the Korea Institute of Radiological and Medical Science. Both RPCs show a decrease in the efficiency with increasing beam current, especially the Gd-coated RPC at the highest beam current (50 nA) that we received. Such a decrease in the efficiency could be understood in terms of a decrease in the effective electric field in the gas gap in a high-particle-rate environment. The operational plateaus start at about 8 kV for the Gd-coated RPC and at about 6.7 kV for the LiF-coated RPC. The neutron-detection efficiencies of the Gd-coated and the LiF-coated RPCs are about 2.5 and 1.8 %, respectively, at the operational high-voltage-plateau region. These results are completely consistent with the previous efficiencies obtained by using an intense 252Cf source.
Hur, Jin; Lee, Tae-Hwan; Lee, Bo-Mi
2011-12-01
The spectroscopic characteristics and relative distribution of refractory dissolved organic matter (R-DOM) in sewage have been investigated using the influent and the effluent samples collected from 15 large-scale biological wastewater treatment plants (WWTPs). Correlation between the characteristics of the influent and the final removal efficiency was also examined. Enhancement of specific ultraviolet absorbance (SUVA) and a higher R-DOM distribution ratio were observed for the effluent DOM compared with the influent DOM. However, the use of conventional rather than advanced biological treatments did not appear to affect either the effluent DOM or the removal efficiency, and there was no statistically significant difference between the two. No consistent trend was observed in the changes in the synchronous fluorescence spectra of the DOM after biological treatment. Irrespective of the treatment option, the removal efficiency of DOM was greater when the influent DOM had a lower SUVA, reduced DOC-normalized humic substance-like fluorescence, and a lower R-DOM distribution. These results suggest that selected characteristics of the influent may provide an indication of DOM removal efficiency in WWTPs. For R-DOM removal efficiency, however, similar characteristics of the influent did not show a negative relationship, and even exhibited a slight positive correlation, suggesting that the presence of refractory organic carbon structures in the influent sewage may stimulate microbial activity and inhibit the production of R-DOM during biological treatment. PMID:22439572
Directory of Open Access Journals (Sweden)
Dawang Naanpoes Charles
2011-11-01
Full Text Available This study examines the Net Farm Income (NFI), profitability index and technical efficiency of artisanal fishing in five natural lakes in Plateau State, central Nigeria, with a view to examining the level of exploitation of captured inland fisheries as a renewable resource in the country. Data were collected using a questionnaire from 110 sample fishermen selected from Polmakat, Shimankar, Deben, Janta and Pandam lakes using a multi-stage sampling technique, and analysed using descriptive statistics, farm budgeting techniques (net farm income) and a stochastic frontier production function model. The study reveals a net farm income of ₦48,734.57 and a profitability index of ₦7.67. A mean technical efficiency of 83% was obtained, indicating that the sample fishermen were relatively efficient in allocating their limited resources. The result of the analysis indicates that 72% of the variation in the fishermen's output was a result of technical inefficiency effects in the fishery, leaving a potential of about 17% for improvement in the technical efficiency level. Some observable variables relating to socioeconomic characteristics, such as extension contact, experience and educational status, significantly explain variation in technical efficiency. Transformation towards effective and sustainable fisheries exploitation will require the education of fishermen, extension education and a redefinition of property rights.
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela;
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given r...
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
H. Macharia (Harrison); P.P. Goos (Peter)
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the
Gonthier, Gerard J.
2007-01-01
A graphical method that uses continuous water-level and barometric-pressure data was developed to estimate barometric efficiency. A plot of nearly continuous water level (on the y-axis), as a function of nearly continuous barometric pressure (on the x-axis), will plot as a line curved into a series of connected elliptical loops. Each loop represents a barometric-pressure fluctuation. The negative of the slope of the major axis of an elliptical loop will be the ratio of water-level change to barometric-pressure change, which is the sum of the barometric efficiency plus the error. The negative of the slope of the preferred orientation of many elliptical loops is an estimate of the barometric efficiency. The slope of the preferred orientation of many elliptical loops is approximately the median of the slopes of the major axes of the elliptical loops. If water-level change that is not caused by barometric-pressure change does not correlate with barometric-pressure change, the probability that the error will be greater than zero will be the same as the probability that it will be less than zero. As a result, the negative of the median of the slopes for many loops will be close to the barometric efficiency. The graphical method provided a rapid assessment of whether a well was affected by barometric-pressure change and also provided a rapid estimate of barometric efficiency. The graphical method was used to assess which wells at Air Force Plant 6, Marietta, Georgia, had water levels affected by barometric-pressure changes during a 2003 constant-discharge aquifer test. The graphical method was also used to estimate barometric efficiency. Barometric-efficiency estimates from the graphical method were compared to those of four other methods: average of ratios, median of ratios, Clark, and slope. The two methods (the graphical and median-of-ratios methods) that used the median values of water-level change divided by barometric-pressure change appeared to be most resistant to
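The median-of-ratios idea compared in the study can be sketched directly: take interval changes in water level and barometric pressure and report the median of -(ΔWL/ΔBP). In the synthetic record below the well's water level falls as barometric pressure rises, so the true barometric efficiency is 0.4 by construction; the noise term stands in for water-level change unrelated to barometric pressure:

```python
import numpy as np

def barometric_efficiency(water_level, baro_pressure):
    """Median-of-ratios estimate: BE = median of -(dWL/dBP) over intervals.

    The median resists the heavy-tailed ratios that occur when the
    barometric-pressure change over an interval is small.
    """
    dwl = np.diff(water_level)
    dbp = np.diff(baro_pressure)
    ok = dbp != 0                      # skip intervals with no pressure change
    return np.median(-dwl[ok] / dbp[ok])

# Synthetic record: BP random walk; WL responds with BE = 0.4 plus noise
rng = np.random.default_rng(2)
bp = np.cumsum(rng.normal(0.0, 1.0, 500))
wl = -0.4 * bp + rng.normal(0.0, 0.05, 500)
be = barometric_efficiency(wl, bp)
```

This mirrors why the graphical and median-of-ratios methods were found most resistant to non-barometric water-level change: the median ignores the symmetric error contributed by unrelated fluctuations.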
International Nuclear Information System (INIS)
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with 11C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for 11C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m2 (vs 64%±7% and 55±8 ml/m2 in normals, P<0.001). The SWI (1674±761 vs 4736±895 mmHg x ml/m2; P<0.001) and the WMI as an estimate of efficiency (2.98±1.30 vs 6.20±2.25 x 10^6 mmHg x ml/m2; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
Latypov, A. F.
2008-12-01
Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. The model of a gliding flight above cold air in an infinite isobaric thermal wake was used. The fuel consumption rates were compared along the optimal trajectory. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of the aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of combustion products. Dependencies were obtained for the increase of the range coefficient of cruise flight for different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and the computational results for the reduction of fuel consumption along the boost trajectory for a given value of the energy supplied in front of the aircraft.
Cardot, Hervé; Zitt, Pierre-André
2011-01-01
With the progress of measurement apparatus and the development of automatic sensors, it is no longer unusual to obtain thousands of samples of observations taking values in high-dimensional spaces such as functional spaces. In such large samples of high-dimensional data, outlying curves may not be uncommon, and even a few individuals may corrupt simple statistical indicators such as the mean trajectory. We focus here on the estimation of the geometric median, which is a direct generalization of the real median and has nice robustness properties. Since the geometric median is defined as the minimizer of a simple convex functional that is differentiable everywhere when the distribution has no atoms, it is possible to estimate it with online gradient algorithms. Such algorithms are very fast and can deal with large samples; furthermore, they can be simply updated when the data arrive sequentially. We state the almost sure consistency and the L2 rates of convergence of the stochastic gradient estimator as well as the ...
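An averaged stochastic gradient scheme for the geometric median, in the spirit of the abstract above, updates the iterate along the unit vector toward each new observation and then averages the iterates. A sketch with an assumed step-size sequence; the paper's exact step sizes and conditions are not reproduced here:

```python
import numpy as np

def geometric_median_sgd(samples, step=1.0):
    """Online (averaged) stochastic gradient estimate of the geometric median.

    Update: m <- m + gamma_n * (X_n - m) / ||X_n - m||, since the gradient
    of E||X - m|| is the negative average unit vector toward the data;
    the returned estimate is the running average of the iterates.
    """
    m = samples[0].astype(float)
    avg = m.copy()
    for n, x in enumerate(samples[1:], start=2):
        d = np.linalg.norm(x - m)
        if d > 0:                                  # skip exact ties with m
            m = m + (step / n**0.75) * (x - m) / d
        avg += (m - avg) / n                       # running average of iterates
    return avg

# Spherical Gaussian: geometric median coincides with the mean [2, -1]
rng = np.random.default_rng(3)
data = rng.normal([2.0, -1.0], 0.5, size=(5000, 2))
med = geometric_median_sgd(data)
```

Each update costs O(d) and touches one observation, which is what makes the estimator practical for the large sequential samples the abstract describes.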
SUFIAN, Fadzlan; Abdul Majid, Muhamed Zulkhibri; Haron, Razali
2007-01-01
This paper provides an event study window analysis of pre- and post-merger bank performance in Singapore by employing Financial Ratio Analysis and the Data Envelopment Analysis (DEA) approach. The findings from financial ratio analysis suggest that the merger has not resulted in higher profitability of Singaporean banking groups post-merger, which could be attributed to the higher costs incurred. However, the merger has resulted in higher mean overall efficiency of Singaporean banking groups. In mo...
Nicolas Rispail; Diego Rubiales
2015-01-01
Fusarium wilts are widespread diseases affecting most agricultural crops. In the absence of efficient alternatives, sowing resistant cultivars is the preferred approach to control this disease. However, current resistance sources are often overcome by new pathogenic races, forcing breeders to continuously search for novel resistance sources. Selection of resistant accessions, mainly based on the evaluation of symptoms at timely intervals, is highly time-consuming. Thus, we tested the potential of ...
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be to evaluate existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst showed that the power consumption for partially closed and completely closed regenerable life support systems was estimated as 3.5 kW/individual and 10-12 kW/individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first law efficiency of a system is usually greater than 50 percent. In the literature, the second law efficiency is usually about 10 percent. The estimation of the second law efficiency of a system indicates the percentage of energy degraded as irreversibilities within the process. This estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the
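The "lost work" evaluation described above follows from the Gouy-Stodola relation, W_lost = T0 * S_gen, and the second-law efficiency is the ratio of the minimum (reversible) work to the actual work input. A sketch with illustrative numbers chosen to match the roughly 10 percent second-law efficiency quoted in the text:

```python
def lost_work(t0_kelvin, entropy_generated):
    """Gouy-Stodola theorem: work destroyed by irreversibility,
    W_lost = T0 * S_gen (dead-state temperature times entropy generated)."""
    return t0_kelvin * entropy_generated

def second_law_efficiency(w_min, w_actual):
    """Fraction of the actual work input that thermodynamics strictly
    requires; the remainder is degraded as irreversibility."""
    return w_min / w_actual

# Hypothetical subsystem: 12 J/K of entropy generated at T0 = 298.15 K,
# and a unit that needs 350 J of reversible work but consumes 3500 J.
w_lost = lost_work(t0_kelvin=298.15, entropy_generated=12.0)
eta_ii = second_law_efficiency(w_min=350.0, w_actual=3500.0)
```

Ranking ECLSS units by W_lost (equivalently, by S_gen) identifies the candidates for engineering improvement, which is exactly the use the paper proposes.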
Energy Technology Data Exchange (ETDEWEB)
Bengel, F.M.; Nekolla, S.; Schwaiger, M. [Technische Univ. Muenchen (Germany). Nuklearmedizinische Klinik und Poliklinik; Permanetter, B. [Abteilung Innere Medizin, Kreiskrankenhaus Wasserburg/Inn (Germany); Ungerer, M. [Technische Univ. Muenchen (Germany). 1. Medizinische Klinik und Poliklinik
2000-03-01
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with ¹¹C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure × stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI × heart rate/k(mono), where k(mono) is the washout constant for ¹¹C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m² (vs 64%±7% and 55±8 ml/m² in normals, P<0.001). Myocardial oxidative metabolism, reflected by k(mono), was significantly lower than in normals (0.040±0.011/min vs 0.060±0.015/min; P<0.003). The SWI (1674±761 vs 4736±895 mmHg × ml/m²; P<0.001) and the WMI as an estimate of efficiency (2.98±1.30 vs 6.20±2.25 × 10⁶ mmHg × ml/m²; P<0.001) were also lower in DCM patients. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
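The two indices defined above can be computed directly from the stated formulas. The input values below are illustrative, not patient data from the study:

```python
def stroke_work_index(systolic_bp_mmhg, stroke_volume_ml, bsa_m2):
    # SWI = systolic blood pressure x stroke volume / body surface area
    return systolic_bp_mmhg * stroke_volume_ml / bsa_m2

def work_metabolic_index(swi, heart_rate_bpm, k_mono_per_min):
    # WMI = SWI x heart rate / k(mono), where k(mono) is the
    # mono-exponentially fitted 11C-acetate washout constant
    return swi * heart_rate_bpm / k_mono_per_min

# Illustrative values: SBP 120 mmHg, SV 70 ml, BSA 1.8 m2,
# HR 70/min, k(mono) 0.060/min (the normals' mean from the abstract)
swi = stroke_work_index(120.0, 70.0, 1.8)
wmi = work_metabolic_index(swi, 70.0, 0.060)
```

A lower k(mono) (slower oxidative metabolism) raises the WMI for the same external work, which is why the index behaves as an efficiency estimate.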
Directory of Open Access Journals (Sweden)
О. М. Рева
1999-09-01
Full Text Available The possibility of applying cybernetic methods of information chains to analyze the efficiency of the structural organization of a dispatcher team is substantiated. To investigate it, we composed and solved a system of linear equations of the 10th order (direct-current information chains are used). It is proved that the general scheme must be decomposed into a group of operative planning, on the one hand, and a group of management, on the other. Requirements for the group members are worked out
Li, Guiying; Liu, Xiaolu; An, Taicheng; Wong, Po Keung; Zhao, Huijun
2016-05-15
A new method to estimate the photocatalytic (PC) and photoelectrocatalytic (PEC) mineralization efficiencies of large-molecule biological compounds with unknown chemical formula in water was first developed and experimentally validated. The method employed chemical oxidation under standard dichromate chemical oxygen demand (COD) conditions to obtain QCOD values of model compounds with unknown chemical formula. The measured QCOD values were used as the reference for calculating the mineralization efficiencies (in %), by assuming that the obtained QCOD values measure the theoretical charge required for the complete mineralization of the organic pollutants. Total organic carbon (TOC) was also employed as a reference to confirm the mineralization capacity of dichromate chemical oxidation. The developed method was applied to determine the degradation extent of model compounds, such as bovine serum albumin (BSA), lecithin and bacterial DNA, by PC and PEC. Incomplete PC mineralization of all large-molecule biological compounds was observed, especially for BSA. However, introducing an electrochemical technique into the PC oxidation process profoundly improved the mineralization efficiencies of the model compounds. The PEC mineralization efficiency of bacterial DNA was the highest, while that of lecithin was the lowest. Overall, the PEC degradation method was found to be much more effective than the PC method for all large-molecule biological compounds investigated, with PEC/PC mineralization ratios following the order BSA > lecithin > DNA. PMID:26994335
Estimation de l’Efficience Technique dans le Secteur de l’Enseignement Supérieur en Tunisie
Boujelbène, Younes; Maalej, Ali; Khayati, Anis
2013-01-01
This paper addresses the measurement of technical efficiency in the higher education sector in Tunisia. To do so, we use the frontier methodology in both its parametric and non-parametric versions. We thus estimate a production frontier for 108 higher education institutions, all administered by the same supervisory ministry, for the academic years 2004/05 and 2005/06. We showed that in these institutions, the teaching-staff ratio and the budget...
Frolov, V M; Tereshkin, V A; Sotskaia, Ia A; Peresadin, N A; Kruglova, O V
2013-03-01
The efficiency of combined application of reamberin and cycloferon in patients with a severe form of acute tonsillitis was investigated. It was found that including cycloferon and reamberin in the treatment of patients with this pathology helps normalize the general state and well-being of the patients, eliminates both the common toxic syndrome and the local inflammatory manifestations in the pharynx, and normalizes the biochemical and immunological indexes studied. Application of cycloferon and reamberin lowered the levels of "average molecules" and malondialdehyde to the norm, which testifies to the elimination of the endogenous "metabolic" intoxication syndrome, and also normalized the indexes of phagocytic activity of monocytes, which characterizes the normalizing effect of the indicated preparations on the macrophage phagocyte system. PMID:24605622
DeVries, R. J.; Hann, D. A.; Schramm, H.L., Jr.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated that water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on periods when the water temperature is below 12°C and on deeper water to maximize capture probability; such water temperatures commonly occur during November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any effort to use the catch rate to infer population trends will require accounting for temperature and depth in standardized sampling efforts or adjustment of estimates.
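Interpreting the fitted linear predictor requires passing it through the logistic link. A minimal sketch, assuming Y is the logit of capture probability with temperature in °C and depth in metres (units are assumed here, since the abstract does not restate them):

```python
import math

def capture_logit(temp_c, depth_m):
    # Fitted linear predictor reported in the abstract:
    # Y = -1.75 - 0.06*T + 0.10*D
    return -1.75 - 0.06 * temp_c + 0.10 * depth_m

def capture_probability(temp_c, depth_m):
    # Logistic link: p = 1 / (1 + exp(-Y))
    return 1.0 / (1.0 + math.exp(-capture_logit(temp_c, depth_m)))

# Cold, deep water (10 C, 20 m) vs warm, shallow water (25 C, 5 m)
p_cold_deep = capture_probability(10.0, 20.0)
p_warm_shallow = capture_probability(25.0, 5.0)
```

The sign pattern of the coefficients (negative on temperature, positive on depth) reproduces the sampling recommendation: cold, deep water maximizes capture probability.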
Chang, Yin-Jung; Shih, Ko-Han
2016-05-01
Internal photoemission (IPE) across an n-type Schottky junction due to standard AM1.5G solar illumination is quantified with practical considerations for Cu, Ag, and Al under direct and fully nondirect transitions, all in the context of the constant matrix element approximation. Under direct transitions, photoemitted electrons from d bands dominate the photocurrent and exhibit a strong dependence on the barrier energy ΦB but are less sensitive to changes in the metal thickness. The photocurrent is shown to be almost entirely contributed by s-state electrons in the fully nondirect approximation, which yields nearly identical results to the direct transition for metals having a free-electron-like band structure. Compared with noble metals, Al-based IPE has the highest quantum yield, up to about 5.4% at ΦB = 0.5 eV, and a maximum power conversion efficiency of approximately 0.31%, due mainly to its relatively uniform and wide Pexc energy spectral width. Metals (e.g., Ag) with a larger interband absorption edge are shown to outperform those with shallower d-bands (e.g., Cu and Au).
Zou, C X; Lively, F O; Wylie, A R G; Yan, T
2016-04-01
Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization from grass silage diets. Cows were housed in cubicle accommodation for 17 days, and then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaptation, followed by immediate transfer to indirect, open-circuit respiration calorimeters for 3 days, with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P>0.05) between the genotypes for energy intakes, energy outputs or energy use efficiency, or for methane emission rates (methane emissions per unit of dry matter intake or energy intake), or for N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r²=0.52; P<0.001) indicated values for net energy requirements for maintenance of 0.386, 0.392 and 0.375 MJ/kg^0.75 for LF+ST, LF and ST respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r²=0.59; P<0.001). There were positive linear relationships between N intake and N outputs in manure, and manure N accounted for 0.923 of the N intake. The present results provide approaches to predict maintenance energy requirement, methane emission and manure N output for suckler cows, and further information is required to evaluate their application in a wide range of suckler production systems. PMID:26593693
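The pooled prediction relationships reported above can be applied directly. The liveweight and intake values below are illustrative assumptions, not animals from the study:

```python
def maintenance_energy_mj(liveweight_kg, ne_m_per_kg075=0.386):
    # Net energy requirement for maintenance, scaled by metabolic
    # liveweight (kg^0.75); 0.386 MJ/kg^0.75 is the pooled LF+ST value
    return ne_m_per_kg075 * liveweight_kg ** 0.75

def methane_energy_mj(gross_energy_intake_mj):
    # Methane energy output was 0.066 of gross energy intake
    return 0.066 * gross_energy_intake_mj

def manure_n_kg(n_intake_kg):
    # Manure N accounted for 0.923 of N intake
    return 0.923 * n_intake_kg

# Illustrative 600-kg suckler cow with 150 MJ/day gross energy intake
ne_m = maintenance_energy_mj(600.0)
ch4_e = methane_energy_mj(150.0)
```

As the abstract notes, these coefficients were derived from grass-silage diets and would need evaluation before use across other suckler production systems.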
Postigo, Cristina; López de Alda, María José; Barceló, Damià
2010-01-01
Drugs of abuse and their metabolites have been recently recognized as environmental emerging organic contaminants. Assessment of their concentration in different environmental compartments is essential to evaluate their potential ecotoxicological effects. It also constitutes an indirect tool to estimate drug abuse by the population at the community level. The present work reports for the first time the occurrence of drugs of abuse and metabolite residues along the Ebro River basin (NE Spain) and also evaluates the contribution of sewage treatment plant (STP) effluents to the presence of these chemicals in natural surface waters. Concentrations measured in influent sewage waters were used to back-calculate drug usage at the community level in the main urban areas of the investigated river basin. The most ubiquitous and abundant compounds in the studied aqueous matrices were cocaine, benzoylecgonine, ephedrine and ecstasy. Lysergic compounds, heroin, its metabolite 6-monoacetyl morphine, and Δ9-tetrahydrocannabinol were the substances least frequently detected. Overall, total levels of the studied illicit drugs and metabolites observed in surface water (in the low ng/L range) were one and two orders of magnitude lower than those determined in effluent (in the ng/L range) and influent sewage water (µg/L range), respectively. The investigated STPs showed overall removal efficiencies between 45 and 95%. Some compounds, such as cocaine and amphetamine, were very efficiently eliminated (>90%), whereas others, such as ecstasy, methamphetamine, nor-LSD, and THC-COOH, were occasionally not eliminated at all. Drug consumption estimates pointed to cocaine as the most abused drug, followed by cannabis, amphetamine, heroin, ecstasy and methamphetamine, which slightly differs from national official estimates (cannabis, followed by cocaine, ecstasy, amphetamine and heroin). Extrapolation of the consumption data obtained for the studied area to Spain points out a total
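The back-calculation from influent concentrations to community drug use follows the standard wastewater-based mass balance. All numbers below (concentration, flow, correction factor, population) are hypothetical illustrations, not the Ebro-basin data:

```python
def drug_load_g_per_day(conc_ug_per_l, flow_m3_per_day):
    # Daily mass load entering the STP: concentration x influent flow
    # (1 ug/L over 1 m3 of water is 1 mg, hence the /1000 to get grams)
    return conc_ug_per_l * flow_m3_per_day / 1000.0

def consumption_mg_day_per_1000(load_g_day, correction_factor, population):
    # Back-calculation: metabolite load scaled by a correction factor
    # (parent/metabolite molar ratio divided by the excretion fraction),
    # normalized per 1000 inhabitants
    return load_g_day * correction_factor * 1000.0 * 1000.0 / population

# Hypothetical example: benzoylecgonine at 1.2 ug/L in influent,
# 40,000 m3/day flow, correction factor ~2.33 (cocaine), 150,000 people
load = drug_load_g_per_day(1.2, 40000.0)
use = consumption_mg_day_per_1000(load, 2.33, 150000)
```

Estimates of this kind are sensitive to the assumed excretion fractions and to population counts, which is one reason they can diverge from official survey-based figures.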
Ionkin, I. L.; Ragutkin, A. V.; Luning, B.; Zaichenko, M. N.
2016-06-01
For enhancement of natural gas utilization efficiency in boilers, condensation utilizers of low-potential heat, built around a contact heat exchanger, can be applied. A schematic of the contact heat exchanger with a humidifier for preheating and humidifying the air supplied to the boiler for combustion is given. The additional low-potential heat in this scheme is used to heat the return water supplied from the heating system. Preheating and humidifying the combustion air make it possible to use the condensation utilizer to heat a heat-transfer agent to a temperature exceeding the dew-point temperature of the water vapor contained in the combustion products. The decision to mount the condensation heat utilizer on the boiler was taken based on a preliminary estimate of the additional heat obtained. The operating efficiency of the condensation heat utilizer is determined by its structure and by the operating conditions of the boiler and the heating system. Software was developed for the thermal design of the condensation heat utilizer equipped with the humidifier. Computational investigations of its operation were carried out as a function of various operating parameters of the boiler and the heating system (temperature of the return water and the flue gases, excess air, air temperature at the inlet and outlet of the condensation heat utilizer, heating and humidifying of air in the humidifier, and portion of circulating water). The heat recuperation efficiency is estimated for various operating conditions of the boiler and the condensation heat utilizer. Recommendations on the most effective application of the condensation heat utilizer are given.
Jones, John W.
2015-01-01
The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments
Directory of Open Access Journals (Sweden)
John W. Jones
2015-09-01
Full Text Available The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral
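The agreement scoring and the omission/commission error rates mentioned above can be sketched as a 2x2 comparison between in situ and satellite-classified wet/dry states. This is a generic illustration of those metrics, not the DSWE evaluation code:

```python
def agreement_scores(reference, predicted):
    # reference: in situ (gage-derived) inundation state, 1 = wet, 0 = dry
    # predicted: satellite-classified state for the same locations/dates
    pairs = list(zip(reference, predicted))
    tp = sum(1 for r, p in pairs if r == 1 and p == 1)
    tn = sum(1 for r, p in pairs if r == 0 and p == 0)
    fn = sum(1 for r, p in pairs if r == 1 and p == 0)  # missed wet
    fp = sum(1 for r, p in pairs if r == 0 and p == 1)  # false wet
    n = len(pairs)
    return {
        "overall_agreement": (tp + tn) / n,
        "omission_error": fn / (tp + fn) if tp + fn else 0.0,
        "commission_error": fp / (tp + fp) if tp + fp else 0.0,
    }

# Toy example: 8 location/date pairs
ref = [1, 1, 1, 0, 0, 0, 1, 0]
pred = [1, 0, 1, 0, 1, 0, 1, 0]
scores = agreement_scores(ref, pred)
```

Computing these scores per vegetation class or per gage site, as the abstract describes, is just a matter of grouping the pairs before scoring.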
Directory of Open Access Journals (Sweden)
Constantin E Uhlig
Full Text Available AIMS: To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts. METHODS: Invitations to participate in a survey were distributed via e-mail to employees of two university hospitals (E1 and E2) and to members of a medical association (E3), as a link placed in a special text on the municipal homepage regularly read by the administrative employees of two cities (H1 and H2), and paper-based to workers at an automobile enterprise (P1) and to college (P2) and senior (P3) students. The main parameters analyzed included the numbers of invited and actual participants, and the time and cost to complete the survey. Statistical analysis was descriptive, except for the Kruskal-Wallis H-test, which was used to compare the three recruitment methods. Cost efficiencies were compared and extrapolated to different-sized cohorts. RESULTS: The ratios of completely answered questionnaires to distributed questionnaires were between 81.5% (E1) and 97.4% (P2). Between 6.4% (P1) and 57.0% (P2) of the invited participants completely answered the questionnaires. The costs per completely answered questionnaire were $0.57-$1.41 (E1-3), $1.70 and $0.80 for H1 and H2, respectively, and $3.36-$4.21 (P1-3). Based on our results, electronic surveys with 10, 20, 30, or 42 questions would be estimated to be most cost (and time) efficient if more than 101.6-225.9 (128.2-391.7), 139.8-229.2 (93.8-193.6), 165.8-230.6 (68.7-115.7), or 188.2-231.5 (44.4-72.7) participants were required, respectively. CONCLUSIONS: The study efficiency depended on the technical modalities of the survey methods and engagement of the participants. Depending on our study design, our results suggest that in similar projects that will certainly have more than two to three hundred required participants, the most efficient way of conducting a questionnaire-based survey is likely via the Internet with a digital questionnaire
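The break-even thresholds in the results follow from a linear cost model: electronic surveys carry a larger setup cost but a smaller per-response cost. A minimal sketch; the setup and per-response figures below are hypothetical, not the study's measured costs:

```python
def total_cost(fixed_cost, cost_per_response, n_responses):
    # Linear cost model: one-off setup cost plus a per-response cost
    return fixed_cost + cost_per_response * n_responses

def break_even(fixed_e, var_e, fixed_p, var_p):
    # Number of responses at which the electronic survey becomes cheaper
    # than the paper-based one (assumes var_p > var_e and fixed_e > fixed_p)
    return (fixed_e - fixed_p) / (var_p - var_e)

# Hypothetical costs: electronic $150 setup + $0.80/response;
# paper $20 setup + $3.50/response
n_star = break_even(150.0, 0.80, 20.0, 3.50)
```

Above `n_star` responses the electronic format wins, which mirrors the abstract's conclusion that projects needing more than a few hundred participants are best run online.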
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have produced fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
International Nuclear Information System (INIS)
Problems of software management for the automation control system of an electron linac, whose main purpose is to ensure data acquisition, processing and representation for all levels of accelerating-complex control, are considered. The software comprises 14 computer codes with a total volume of about 13 thousand commands and is oriented toward the following tasks: accelerator operation control; diagnostics, failure forecasting and investigation of the accelerator systems; forecasting of the operation of the accelerator as a whole; and repair-work planning. The system efficiency is estimated on the basis of data received during several years of test and industrial operation. Thus, over the operation cycles during 1975-1977, when the first stage of the information-measurement system reached its design capacity, the average number of cycles between failures increased by 30%. About half of the mean cycles between failures may be attributed to the direct or indirect effects of the information-measurement system
Menenti, M. C.; Jia, L.
2007-12-01
Climate variability implies variable water availability and drought hazards in many parts of the world. Vegetation species, especially taking into account significant biodiversity, react very differently to water scarcity. The amount of biomass produced per unit volume of water transpired changes significantly across and within species, according to genetic characteristics. The latter represent the most significant resource towards adaptation of agriculture to climate change. On the other hand it remains difficult both to assess Water Use Efficiency (WUE) at larger spatial scales and to model WUE in a way sufficiently simple to allow inclusion in the large area and global climate models used to assess impacts and evaluate adaptation options. WUE is a ratio of extensive quantities and provides directly an upscaling constraint for all variables involved in related parameterizations. The first challenge is to observe WUE at any spatial scale larger than a single plant. At this scale biomass can be determined by direct sampling and transpiration can be measured to a satisfactory level of accuracy with sap-flow devices. At larger spatial scales no direct experimental method is available and heterogeneity makes attribution of estimated total water flux to specific vegetation types within the area observed rather difficult. Use of radiometric data collected from aircrafts and satellites is a practical approach to estimate and map both water flux and biomass, thus leading to WUE. When using satellite data this approach makes frequent observations and monitoring in time possible. This review presentation summarizes current approaches and trends in the estimation of vegetation-atmosphere water exchange and of biomass. Analysis of time series of indicators of vegetation response to water availability in terms of both ET and biomass will be presented on the basis of case-studies in Africa, South America, Europe and China.
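The remark that WUE is a ratio of extensive quantities has a concrete upscaling consequence: the WUE of an area is the ratio of totals, not the mean of per-patch WUE values. A minimal sketch with hypothetical patch data:

```python
def wue(biomass_kg, water_m3):
    # Water use efficiency: biomass produced per unit water transpired
    return biomass_kg / water_m3

def area_wue(patches):
    # Because biomass and transpired water are both extensive,
    # the area-level WUE is the ratio of their totals
    total_biomass = sum(b for b, w in patches)
    total_water = sum(w for b, w in patches)
    return total_biomass / total_water

# Two hypothetical vegetation patches: (biomass kg, transpired water m3)
patches = [(300.0, 50.0), (100.0, 150.0)]
upscaled = area_wue(patches)
naive = (wue(300.0, 50.0) + wue(100.0, 150.0)) / 2  # NOT the area WUE
```

The two numbers differ whenever patches transpire unequal amounts of water, which is exactly the heterogeneity problem the text describes for larger spatial scales.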
di Sarra, Alcide; Fuà, Daniele; Meloni, Daniela
2013-04-01
This study is based on measurements made at the ENEA Station for Climate Observations (35.52° N, 12.63° E, 50 m asl) on the island of Lampedusa, in the southern part of the Central Mediterranean. A quasi-periodic oscillation of aerosol optical depth, column water vapour, shortwave (SW) and photosynthetically active radiation (PAR) is observed to occur during the morning of 7 September 2005. The quasi-periodic wave is present from about 6 to 10 UT, with solar zenith angles (SZA) varying between 77.5° and 37.2°. In this period the aerosol optical depth at 500 nm, τ, varies between 0.29 and 0.41; the column water vapour, cwv, varies between 2.4 and 2.8 cm. The oscillations of τ and cwv are in phase, while the modulation of the downward surface irradiances is in opposition of phase with respect to τ and cwv. The period of the oscillation is about 13 min. The oscillation is attributed to the propagation of a gravity wave which modulates the structure of the planetary boundary layer. The measured aerosol optical properties are typical of cases dominated by Saharan dust, with the Ångström exponent between 0.5 and 0.6. The backtrajectory analysis for that day shows that air masses passed over Northern Libya (trajectories arriving below 2000 m) and over Tunisia and Northern Algeria (trajectories arriving above 2000 m), carrying Saharan dust particles to Lampedusa. The combined modulation of downward irradiance, water vapour column, and aerosol optical depth is used to estimate the aerosol effect on the irradiance. From the irradiance-optical depth relation, the aerosol surface direct forcing efficiency (FE) is derived, under the assumption that during the measurement interval the aerosol microphysical properties do not appreciably change. As a first step, all SW irradiances are reported to the same cwv content (2.6 cm) by using radiative transfer model calculations. Reference curves describing the downward SW and PAR irradiances are constructed by using measurements obtained
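The forcing efficiency extracted from the irradiance-optical depth relation is, in essence, the slope of a linear fit. A minimal ordinary-least-squares sketch on synthetic data (the values below are invented for illustration, not the Lampedusa measurements):

```python
def forcing_efficiency(taus, irradiances_wm2):
    # FE estimated as the OLS slope of downward irradiance vs. aerosol
    # optical depth, assuming other factors (cwv, SZA) are held fixed
    n = len(taus)
    mean_t = sum(taus) / n
    mean_f = sum(irradiances_wm2) / n
    cov = sum((t - mean_t) * (f - mean_f)
              for t, f in zip(taus, irradiances_wm2))
    var = sum((t - mean_t) ** 2 for t in taus)
    return cov / var

# Synthetic data: irradiance falling by 80 W/m2 per unit optical depth,
# over the tau range reported in the abstract (0.29-0.41)
taus = [0.29, 0.33, 0.37, 0.41]
irr = [500.0 - 80.0 * t for t in taus]
fe = forcing_efficiency(taus, irr)
```

In practice the irradiances must first be normalized to a common water vapour content (as the abstract does with radiative transfer calculations), since cwv oscillates in phase with τ and would otherwise bias the slope.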
International Nuclear Information System (INIS)
China's annual crude steel production in 2010 was 638.7 Mt, accounting for nearly half of the world's annual crude steel production in the same year. Around 461 TWh of electricity and 14,872 PJ of fuel were consumed to produce this quantity of steel. We identified and analyzed 23 energy efficiency technologies and measures applicable to the processes in China's iron and steel industry. Using a bottom-up electricity CSC (Conservation Supply Curve) model, the cumulative cost-effective electricity savings potential for the Chinese iron and steel industry for 2010–2030 is estimated to be 251 TWh, and the total technical electricity saving potential is 416 TWh. The CO2 emissions reduction associated with the cost-effective electricity savings is 139 Mt CO2, and the CO2 emissions reduction associated with the technical electricity saving potential is 237 Mt CO2. The FCSC (Fuel CSC) model for the Chinese iron and steel industry shows a cumulative cost-effective fuel savings potential of 11,999 PJ, and the total technical fuel saving potential is 12,139 PJ. The CO2 emissions reductions associated with the cost-effective and technical fuel savings are 1191 Mt CO2 and 1205 Mt CO2, respectively. In addition, a sensitivity analysis with respect to the discount rate used is conducted. - Highlights: ► Estimation of energy saving potential in the entire Chinese steel industry. ► Development of the bottom-up technology-rich Conservation Supply Curve models. ► Discussion of different approaches for developing Conservation Supply Curves. ► Primary energy saving over 20 years equal to 72% of primary energy of Latin America
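The conservation supply curve logic separating "cost-effective" from "technical" potential can be sketched as follows. The measure names, costs and savings below are hypothetical placeholders, not the study's dataset of 23 measures:

```python
def conservation_supply_curve(measures, energy_price):
    # Each measure is (name, cost of conserved energy in $/kWh saved,
    # savings in TWh). Measures are ranked by cost of conserved energy;
    # those cheaper than the prevailing energy price are cost-effective.
    ranked = sorted(measures, key=lambda m: m[1])
    cost_effective = sum(s for _, cce, s in ranked if cce <= energy_price)
    technical = sum(s for _, _, s in ranked)
    return cost_effective, technical

# Hypothetical measures (names only suggestive of steel-sector options)
measures = [
    ("coke dry quenching", 0.02, 60),
    ("waste heat recovery", 0.04, 120),
    ("advanced process control", 0.09, 80),
]
ce, tech = conservation_supply_curve(measures, energy_price=0.05)
```

The sensitivity analysis mentioned in the abstract enters through the cost of conserved energy, which depends on the discount rate used to annualize each measure's capital cost.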
International Nuclear Information System (INIS)
The relative biological effectiveness (RBE) of low energy neutrons for the induction of various abnormalities in the Tradescantia stamen hair mutation (Trad-SH) assay was studied using two clones (T-4430 and T-02), heterozygous for flower color. The dose response relationship for gene mutations induced in somatic cells of Trad-SH was investigated after irradiation with a mixed neutron beam of the Brookhaven Medical Research Reactor (BMRR), currently used in a clinical trial of boron neutron capture therapy (BNCT) for glioblastoma. To establish the RBE of the BMRR beam in the induction of various biological end-points in Tradescantia, irradiation with various doses of γ-rays was also performed. After irradiation, all plants were cultivated for several days at Brookhaven National Laboratory (BNL), then transported to Poland for screening of the biological end-points. Due to the post-exposure treatment, all plants showed high levels of lethal events and alteration of the cell cycle. Plants of clone 4430 were more reactive to the post-treatment conditions, resulting in decreased blooming efficiency that affected the statistics. Slope coefficients estimated from the dose response curves for gene mutation frequencies allowed the evaluation of ranges for the maximal RBE values of the applied beam vs. γ-rays as 6.0 and 5.4 for the cells of T-02 and T-4430, respectively. The estimated fractions of dose from neutrons and the corresponding biological effects for clones T-02 and T-4430 allowed evaluation of the RBE values for the neutron component of the beam as 32.3 and 45.4, respectively. (author)
Magalov, Zaur; Shitzer, Avraham; Degani, David
2016-10-01
This study presents an efficient, fast and accurate method for estimating the two-dimensional temperature distributions around multiple cryo-surgical probes. The identical probes are inserted into the same depth and are operated simultaneously and uniformly. The first step in this method involves numerical derivation of the temporal performance data of a single probe, embedded in a semi-infinite, tissue-like medium. The results of this derivation are approximated by algebraic expressions that form the basis for computing the temperature distributions of multiple embedded probes by combining the data of a single probe. Comparison of isothermal contours derived by this method to those computed numerically for a variety of geometrical cases, up to 15 inserted probes and 2-10 min times of operation, yielded excellent results. Since this technique obviates the solution of the differential equations of multiple probes, the computational time required for a particular case is several orders of magnitude shorter than that needed for obtaining the full numerical solution. Blood perfusion and metabolic heat generation rates are demonstrated to inhibit the advancement of isothermal fronts. Application of this method will significantly shorten computational times without compromising the accuracy of the results. It may also facilitate expeditious consideration of the advantages of different modes of operation and the number of inserted probes at the early design stage. PMID:26963943
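The method's core idea, combining single-probe data to obtain multi-probe fields, relies on the linearity of the heat-conduction problem. The sketch below uses a textbook continuous point-sink solution as a stand-in for the paper's fitted algebraic single-probe expressions (probe strength, conductivity and diffusivity values are assumed, and perfusion/metabolic terms are omitted):

```python
import math

def single_probe_dT(r_m, t_s, q_w=-5.0, k=0.5, alpha=1.4e-7):
    # Continuous point-sink solution of the linear heat equation:
    # dT(r, t) = q / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))
    # q_w < 0 is the heat extraction rate [W], k thermal conductivity
    # [W/(m K)], alpha thermal diffusivity [m2/s] of the tissue
    return (q_w / (4.0 * math.pi * k * r_m)
            * math.erfc(r_m / (2.0 * math.sqrt(alpha * t_s))))

def multi_probe_dT(point, probes, t_s):
    # Superposition: in a linear medium, the temperature changes from
    # identical, simultaneously operated probes simply add
    x, y = point
    return sum(single_probe_dT(math.hypot(x - px, y - py), t_s)
               for px, py in probes)

# Two probes 20 mm apart, field evaluated midway after 5 min of operation
probes = [(-0.01, 0.0), (0.01, 0.0)]
dT_mid = multi_probe_dT((0.0, 0.0), probes, 300.0)
```

Because only precomputed single-probe data are combined (no multi-probe PDE solve), the evaluation cost grows linearly with the number of probes, which is the source of the speed-up the abstract reports.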
Seog-Chan Oh; Alfred J. Hildreth
2014-01-01
The car manufacturing industry, one of the largest energy consuming industries, has been making a considerable effort to improve its energy intensity by implementing energy efficiency programs, in many cases supported by government research or financial programs. While many car manufacturers claim that they have made substantial progress in energy efficiency improvement over the past years through their energy efficiency programs, the objective measurement of energy efficiency improvement has...
Asymptotic behavior of the total length of external branches for Beta-coalescents
Dhersin, Jean-Stephane
2012-01-01
We consider a $\Lambda$-coalescent and study the asymptotic behavior of the total length $L^{(n)}_{ext}$ of the external branches of the associated $n$-coalescent. For the Kingman coalescent, i.e. $\Lambda=\delta_0$, the result is well known and is useful, together with the total length $L^{(n)}$, for Fu and Li's test of neutrality of mutations under the infinite-sites model assumption. For a large family of measures $\Lambda$, including Beta$(2-\alpha,\alpha)$ with $0<\alpha<1$, Möhle has proved asymptotics of $L^{(n)}_{ext}$. Here we consider the case where the measure $\Lambda$ is Beta$(2-\alpha,\alpha)$, with $1<\alpha<2$. We prove that $n^{\alpha-2}L^{(n)}_{ext}$ converges in $L^2$ to $\alpha(\alpha-1)\Gamma(\alpha)$. As a consequence, we get that $L^{(n)}_{ext}/L^{(n)}$ converges in probability to $2-\alpha$. To prove the asymptotics of $L^{(n)}_{ext}$, we use a recursive construction of the $n$-coalescent obtained by adding individuals one by one. Asymptotics of the distributi...
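The two convergence results stated in the abstract can be written compactly (Beta$(2-\alpha,\alpha)$ case, $1<\alpha<2$):

```latex
n^{\alpha-2}\,L^{(n)}_{\mathrm{ext}}
  \;\xrightarrow{\;L^2\;}\; \alpha(\alpha-1)\Gamma(\alpha),
\qquad
\frac{L^{(n)}_{\mathrm{ext}}}{L^{(n)}}
  \;\xrightarrow{\;\mathbb{P}\;}\; 2-\alpha .
```

Taken together, the two statements are consistent with the total length scaling as $n^{\alpha-2}L^{(n)} \to \alpha(\alpha-1)\Gamma(\alpha)/(2-\alpha)$, so the external branches carry an asymptotic fraction $2-\alpha$ of the total length.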
Dubrovskaya, Ekaterina; Turkovskaya, Olga
2010-05-01
Estimation of the efficiency of hydrocarbon mineralization in soil by measuring CO2 emission and variations in the isotope composition of carbon dioxide. E. Dubrovskaya1, O. Turkovskaya1, A. Tiunov2, N. Pozdnyakova1, A. Muratova1; 1 - Institute of Biochemistry and Physiology of Plants and Microorganisms, RAS, Saratov; 2 - A.N. Severtsov Institute of Ecology and Evolution, RAS, Moscow, Russian Federation. Hydrocarbon mineralization in soil undergoing phytoremediation was investigated in a laboratory experiment by estimating the variation in the 13C/12C ratio of the respired CO2. Hexadecane (HD) was used as a model hydrocarbon pollutant. The polluted soil was planted with winter rye (Secale cereale) inoculated with Azospirillum brasilense strain SR80, which combines the abilities to promote plant growth and to degrade oil hydrocarbons. Each vegetated treatment was accompanied by a corresponding nonvegetated one, and uncontaminated treatments were used as controls. The emission of carbon dioxide, its isotopic composition, and the residual concentration of HD in the soil were examined after two and four weeks. At the beginning of the experiment, the CO2-emission level was higher in the uncontaminated than in the contaminated soil. After two weeks, the quantity of emitted carbon dioxide had decreased roughly threefold and did not change significantly thereafter in any of the uncontaminated treatments. The presence of HD in the soil initially increased CO2 emission, but later the respiration was reduced. During the first two weeks, nonvegetated soil had the highest CO2-emission level. Subsequently, the maximum increase in respiration was recorded in the vegetated contaminated treatments. The isotope composition of plant material determines the isotope composition of soil. The soil used in our experiment had an isotopic signature typical of soils formed by C3 plants (δ13C = -22.4‰). Generally, there was no significant fractionation of the carbon isotopes of the substrates metabolized by the
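The δ13C notation used above is a simple ratio-of-ratios calculation; a minimal sketch. The VPDB standard ratio below is the commonly cited value, and the sample ratio is chosen purely to reproduce the -22.4‰ soil signature mentioned in the text:

```python
R_VPDB = 0.0111802  # 13C/12C ratio of the VPDB standard (commonly cited value)

def delta13c(r_sample, r_standard=R_VPDB):
    """delta-13C in per mil: relative deviation of the sample's 13C/12C
    ratio from the standard's ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A ratio 2.24% below the standard reproduces the quoted C3-soil signature:
r = R_VPDB * (1.0 - 0.0224)
d = delta13c(r)  # -22.4 per mil
```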
Barrash, W.; Cardiff, M. A.; Kitanidis, P. K.
2012-12-01
The distribution of hydraulic conductivity (K) is a major control on groundwater flow and contaminant transport. Our limited ability to determine 3D heterogeneous distributions of K is a major reason for increased costs and uncertainties associated with virtually all aspects of groundwater contamination management (e.g., site investigations, risk assessments, remediation method selection/design/operation, monitoring system design/operation). Hydraulic tomography (HT) is an emerging method for directly estimating the spatially variable distribution of K - in a similar fashion to medical or geophysical imaging. Here we present results from 3D transient field-scale experiments (3DTHT) which capture the heterogeneous K distribution in a permeable, moderately heterogeneous, coarse fluvial unconfined aquifer at the Boise Hydrogeophysical Research Site (BHRS). The results are verified against high-resolution K profiles from multi-level slug tests at BHRS wells. The 3DTHT field system for well instrumentation and data acquisition/feedback is fully modular and portable, and the in-well packer-and-port system is easily assembled and disassembled without expensive support equipment or need for gas pressurization. Tests are run for 15-20 min and the aquifer is allowed to recover while the pumping equipment is repositioned between tests. The tomographic modeling software developed uses as input observations of temporal drawdown behavior from each of numerous zones isolated in numerous observation wells during a series of pumping tests conducted from numerous isolated intervals in one or more pumping wells. The software solves for distributed K (as well as storage parameters Ss and Sy, if desired) and estimates parameter uncertainties using: a transient 3D unconfined forward model in MODFLOW, the adjoint state method for calculating sensitivities (Clemo 2007), and the quasi-linear geostatistical inverse method (Kitanidis 1995) for the inversion. We solve for K at >100,000 sub-m3
Sannigrahi, Srikanta; Sen, Somnath; Paul, Saikat
2016-04-01
Net Primary Production (NPP) of the mangrove ecosystem and its capacity to sequester carbon from the atmosphere may be used to quantify regulatory ecosystem services. Three major groups of parameters were set up: BioClimatic Parameters (BCP): Photosynthetically Active Radiation (PAR), Absorbed PAR (APAR), Fraction of PAR (FPAR), Photochemical Reflectance Index (PRI), and Light Use Efficiency (LUE); BioPhysical Parameters (BPP): Normalized Difference Vegetation Index (NDVI), scaled NDVI, Enhanced Vegetation Index (EVI), scaled EVI, Optimised and Modified Soil Adjusted Vegetation Index (OSAVI, MSAVI), and Leaf Area Index (LAI); and Environmental Limiting Parameters (ELP): Temperature Stress (TS), Land Surface Water Index (LSWI), Normalized Soil Water Index (NSWI), Water Stress Scalar (WS), inversed WS (iWS), Land Surface Temperature (LST), scaled LST, Vapor Pressure Deficit (VPD), scaled VPD, and Soil Water Deficit Index (SWDI). Several LUE models, namely the Carnegie Ames Stanford Approach (CASA), Eddy Covariance - LUE (EC-LUE), Global Production Efficiency Model (GloPEM), Vegetation Photosynthesis Model (VPM), MOD NPP model, Temperature and Greenness Model (TG), Greenness and Radiation model (GR) and MOD17, were adopted in this study to assess the spatiotemporal nature of carbon fluxes. Above- and Below-Ground Biomass (AGB and BGB) were calculated using field-based estimation of OSAVI and NDVI. A microclimatic zonation was set up to assess the impact of coastal climate on environmental limiting factors. MODerate Resolution Imaging Spectroradiometer (MODIS) based yearly Gross Primary Production (GPP) and the NPP product MOD17 were also tested against the LUE-based results with standard model-validation statistics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Bias, Coefficient of Variation (CV) and Coefficient of Determination (R2). The performance of CASA NPP was tested against ground-based NPP with R2 = 0.89, RMSE = 3.28, P = 0.01. Among all the adopted models, EC
Energy Technology Data Exchange (ETDEWEB)
Gonzales, John
2015-04-02
Presentation by Senior Engineer John Gonzales on Evaluating Investments in Natural Gas Vehicles and Infrastructure for Your Fleet using the Vehicle Infrastructure Cash-flow Estimation (VICE) 2.0 model.
National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses over sampled echo components at a rate...
Monson, D. J.
1978-01-01
Based on expected advances in technology, the maximum system efficiency and minimum specific mass have been calculated for closed-cycle CO and CO2 electric-discharge lasers (EDL's) and a direct solar-pumped laser in space. The efficiency calculations take into account losses from excitation gas heating, ducting frictional and turning losses, and the compressor efficiency. The mass calculations include the power source, radiator, compressor, fluids, ducting, laser channel, optics, and heat exchanger for all of the systems; and in addition the power conditioner for the EDL's and a focusing mirror for the solar-pumped laser. The results show the major component masses in each system, show which is the lightest system, and provide the necessary criteria for solar-pumped lasers to be lighter than the EDL's. Finally, the masses are compared with results from other studies for a closed-cycle CO2 gasdynamic laser (GDL) and the proposed microwave satellite solar power station (SSPS).
Lise Tole; Gary Koop
2011-01-01
This paper uses data on the world's copper mining industry to measure the impact on efficiency of the adoption of the ISO 14001 environmental standard. Anecdotal and case study literature suggests that firms are motivated to adopt this standard so as to achieve greater efficiency through changes in operating procedures and processes. Using plant-level panel data from 1992-2007 on most of the world's industrial copper mines, the study uses stochastic frontier methods to investigate the eff...
Latypov, A. F.
2009-03-01
The fuel economy of an aerospace plane was estimated along the boost trajectory with energy supplied to the free stream. Initial and final flight velocities were given. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption was compared along optimal trajectories. The calculations were done for a combined power plant consisting of a ramjet and a liquid-propellant engine. In the first part of the paper, an exergy model is constructed for estimating the ramjet thrust and specific impulse. To estimate the aerodynamic drag of the aircraft, a quadratic dependence on aerodynamic lift is used. The energy for flow heating is obtained at the expense of an equivalent decrease in the exergy of the combustion products. Dependencies are obtained for the increase in the range coefficient of cruise flight at different Mach numbers. In the second part of the paper, a mathematical model is presented for the boost part of the flight trajectory of the flying vehicle, together with computational results on reducing the fuel expenses along the boost trajectory for a given value of the energy supplied in front of the aircraft.
R. Mechler
2016-01-01
There is a lot of rhetoric suggesting that disaster risk reduction (DRR) pays, yet surprisingly little in the way of hard facts. This review paper examines the evidence regarding the economic efficiency of DRR based on cost-benefit analysis (CBA). Specifically, it addresses the following questions: What can be said about current and best practice regarding CBA for DRR, including limitations and alternatives? And what, if anything, can be said in terms of quantitative insight for informing policy and practice? The revi...
International Nuclear Information System (INIS)
Populations of salmonid smolts migrating through the hydropower system on the Columbia River incur some rate of mortality at each dam. To set priorities on options to minimize losses and provide safe passage of the smolts at dams, estimates of smolt survival at each dam are necessary. Two methods have been developed to obtain these survival estimates: the direct and the indirect method. With the indirect method, a test group of fish is released upstream and a-control group is released downstream from the area of interest. With the direct method, a single release of fish above the area of interest is used, with subsequent recovery below the area of interest. In 1988, the National Marine Fisheries Service (NMFS) began a 2-year study at McNary Dam to address possible sources of variation associated with the direct method of obtaining survival estimates. Five study objectives were established to determine whether (1) fish from the Columbia and Snake Rivers mixed as they migrated to McNary Dam (release-location tests); (2) collection rates for Columbia and Snake River stocks were the same (river-of-origin tests); (3) test-group release timing influenced recovery rates (time-of-release tests); (4) a collection-rate bias existed from use of test fish previously guided and collected at the recovery site (tests of previously guided fish); and (5) recovery rates obtained with PIT-tagged fish were comparable to those previously obtained with freeze-branded fish (PIT-tag vs. freeze-brand technology)
International Nuclear Information System (INIS)
The purpose of this study was to examine the individual susceptibility to UV-C-induced DNA damage in lymphocytes of Greek workers occupationally exposed to pesticides and of a reference group with no reported occupational exposure. We also analyzed whether there are any differences in cellular repair capacity between the two groups. Lymphocytes were isolated from fresh blood samples collected in Greece from 50 persons recognized as non-exposed to pesticides and from 50 farmers at the end of the spraying season. The average age in the pesticide-exposed and reference groups was 42.08 and 42.19 years, respectively. Frozen lymphocytes were transported on dry ice to the DREB laboratory for DNA damage analysis. DNA damage was measured using the single-cell gel electrophoresis method (SCGE technique). Our results show that there was no statistically significant difference in the level of DNA damage detected in defrosted lymphocytes between the exposed and non-exposed groups. The photoproduct excision efficiency after exposure to UV-C (6 J/m2) and the difference in repair capacity after incubation in the presence and absence of PHA were also studied. There were no statistically significant differences detected directly after UV irradiation between the two investigated groups (p > 0.1). However, for the pesticide-exposed group the ratio of DNA damage measured immediately after exposure to that measured two hours later was higher (32.19) than in the reference group (28.60). This may suggest that in the exposed group the photoproduct excision efficiency was higher or that the rejoining rate of the breaks was lower. The differences in repair efficiency observed in lymphocytes from the groups exposed and non-exposed to pesticides (with or without stimulation to division) were also statistically insignificant (for the Tail Length, Tail DNA and Tail Moment parameters, p > 0.1). Statistically significant differences in DNA damage repair capacities were observed (for all analyzed parameters) between lymphocytes
Herrero, Rebeca; Victoria, Marta; Domínguez, César; Askins, Stephen; Antón, Ignacio; Sala, Gabriel
2015-09-01
This paper presents the mechanisms of efficiency loss associated with non-uniformity of the irradiance over multi-junction solar cells, together with the different measurement techniques used to investigate them. To show the capabilities of the presented techniques, three different concentrators (each consisting of an acrylic Fresnel lens, a different SOE and a lattice-matched multi-junction cell) are evaluated. By employing these techniques, it is possible to answer some critical questions that arise when designing concentrators, such as what degree of non-uniformity the cell can withstand, how critical the influence of series resistance is, and what kind of non-uniformity (spatial or spectral) causes more losses.
R. Kelley Pace
1998-01-01
This article provides a matrix representation of the adjustment grid estimator. From this representation, one can invoke the Gauss-Markov theorem to examine the efficiency of ordinary least squares (OLS) and the grid estimator that uses OLS estimates of the adjustments (the "plug-in" grid method). In addition, this matrix representation suggests a generalized least squares version of the grid method, labeled herein as the total grid estimator. Based on the empirical experiments, the total grid ...
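The OLS-versus-GLS contrast behind the plug-in and total grid estimators can be illustrated numerically with numpy. The AR(1)-style error covariance, sample size and coefficients below are assumptions for the sketch; real grid methods must estimate Omega rather than know it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta = np.array([2.0, 0.5])                            # true coefficients (assumed)

# AR(1)-style error covariance (assumed) mimicking correlated adjustment errors:
rho = 0.7
Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(Omega)
y = X @ beta + L @ rng.normal(size=n)                  # correlated errors

# OLS (the "plug-in" flavour) vs GLS (the "total" flavour):
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
Oi = np.linalg.inv(Omega)
b_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)
```

Both estimators are unbiased here; the Gauss-Markov argument says GLS is the efficient one when errors are correlated, which is exactly the sense in which the total grid estimator improves on the plug-in version.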
Directory of Open Access Journals (Sweden)
T.J. Akingbade
2014-09-01
Full Text Available This research work compares the one-stage sampling technique (Simple Random Sampling) and the two-stage sampling technique for estimating the population total of Nigerians using the 2006 census result of Nigeria. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (Simple Random Sampling) is more efficient than the two-stage sampling technique and is hence recommended.
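The expansion estimator used in the one-stage (SRS) design can be sketched and checked for unbiasedness by Monte Carlo; the 36 "state" populations below are made up, not the 2006 census figures:

```python
import random

random.seed(1)
# Hypothetical state populations (the real study used the 36 Nigerian
# states from the 2006 census; these numbers are invented):
states = [random.randint(1, 9) * 10**6 for _ in range(36)]
true_total = sum(states)

def srs_estimate(n=20):
    """Expansion estimator of the population total from an SRS of n states:
    scale the sample total by N/n."""
    sample = random.sample(states, n)
    return len(states) / n * sum(sample)

# Averaging many replicates shows the estimator is (approximately) unbiased;
# a two-stage design would add a second expansion within each sampled state,
# which generally adds a second stage of variance.
reps = [srs_estimate() for _ in range(2000)]
mean_est = sum(reps) / len(reps)
```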
Energy Technology Data Exchange (ETDEWEB)
Takasu, Miyuki; Tani, Chihiro; Sakoda, Yasuko; Ishikawa, Miho; Tanitame, Keizo; Date, Shuji; Akiyama, Yuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Graduate School of Biomedical Sciences, Hiroshimashi (Japan); Sakai, Akira [Hiroshima University, Department of Hematology and Oncology, Research Institute for Radiation Biology and Medicine, Hiroshimashi (Japan); Asaoku, Hideki [Hiroshima Red Cross Hospital and Atomic-bomb Survivors Hospital, Department of Hematology, Hiroshimashi (Japan); Kajima, Toshio [Kajima Clinic, Hiroshimaken (Japan)
2012-05-15
To evaluate the effectiveness of the iterative decomposition of water and fat with echo asymmetric and least-squares estimation (IDEAL) MRI to quantify tumour infiltration into the lumbar vertebrae in myeloma patients without visible focal lesions. The lumbar spine was examined with 3 T MRI in 24 patients with multiple myeloma and in 26 controls. The fat-signal fraction was calculated as the mean value from three vertebral bodies. A post hoc test was used to compare the fat-signal fraction in controls and patients with monoclonal gammopathy of undetermined significance (MGUS), asymptomatic myeloma or symptomatic myeloma. Differences were considered significant at P < 0.05. The fat-signal fraction and β2-microglobulin-to-albumin ratio were entered into the discriminant analysis. Fat-signal fractions were significantly lower in patients with symptomatic myelomas (43.9 ± 19.7%, P < 0.01) than in the other three groups. Discriminant analysis showed that 22 of the 24 patients (92%) were correctly classified into symptomatic or non-symptomatic myeloma groups. Fat quantification using the IDEAL sequence in MRI was significantly different when comparing patients with symptomatic myeloma and those with asymptomatic myeloma. The fat-signal fraction and β2-microglobulin-to-albumin ratio facilitated discrimination of symptomatic myeloma from non-symptomatic myeloma in patients without focal bone lesions. Key point: a new magnetic resonance technique (IDEAL) offers new insights in multiple myeloma. (orig.)
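The fat-signal fraction reported above is the fat share of the combined water and fat signals from the IDEAL decomposition; a minimal sketch with illustrative ROI intensities (clinical pipelines add noise-bias and relaxation corrections not shown here):

```python
import numpy as np

def fat_signal_fraction(fat, water):
    """Fat-signal fraction in percent from IDEAL fat and water images:
    F = fat / (fat + water) * 100."""
    fat = np.asarray(fat, dtype=float)
    water = np.asarray(water, dtype=float)
    return 100.0 * fat / (fat + water)

# Mean over three vertebral-body ROIs, as in the study design
# (signal intensities below are illustrative, not patient data):
rois_fat = np.array([440.0, 410.0, 470.0])
rois_water = np.array([560.0, 590.0, 530.0])
mean_ff = fat_signal_fraction(rois_fat, rois_water).mean()  # 44.0 %
```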
Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew
2009-01-01
Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of nonphotosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR_canopy) has been widely used in satellite-based Production Efficiency Models to estimate GPP (as the product FAPAR_canopy x PAR x LUE_canopy, where LUE_canopy is the light use efficiency at canopy level). However, only the PAR absorbed by chlorophyll (the product FAPAR_chl x PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR_chl in estimating GPP (as the product FAPAR_chl x PAR x LUE_chl) are more likely to be consistent with plant photosynthesis processes.
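The two GPP formulations contrasted above differ only in which FAPAR and LUE enter the product; a toy calculation with assumed values (the numbers are chosen to show that a lower chlorophyll-only FAPAR paired with a higher chlorophyll-level LUE can yield the same GPP):

```python
def gpp(par, fapar, lue):
    """GPP as the product FAPAR x PAR x LUE (units follow the inputs,
    e.g. PAR in MJ m-2 d-1 and LUE in g C MJ-1)."""
    return fapar * par * lue

# Illustrative values (assumed): canopy FAPAR includes absorption by
# non-photosynthetic material, so it exceeds the chlorophyll-only FAPAR_chl.
par = 10.0                                    # MJ m-2 d-1
gpp_canopy = gpp(par, fapar=0.80, lue=1.1)    # FAPAR_canopy with LUE_canopy
gpp_chl = gpp(par, fapar=0.55, lue=1.6)       # FAPAR_chl with LUE_chl
```

Both products equal 8.8 g C m-2 d-1 here by construction: the canopy formulation compensates for its larger FAPAR with a smaller effective LUE, which is why the abstract argues the chlorophyll-based partition is closer to the actual photosynthetic process.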
Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee
2016-04-01
In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. PMID:26762649
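The preposterior logic can be sketched for a single normal prior on incremental net benefit (INB). The unit-normal-loss form of EVSI below assumes the decision flips only when the posterior mean changes sign, and all monetary values are illustrative, not the article's:

```python
import math

# Normal-normal preposterior sketch for incremental net benefit (INB).
# Prior on mean INB plus a proposed sample of size n with known
# per-observation variance; all numbers are illustrative assumptions.
prior_mean, prior_var = 500.0, 400.0**2
obs_var = 2000.0**2

def preposterior_sd(n):
    """Std. dev. of the predicted posterior mean of INB after n observations."""
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    return math.sqrt(prior_var - post_var)

def evsi(n):
    """Expected value of sample information via the unit normal loss
    integral: the decision changes only if the posterior mean crosses zero."""
    s = preposterior_sd(n)
    z = abs(prior_mean) / s
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * math.erfc(z / math.sqrt(2.0))   # upper-tail probability
    return s * (pdf - z * cdf)

def engs(n, cost_per_obs=50.0):
    """Expected net gain of sampling: information value minus sampling cost."""
    return evsi(n) - cost_per_obs * n
```

The design question in the abstract (how many observations on the cheap versus the detailed costing process) amounts to maximising an `engs`-like quantity over the mix of sample sizes, with the correlation between the two processes entering the preposterior variance.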
Multi-directional program efficiency
DEFF Research Database (Denmark)
Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth
2016-01-01
approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop farms have the highest program efficiency, but the lowest managerial efficiency, and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency).
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V with a
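The delay-estimation idea (flow inferred from how long a temperature fluctuation takes to travel downstream) can be sketched in software with an LMS-adapted FIR filter, a stand-in for the paper's multiplierless switched-current network, not its actual architecture; the dominant tap index then estimates the transit time:

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_delay = 4000, 7       # samples; delay in sample periods (assumed)
upstream = rng.normal(size=n)              # random temperature fluctuations
downstream = np.roll(upstream, true_delay) # same signal, delayed
downstream[:true_delay] = 0.0
downstream += 0.05 * rng.normal(size=n)    # sensor noise (assumed level)

taps, mu = 16, 0.01           # FIR length and LMS step size (assumed)
w = np.zeros(taps)
for k in range(taps, n):
    x = upstream[k - taps + 1:k + 1][::-1]  # x[i] = upstream[k - i]
    e = downstream[k] - w @ x               # prediction error
    w += mu * e * x                         # LMS weight update

est_delay = int(np.argmax(np.abs(w)))       # dominant tap ~ transit time
# flow ~ sensor spacing / (est_delay * sample period)
```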
Estimation of line efficiency by aggregation
de Koster, M.B.M. (René)
1987-01-01
Presents multi-stage flow lines with intermediate buffers approximated by two-stage lines using repeated aggregation. Discusses characteristics of the aggregation method and problems associated with the analysis and design of production lines.
Virtual Sensors: Efficiently Estimating Missing Spectra
National Aeronautics and Space Administration — Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding...
DEFF Research Database (Denmark)
Lindström, Erik; Ionides, Edward; Frydendall, Jan;
2012-01-01
-Rao efficient. The proposed estimator is easy to implement as it only relies on non-linear filtering. This makes the framework flexible as it is easy to tune the implementation to achieve computational efficiency. This is done by using the approximation of the score function derived from the theory on Iterative...
International Nuclear Information System (INIS)
Whichever way the local distribution company (LDC) tries to convert residential customers to gas or expand their use of it, the process itself has become essential for the natural gas industry. The amount of gas used by each residential customer has been decreasing for 25 years, since the energy crisis of the early 1970s. It is a direct result of better-insulated homes and more-efficient gas appliances, and that trend is continuing. So, LDCs have a choice of either finding new users and uses for gas, or recognizing that their throughput per customer is going to continue declining. The paper discusses strategies that several gas utilities are using to increase the number of gas appliances in customers' homes. These and other strategies keep the gas industry optimistic about the future of the residential market: A.G.A. has projected that by 2010 demand will expand from 1994's 5.1 quadrillion Btu (quads) to 5.7 quads, even with continued improvements in appliance efficiency. That estimate, however, will depend on the industry's utilities and whether they keep converting, proselytizing, persuading and influencing customers to use more natural gas
Re-estimation and determinants of regional technical efficiency in China
Institute of Scientific and Technical Information of China (English)
岳意定; 刘贯春; 贺磊
2013-01-01
Based on a comparative analysis of four classical stochastic frontier models (CSSW, CSSG, BC and KSS), this paper re-estimates regional technical efficiency in China over 1997-2010 and then focuses on the locational determinants of regional technical efficiency. The main findings are: (1) Granger causality tests, Hausman-Wu tests and normality tests of the random error term jointly show that, when regional technical efficiency is measured with a log-form Cobb-Douglas production function, the model itself suffers from endogeneity, so the results that earlier studies obtained with the BC model have low credibility; given its more complete treatment of endogeneity and of the technical inefficiency term, the KSS estimates are considered relatively more reliable. (2) Both nationally and regionally, average technical efficiency fluctuates up and down rather than following the monotonic increase (or decrease) reported in earlier work. Before 2008, average technical efficiency was highest in the eastern region, followed by the central and then the western region; after 2008 the central region caught up with the east, and the gap between the two has been widening. (3) Geographic location, fiscal investment in science and technology, the scale of the high-tech industry, population quality and foreign direct investment are the key factors behind the current significant regional differences in technical efficiency.
Improving efficiency in stereology
DEFF Research Database (Denmark)
Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm;
2013-01-01
study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides compared with traditional methods in a practical application, namely the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned ... proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than the disector on tissue slides. We conclude that both the proportionator and the autodisector on virtual slides may improve efficiency of cell counting in stereology.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
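A minimal sketch of the linear and nonlinear routes for one first-order reaction A → B: a log-linear regression gives a linear estimate of the rate constant, and a few Gauss-Newton steps refine it on the original scale. The rate constant, sampling times and noise level below are assumptions, not the chapter's data:

```python
import numpy as np

rng = np.random.default_rng(3)
k_true = 0.25                     # first-order rate constant, 1/min (assumed)
t = np.linspace(0.0, 20.0, 25)    # sampling times, min (assumed)
# Normalised concentration C = exp(-k t) with small multiplicative noise:
conc = np.exp(-k_true * t) * np.exp(0.02 * rng.normal(size=t.size))

# Linear route: ln C = -k t, so regress ln(conc) on t without an intercept.
k_lin = -np.sum(t * np.log(conc)) / np.sum(t * t)

# Nonlinear route: Gauss-Newton refinement of least squares on C = exp(-k t).
k_hat = k_lin
for _ in range(20):
    m = np.exp(-k_hat * t)        # model prediction
    J = -t * m                    # derivative of the model w.r.t. k
    k_hat += np.sum(J * (conc - m)) / np.sum(J * J)  # Gauss-Newton step
```

In a real multi-reaction system the model prediction comes from a dynamic (ODE) solve rather than a closed form, which is why the chapter couples the estimation with dynamic solution of the underlying model.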
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM) and indirect estimation methods (pedotransfer functions - PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Efficiency in Microfinance Cooperatives
Directory of Open Access Journals (Sweden)
HARTARSKA, Valentina
2012-12-01
Full Text Available In recognition of cooperatives’ contribution to the socio-economic well-being of their participants, the United Nations has declared 2012 as the International Year of Cooperatives. Microfinance cooperatives make up a large part of the microfinance industry. We study the efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ the classical efficiency analysis consisting of estimating a system of equations and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency), as well as dollar value of lending and deposits (sustainability). We find that microfinance cooperatives have increasing returns to scale, which means that the vast majority can lower cost if they become larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients, with a range from 40,000 to 180,000 borrowers.
Shrinkage estimators for covariance matrices.
Daniels, M J; Kass, R E
2001-12-01
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically
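The second shrinkage approach can be illustrated in a few lines. The sketch below is not from the paper: the synthetic data, the diagonal "independence" target, and the fixed shrinkage weight are all assumptions (Daniels and Kass let the data determine the amount of shrinkage). It only shows the stabilizing effect on the eigenvalue spread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small-sample setting: n = 12 observations of a p = 8 dimensional vector.
n, p = 12, 8
X = rng.standard_normal((n, p))

# Unstructured sample covariance: unstable when n is small.
S = np.cov(X, rowvar=False)

# Structured target: the diagonal of S (an "independence" structure).
T = np.diag(np.diag(S))

# Shrink the unstructured estimator toward the structured one.
# The paper chooses the weight from the data; here it is a fixed
# illustrative value.
lam = 0.3
S_shrunk = (1 - lam) * S + lam * T

# Shrinkage pulls the extreme eigenvalues inward, improving conditioning.
eig_raw = np.linalg.eigvalsh(S)
eig_shr = np.linalg.eigvalsh(S_shrunk)
print("condition number, raw:   ", eig_raw[-1] / eig_raw[0])
print("condition number, shrunk:", eig_shr[-1] / eig_shr[0])
```

Shrinking toward the diagonal never worsens the condition number here, since the smallest eigenvalue can only move up and the largest can only move down.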
Energy Technology Data Exchange (ETDEWEB)
Asociacion de Tecnicos y Profesionistas en Aplicacion Energetica, A.C. [Mexico (Mexico)
2002-06-01
In recent years much attention has been paid to polluting gas emissions, especially those that contribute to the greenhouse effect (GHE), because of the damage their concentration causes to the atmosphere, particularly as a cause of the increase in the overall temperature of the planet, a phenomenon known as global climate change. There are many activities that make it possible to lessen or avoid GHE gas emissions, and around the main ones the so-called Energy Efficiency and Renewable Energy (EE/RE) projects have been structured. In order to carry out a project within the framework of the Clean Development Mechanism (MDL, by its Spanish acronym), it is necessary to evaluate with quality, precision and transparency the amount of GHE gas emissions that are reduced or suppressed thanks to its application. For that reason, in our country we tried different methodologies aimed at estimating the CO{sub 2} emissions that are attenuated or eliminated by means of the application of EE/RE projects.
Distribution system state estimation
Wang, Haibin
With the development of automation in distribution systems, distribution SCADA and many other automated meters have been installed on distribution systems. Distribution Management Systems (DMS) have also been further developed and become more sophisticated. It is possible and useful to apply state estimation techniques to distribution systems. However, distribution systems have many features that are different from transmission systems, so the state estimation technology used in transmission systems cannot be directly used in distribution systems. This project's goal was to develop a state estimation algorithm suitable for distribution systems. Because of the limited number of real-time measurements in distribution systems, the state estimator cannot acquire enough real-time measurements for convergence, so pseudo-measurements are necessary for a distribution system state estimator. A load estimation procedure is proposed that provides estimates of real-time customer load profiles, which can be treated as pseudo-measurements for the state estimator. The algorithm utilizes a newly installed AMR system to calculate more accurate load estimations. A branch-current-based three-phase state estimation algorithm is developed and tested. This method chooses the magnitude and phase angle of the branch current as the state variables, and thus makes the formulation of the Jacobian matrix less complicated. The algorithm decouples the three phases, which is computationally efficient. Additionally, the algorithm is less sensitive to the line parameters than the node-voltage-based algorithms. The algorithm has been tested on three IEEE radial test feeders, for both accuracy and convergence speed. Due to economic constraints, the number of real-time measurements that can be installed on distribution systems is limited. It is therefore important to decide what kinds of measurement devices to install and where to install them. Some rules of meter placement based
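The role of pseudo-measurements can be sketched with a toy weighted-least-squares step. This is not the thesis's branch-current algorithm, just a minimal linear analogue in which accurate real-time readings and an uncertain load estimate are weighted by their inverse variances; all numbers are hypothetical:

```python
import numpy as np

# Toy linear measurement model z = H x + noise, standing in for the
# linearized step of a state estimator. x is the (unknown) 2-element state;
# measurements mix accurate real-time readings with an inaccurate
# pseudo-measurement (an estimated load profile). Not a real feeder.
x_true = np.array([1.02, 0.97])

H = np.array([
    [1.0, 0.0],   # real-time measurement of state 1
    [0.0, 1.0],   # pseudo-measurement of state 2 (load estimate)
    [1.0, -1.0],  # real-time measurement of a difference
])

# Standard deviations: the pseudo-measurement is far less certain.
sigma = np.array([0.01, 0.10, 0.01])
rng = np.random.default_rng(1)
z = H @ x_true + rng.normal(0.0, sigma)

# Weighted least squares: weight each row by 1/sigma^2,
# x_hat = (H^T W H)^{-1} H^T W z.
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated state:", x_hat)
```

Down-weighting the pseudo-measurement lets it supply the observability the real-time meters lack without letting its large error dominate the estimate.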
International Nuclear Information System (INIS)
This road-map, proposed by the Total Group, aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, as well as energy efficiency in industry and Total's commitment. (A.L.B.)
Directory of Open Access Journals (Sweden)
Francisco Cobos
2007-01-01
Full Text Available OSIRIS, the main optical (360-1000 nm) 1st-generation instrument for GTC, is being integrated. Except for some grisms and filters, all main optical components are finished and being characterized. Complementing laboratory data with semi-empirical estimations, the current OSIRIS efficiency is summarized.
Isotonic inverse estimators for nonparametric deconvolution
van Es, B.; Jongbloed, G.; M. Van Zuijlen
1998-01-01
A new nonparametric estimation procedure is introduced for the distribution function in a class of deconvolution problems, where the convolution density has one discontinuity. The estimator is shown to be consistent and its cube root asymptotic distribution theory is established. Known results on the minimax risk for the estimation problem indicate the estimator to be efficient.
Institute of Scientific and Technical Information of China (English)
成林; 方文松
2015-01-01
Investigating the influence of climate change on the water use efficiency (WUE) of rain-fed winter wheat can offer a scientific reference for agriculture adapting to climate change. Based on yield information and observed soil water data at representative stations, the historical trend of WUE is analyzed. Simulation models for meteorological yield and soil water variation are established, and four different climate change scenarios, output by the regional climate models PRECIS and REGCM 4.0, are combined to estimate the probable trend of WUE for rain-fed wheat in the future years 2021-2050. It is validated that in the baseline scenario years, yields simulated by combining the two regional climate models with the meteorological yield simulation model are close to actual values, so the method for estimating future wheat yield is proved feasible. Data analysis shows that the average yield for the representative stations varies as a cubic curve during the 30 years 1981-2010, growing faster before the year 2000. Water consumption of wheat also increases, with fluctuation. The average WUE values of rain-fed wheat for representative stations in Gansu, Shanxi and Henan are 13.19 kg·mm-1·hm-2, 12.86 kg·mm-1·hm-2 and 11.28 kg·mm-1·hm-2, respectively. The trend of WUE is similar to a quadratic curve, with the maximum value appearing in 2003. Estimation results under the four climate change scenarios show that in 2021-2050 water consumption of winter wheat would increase dramatically, by 6.2% on average over all representative stations and scenarios. Future yields would decrease at some stations and increase at others, with an average variation rate of 1.4%. The value of WUE would decrease by 3.8% on average, and its variability would also decrease. The increase in water consumption would be the main cause of the WUE decrease in the future. From the inter
Attitude Estimation or Quaternion Estimation?
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
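The redundancy discussed above is easy to make concrete: a unit quaternion has four components and one norm constraint, and both q and -q map to the same rotation matrix in SO(3). A minimal sketch using the standard quaternion-to-matrix formula (an illustration, not code from the paper):

```python
import math

def quat_to_rotmat(q):
    """Map a unit quaternion (w, x, y, z) to a 3x3 rotation matrix in SO(3)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

# 90-degree rotation about the z-axis: q = (cos 45°, 0, 0, sin 45°).
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
R = quat_to_rotmat(q)
for row in R:
    print([round(v, 6) for v in row])
```

The map is two-to-one: negating all four components of q yields the same R, which is one face of the redundancy that quaternion estimation must handle.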
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
Two New Relative Efficiencies of the Parameter Estimate in the Growth Curve Model
Institute of Scientific and Technical Information of China (English)
段清堂; 归庆明
2001-01-01
This paper proposes two new definitions of the relative efficiency of parameter estimation in the growth curve model, gives lower bounds for the two new relative efficiencies, and discusses the relations between them.
Estimation of Tobit Type Censored Demand Systems
DEFF Research Database (Denmark)
Barslund, Mikkel Christoffer
Recently a number of authors have suggested estimating censored demand systems as a system of multivariate Tobit equations, employing a Quasi Maximum Likelihood (QML) estimator based on bivariate Tobit models. In this paper I study the efficiency of this QML estimator relative to the...... asymptotically more efficient Simulated ML (SML) estimator in the context of a censored Almost Ideal demand system. Further, a simpler QML estimator based on the sum of univariate Tobit models is introduced. A Monte Carlo simulation comparing the three estimators is performed on three different sample sizes. The...... the use of simple estimators for more general censored systems of equations...
Measuring Residential Energy Efficiency Improvements with DEA
Grösche, Peter
2008-01-01
This paper measures energy efficiency improvements of US single-family homes between 1997 and 2001 using a two-stage procedure. In the first stage, an indicator of energy efficiency is derived by means of Data Envelopment Analysis (DEA), and the analogy between the DEA estimator and traditional measures of energy efficiency is demonstrated. The second stage employs a bootstrapped truncated regression technique to decompose the variation in the obtained efficiency estimates into a climatic com...
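The first-stage DEA indicator can be sketched as a small linear program. The following is a generic input-oriented, constant-returns-to-scale (CCR) envelopment formulation with hypothetical data; it is not the author's actual specification or dataset:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o:
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                     sum_j lam_j y_j >= y_o,  lam >= 0."""
    X = np.asarray(inputs, float)   # shape (n_units, n_inputs)
    Y = np.asarray(outputs, float)  # shape (n_units, n_outputs)
    n = X.shape[0]
    # Decision vector: [theta, lam_1 .. lam_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  X^T lam - theta * x_o <= 0
    A_in = np.c_[-X[o][:, None], X.T]
    b_in = np.zeros(X.shape[1])
    # Output constraints: -Y^T lam <= -y_o
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical homes: one energy input, one "service" output each.
inputs = [[2.0], [4.0], [3.0]]
outputs = [[4.0], [4.0], [3.0]]
print([round(dea_input_efficiency(inputs, outputs, o), 3) for o in range(3)])
```

A score of 1 means the unit lies on the efficient frontier; a score of 0.5 means the same output could, by the frontier's standard, be produced with half the input.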
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with...
Bank Efficiency and Executive Compensation
Timothy King; Jonathan Williams
2013-01-01
We investigate whether handsomely rewarding bank executives realizes superior efficiency by determining if executive remuneration contracts produce incentives that offset potential agency problems and lead to improvements in bank efficiency. We calculate executive Delta and Vega to proxy executives’ risk-taking following changes in their compensation contracts and estimate their relationship with alternative profit efficiency. Our study uses novel instruments to account for the potentially e...
DEFF Research Database (Denmark)
Andersen, Rikke Sand; Vedsted, Peter
2015-01-01
institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare-seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle...... efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that provision of medical care in today......'s primary care settings requires careful balancing of increasing demands of efficiency, greater complexity of biomedical knowledge and consideration for individual patient needs....
Katsuya Takii
2004-01-01
This paper examines a particular aspect of entrepreneurship, namely firms' ability to respond appropriately to unexpected changes in the environment (i.e., their adaptability). An increase in firms' adaptability improves allocative efficiency in a competitive economy, but can reduce it when opportunities are distorted. It is shown that adaptability can aggravate distortions in the presence of political risk. Because efficiency affects the total factor productivity (TFP) of an economy, the mod...
Environmental Efficiency Analysis of China's Vegetable Production
Institute of Scientific and Technical Information of China (English)
TAO ZHANG; BAO-DI XUE
2005-01-01
Objective To analyze and estimate the environmental efficiency of China's vegetable production. Methods The stochastic translog frontier model was used to estimate the technical efficiency of vegetable production. Based on the estimated frontier and technical inefficiency levels, we used the method developed by Reinhard, et al.[1] to estimate the environmental efficiency. Pesticide and chemical fertilizer inputs were treated as environmentally detrimental inputs. Results From the estimated results, the mean environmental efficiency for pesticide input was 69.7%, indicating a great potential for reducing pesticide use in China's vegetable production. In addition, substitution and output elasticities for vegetable farms were estimated to provide farmers with helpful information on how to reallocate input resources and improve efficiency. Conclusion There exists a great potential for reducing pesticide use in China's vegetable production.
Institute of Scientific and Technical Information of China (English)
刘承彬; 耿也; 舒奎; 高真香子
2012-01-01
RSA algorithms play an important role in public key cryptography, and their computational efficiency is directly tied to the efficiency of the modular exponentiation implementation. Based on the RSA decryption algorithm simplified by the Chinese Remainder Theorem (CRT), this paper gives a general decryption formula for RSA with multiple primes, which greatly reduces the number of modular exponentiations and recovers the plaintext simply and quickly. A formula for estimating the efficiency gain is also given, so that the speed-up can be estimated and a basis provided for choosing the most appropriate number of primes for RSA.
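The CRT decryption idea generalizes directly from two to several primes. The sketch below is the standard multi-prime construction with toy parameters; it does not reproduce the paper's general formula or its efficiency-estimation formula, and real RSA primes are hundreds of digits long:

```python
# Multi-prime RSA decryption via the Chinese Remainder Theorem (CRT).
# Toy parameters only -- far too small for any real use.
primes = [5, 11, 17]
n = 5 * 11 * 17                      # modulus n = 935
phi = 4 * 10 * 16                    # Euler phi(n) = 640
e = 3
d = pow(e, -1, phi)                  # private exponent d = e^-1 mod phi

def crt_decrypt(c, primes, d):
    """Decrypt c with one small exponentiation per prime, then CRT-combine."""
    residues = [pow(c % p, d % (p - 1), p) for p in primes]
    m, modulus = 0, 1
    for p, r in zip(primes, residues):
        # Garner-style incremental CRT combination.
        t = ((r - m) * pow(modulus, -1, p)) % p
        m += modulus * t
        modulus *= p
    return m

message = 42
cipher = pow(message, e, n)
print("decrypted:", crt_decrypt(cipher, primes, d))  # recovers 42
```

Each exponentiation works modulo a small prime with a reduced exponent d mod (p-1), which is where the speed-up over a single pow(c, d, n) comes from.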
Economics of appliance efficiency
International Nuclear Information System (INIS)
Several significant developments occurred in 2001 that affect the impact of market transformation programs. This paper presented and applied an econometric approach to the identification and estimation of market models for refrigerators, clothes washers, dishwashers and room air conditioners. The purpose of the paper was to understand the impact of energy conservation policy developments on sales of energy efficient appliances. The paper discussed the approach with particular reference to building a database of sales and drivers of sales using publicly available information; estimation of the determinants of sales using econometric models; and estimation of the individual impacts of prices, gross domestic product (GDP) and energy conservation policies on sales using regression results. Market and policy developments were also presented, such as the 'Change a Light, Save the World' promotion; the California energy crisis; and the Pacific Northwest drought-induced hydropower shortage. It was concluded that an increase in GDP increased the sales of both more efficient and less efficient refrigerators, clothes washers, dishwashers, and room air conditioners. An increase in electricity price increased sales of Energy Star refrigerators, clothes washers, dishwashers, and room air conditioners. 4 refs., 8 tabs.
Czech Academy of Sciences Publication Activity Database
Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr
2012-01-01
Roč. 70, č. 1 (2012), s. 315-323. ISSN 0969-8043 Institutional research plan: CEZ:AV0Z10480505 Keywords : Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.179, year: 2012 http://www.sciencedirect.com/science/article/pii/S0969804311004775
DEFF Research Database (Denmark)
Stoustrup, Jakob; Niemann, H.
2002-01-01
This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems.
Emma María Martínez; Tomas Serafín Cuesta; Javier José Cancela
2012-01-01
The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitabi...
HOQUE, Md. Azharul / SUZUKI, Keiichi / OIKAWA, Takuro
2007-01-01
A simulation study was performed for performance traits on 740 bulls and carcass traits on 1,774 progeny in Japanese Black cattle to compare the efficiency of direct and index selection. Performance traits included average daily gain (ADG), final body weight (BWF), metabolic body weight (MWT), feed intake (FI), feed conversion ratio (FCR) and residual feed intake (RFI). Progeny traits were carcass weight (CWT), rib eye area (REA), rib thickness (RBT), subcutaneous fat thickness (SFT), marblin...
Bland, Dan; Davis, Tom; Griffin, Sandy
1990-01-01
Space transportation avionics technology operational efficiency issues are presented in viewgraph form. Information is given on ascent flight design, autonomous spacecraft control, operations management systems, advanced mission control, telerobotics/telepresence, advanced software integration, advanced test/checkout systems, advanced training systems, and systems monitoring.
Uncertainty Analysis of Cryogenic Turbine Efficiency
Kanoglu, Mehmet
2000-01-01
A procedure for estimating uncertainty in the hydraulic efficiency of cryogenic turbines is presented. A case study is performed based on the test data from a cryogenic turbine testing facility. The effects of uncertainties in the measurements of temperature, pressure, and generator power on the turbine hydraulic efficiency are studied and the uncertainty in turbine efficiency is estimated to be ±0.20%. About 79% of the uncertainty is determined to come from the uncertainty in generator power...
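The kind of uncertainty budget described above can be sketched with first-order (root-sum-square) propagation. The efficiency model and the numbers below are hypothetical stand-ins, not the paper's turbine model or its data:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order uncertainty propagation: combine each input's standard
    uncertainty in quadrature (root-sum-square), with sensitivities df/dx_i
    estimated by central differences."""
    contributions = []
    for i, s in enumerate(sigmas):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        dfdx = (f(*up) - f(*dn)) / (2 * h)
        contributions.append((dfdx * s) ** 2)
    return math.sqrt(sum(contributions))

# Hypothetical efficiency model: measured power over ideal power
# (stand-ins for the generator power and thermodynamic terms).
def efficiency(power, ideal_power):
    return power / ideal_power

values = [180.0, 200.0]   # measured: 180 kW actual vs 200 kW ideal
sigmas = [0.5, 0.8]       # standard uncertainties of each measurement
u_eta = propagate(efficiency, values, sigmas)
print(f"efficiency = {efficiency(*values):.3f} +/- {u_eta:.4f}")
```

Squaring each contribution before summing also shows directly which instrument dominates the budget, which is how a statement like "79% of the uncertainty comes from generator power" is obtained.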
Efficient ICT for efficient smart grids
Smit, Gerard J.M.
2012-01-01
In this extended abstract the need for efficient and reliable ICT is discussed. Efficiency of ICT not only deals with energy-efficient ICT hardware, but also deals with efficient algorithms, efficient design methods, efficient networking infrastructures, etc. Efficient and reliable ICT is a prerequisite for efficient Smart Grids. Unfortunately, efficiency and reliability have not always received the proper attention in the ICT domain in the past.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computation schemes using combinatorial properties of generating functions. We demonstrate our approach with several case studies involving real world movie recommendation data. The results are comparable with state-of-the-art techniques while also providing probabilistic preference estimates outside the scope of traditional recommender systems.
Power Quality Indices Estimation Platform
Directory of Open Access Journals (Sweden)
Eliana I. Arango-Zuluaga
2013-11-01
Full Text Available An interactive platform for estimating the quality indices in single phase electric power systems is presented. It meets the IEEE 1459-2010 standard recommendations. The platform was developed in order to support teaching and research activities in electric power quality. The platform estimates the power quality indices from voltage and current signals using three different algorithms based on the fast Fourier transform (FFT), the wavelet packet transform (WPT) and the least squares method. The results show that the algorithms implemented are efficient for estimating the power quality indices and that the platform can be used according to the objectives established.
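One of the simpler FFT-based indices can be sketched directly. The following computes total harmonic distortion (THD) for a synthetic 50 Hz waveform; IEEE 1459-2010 defines many more indices, and the platform's actual algorithms are not reproduced here:

```python
import numpy as np

# Synthetic 50 Hz voltage with 3rd and 5th harmonics (amplitudes relative
# to the fundamental). A one-second window sampled at 3200 Hz puts every
# harmonic exactly on an FFT bin, so there is no spectral leakage.
fs, f0 = 3200, 50
t = np.arange(fs) / fs
v = (1.00 * np.sin(2 * np.pi * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))

# Single-sided amplitude spectrum from the FFT.
spectrum = np.abs(np.fft.rfft(v)) * 2 / len(v)
fund = spectrum[f0]                 # bin spacing is 1 Hz here
harmonics = spectrum[2 * f0::f0]    # bins at 100 Hz, 150 Hz, ...

# Total harmonic distortion: RMS of the harmonics over the fundamental.
thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(f"THD = {100 * thd:.2f} %")
```

With real measured signals the window rarely contains an integer number of cycles, which is one reason a platform would also offer WPT and least-squares estimators alongside the plain FFT.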
Efficiency of Hospitals in the Czech Republic: Conditional Efficiency Approach
Šťastná, Lenka; Votápková, Jana
2014-01-01
The paper estimates the cost efficiency of 81 general hospitals in the Czech Republic during 2006-2010. We employ the conditional order-m approach, a nonparametric method for efficiency computation that accounts for environmental variables. Effects of environmental variables are assessed using the non-parametric significance test and partial regression plots. We find not-for-profit ownership and the presence of a specialized center in a hospital to be detrimental to hospital performance in the...
Joint DOA and DOD Estimation in Bistatic MIMO Radar without Estimating the Number of Targets
Directory of Open Access Journals (Sweden)
Zaifang Xi
2014-01-01
established without prior knowledge of the signal environment. In this paper, an efficient method for joint DOA and DOD estimation in bistatic MIMO radar without estimating the number of targets is presented. The proposed method computes an estimate of the noise subspace using the power of R (POR) technique. Then the two-dimensional (2D) direction finding problem is decoupled into two successive one-dimensional (1D) angle estimation problems by employing the rank reduction (RARE) estimator.
Energy Technology Data Exchange (ETDEWEB)
Dreger, U.; Lienesch, F.; Engel, U. [PTB-Fachlaboratorium ' Explosionsgeschuetzte Maschinen' (Germany)
2003-06-01
This report gives an overview of the methods currently under discussion and compares them with the procedures used so far. Taking the GUM (Guide to the Expression of Uncertainty in Measurement) into account, the influence of the measurement uncertainties of the determining quantities on the estimated parameters is discussed concretely. The GUM is the internationally applied guide for stating measurement uncertainty, and its importance for quality in measurement engineering is discussed. (GL)
International Nuclear Information System (INIS)
World energy demand is constantly rising. This is a legitimate trend, insofar as access to energy enables enhanced quality of life and sanitation levels for populations. On the other hand, such increased consumption generates effects that may be catastrophic for the future of the planet (climate change, environmental imbalance), should this growth conform to the patterns followed, up to recent times, by most industrialized countries. Reduction of greenhouse gas emissions, development of new energy sources and energy efficiency are seen as the major challenges to be taken up for the world of tomorrow. In France, the National Energy Debate indeed emphasized, in 2003, the requirement to control both demand for, and supply of, energy, through a strategic orientation law for energy. The French position corresponds to a slightly singular situation - and a privileged one, compared to other countries - owing to massive use of nuclear power for electricity generation. This option allows France to be responsible for a mere 2% of worldwide greenhouse gas emissions. Real advances can nonetheless still be achieved as regards improved energy efficiency, particularly in the transportation and residential-tertiary sectors, following the lead, in this respect, shown by industry. These two sectors indeed account for over half of the country's CO2 emissions (26% and 25% respectively). With respect to transportation, the work carried out by CEA on the hydrogen pathway, energy converters, and electricity storage has been covered by the preceding chapters. As regards housing, a topic addressed by one of the papers in this chapter, investigations at CEA concern integration of the various devices enabling value-added use of renewable energies. At the same time, the organization is carrying through its activity in the extensive area of heat exchangers, allowing industry to benefit from improved understanding in the modeling of flows - an activity evidenced by advances in energy efficiency for
Banking Efficiency in European Banking
Michaelidou, Vasiliki
2012-01-01
This research aims to unveil whether the tremendous banking sector reforms within the dynamic nature of the EU economic environment have achieved one of the catalyst EU goals, increase financial institutions’ cost efficiency. Using a single stage stochastic frontier approach, cost efficiency of the enlarged EU is estimated and evaluated for the period 2005-2011. The results suggest that EU pursuits have been successful but that stronger controls are needed.
Golbabaei-Asl, Mona; Knight, Doyle; Anderson, Kellie; Wilkinson, Stephen
2013-01-01
A novel method for determining the thermal efficiency of the SparkJet is proposed. A SparkJet is attached to the end of a pendulum. The motion of the pendulum subsequent to a single spark discharge is measured using a laser displacement sensor. The measured displacement vs time is compared with the predictions of a theoretical perfect gas model to estimate the fraction of the spark discharge energy which results in heating the gas (i.e., increasing the translational-rotational temperature). The results from multiple runs for different capacitances of c = 3, 5, 10, 20, and 40 micro-F show that the thermal efficiency decreases with higher capacitive discharges.
Solar bowl component efficiencies
Energy Technology Data Exchange (ETDEWEB)
O' Hair, E.A.; Green, B.L. (College of Engineering, Texas Tech. Univ., Lubbock, TX (United States))
1992-11-01
Battelle Pacific Northwest Laboratory has published two volumes on the economic evaluation of various proposed configurations and plant sizes for the four solar thermal technologies. These are the latest in a series of publications sponsored by the Department of Energy (DOE) on plant and operational costs and are more complete in that they include calculations of electrical output. These latest Battelle volumes use the 1976 solar data from Barstow, Calif., and by calculating or estimating the energy conversion efficiency of each element in the process from sun to electricity predict the output and cost of electricity from different plant sizes for each of the four technologies. In this paper a comparison is presented of the component efficiencies developed by Battelle and those of the solar bowl at Crosbyton, Tex.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2016-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
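A minimal illustration of why measurement errors matter for regression estimators: classical error in a regressor attenuates the ordinary least squares slope by the reliability ratio var(x)/(var(x)+var(u)). The simulation below uses synthetic numbers and only sketches the phenomenon the monograph treats formally.

```python
import random

# Classical errors-in-variables: we regress y on an error-contaminated
# copy of x and observe the slope shrink from 2.0 toward
# beta * var(x) / (var(x) + var(u)) = 2.0 * 1 / (1 + 1) = 1.0.
random.seed(0)
n, beta = 20000, 2.0
x = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]            # measurement error
y = [beta * xi + random.gauss(0, 0.1) for xi in x]
w = [xi + ui for xi, ui in zip(x, u)]                 # observed regressor

def ols_slope(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = ols_slope(w, y)   # attenuated estimate, close to 1.0 rather than 2.0
```

Correcting such bias, and doing so efficiently, is what the measurement error models in the monograph are for.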
2010-01-01
... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND... § 305.5 Determinations of estimated annual energy consumption, estimated annual operating cost,...
The incredible shrinking covariance estimator
Theiler, James
2012-05-01
Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
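The core of the shrinkage construction can be sketched in a few lines. The LOOC selection of the shrinkage parameter that the paper accelerates is not reproduced here; alpha = 0.3 is an arbitrary placeholder, and the target is the identity scaled by the average variance.

```python
import numpy as np

# Blend the (possibly overfit) sample covariance with an underfit target.
# With n - 1 < p the sample covariance is rank-deficient and singular;
# the shrunken combination is full rank and invertible, as a detector needs.
rng = np.random.default_rng(0)
p, n = 50, 20                        # high dimension, few samples
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # rank at most n - 1 = 19 < p
target = (np.trace(S) / p) * np.eye(p)

def shrink(S, target, alpha):
    """Linear shrinkage: (1 - alpha) * S + alpha * target."""
    return (1.0 - alpha) * S + alpha * target

S_shrunk = shrink(S, target, 0.3)
eigvals = np.linalg.eigvalsh(S_shrunk)   # all strictly positive
```

In practice alpha is chosen by cross-validation; the paper's contribution is making that choice cheap via approximations of the leave-one-out estimate.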
Motor-operator gearbox efficiency
International Nuclear Information System (INIS)
Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, we compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators we tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer
Motor-operated gearbox efficiency
Energy Technology Data Exchange (ETDEWEB)
DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.
1996-12-01
Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.
Demchuk, Pavlo
Today a standard procedure to analyze the impact of environmental factors on the productive efficiency of a decision making unit is a two-stage approach: first one estimates the efficiency, and then uses regression techniques to explain the variation of efficiency between different units. It is argued that this method may produce doubtful results that distort what the data represent. In order to introduce economic intuition and to mitigate the problem of omitted variables, we introduce a matching procedure to be used before the efficiency analysis. We believe that by having comparable decision making units we implicitly control for the environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative of a conditional efficiency frontier. We conduct our study using a Monte Carlo simulation with different model specifications, and despite the reduced sample, which may create some complications in the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with those previously produced by Demchuk and Zelenyuk (2009), who compare efficiencies of Ukrainian regions, and find some differences between the two approaches. The second part deals with an empirical study of electricity generating power plants before and after the market reform in Texas. We compare private, public and municipal power generators using the method introduced in part one. We find that municipal power plants operate mostly inefficiently, while private and public ones are very close in their production patterns. The new method allows us to compare decision making units from different groups, which may have different objective schemes and productive incentives. Despite
Institute of Scientific and Technical Information of China (English)
LIAO Yu-Iin; ZHENG Sheng-xian; RONG Xiang-min; LIU Qiang; FAN Mei-rong
2010-01-01
A pot experiment combined with 15N isotope techniques was conducted to evaluate the effects of varying rates of urea-N fertilizer application on the yields, quality, and nitrogen use efficiency (NUE) of pakchoi cabbage (Brassica chinensis L.) and asparagus lettuce (Lactuca sativa L.). 15N-labelled urea (5.35 15N atom%) was added to pots containing 6.5 kg of soil at rates of 0.14, 0.18, 0.21, 0.25, and 0.29 g N/kg soil, applied in two splits: 60 percent as basal dressing in the mixture and 40 percent as topdressing. The fresh yields of the two vegetable species increased with increasing input of urea-N, but there was a significant quadratic relationship between the dose of urea-N fertilizer application and the fresh yields. When the dosage of urea-N fertilizer reached a certain value, nitrate readily accumulated in the two kinds of plants due to the decrease in nitrate reductase (NR) activity; furthermore, there was a negative linear correlation between nitrate content and NR activity. With increasing input of urea-N, ascorbic acid and soluble sugar initially increased and then declined, and crude fiber rapidly decreased. Total absorbed N (TAN), N derived from fertilizer (Ndff), and N derived from soil (Ndfs) increased, and the ratio of Ndff to TAN also increased, but the ratio of Ndfs to TAN, as well as the NUE of urea-N fertilizer, decreased with increasing input of urea-N. These results suggested that the increasing application of labeled N fertilizer led to an increase in unlabeled N (namely, Ndfs), presumably due to the "added nitrogen interaction" (ANI); that the decrease in NUE of urea-N fertilizer may be due to fertilization in excess of plant requirements and to the ANI; and that the decrease in the two vegetable yields with increasing addition of urea-N was possibly because the accumulation of nitrate reached a toxic level.
Clustering Assisted Fundamental Matrix Estimation
Directory of Open Access Journals (Sweden)
Hao Wu
2015-03-01
Full Text Available In computer vision, the estimation of the fundamental matrix is a basic problem that has been extensively studied. The accuracy of the estimation imposes a significant influence on subsequent tasks such as camera trajectory determination and 3D reconstruction. In this paper we propose a new method for fundamental matrix estimation that makes use of clustering a group of 4D vectors. The key insight is the observation that among the 4D vectors constructed from matching pairs of points obtained from the SIFT algorithm, well-defined cluster points tend to be reliable inliers suitable for fundamental matrix estimation. Based on this, we utilize a recently proposed efficient clustering method through density peaks seeking and propose a new clustering assisted method. Experimental results show that the proposed algorithm is faster and more accurate than currently commonly used methods.
A Practical Method to Estimate Entrepreneurship's Reward
Georgiou, Miltiades N.
2005-01-01
In the present note, an effort is made to contribute to economic theory by introducing a practical method to estimate entrepreneurship's reward. As an example, a regression based on the estimation of entrepreneurship's reward, using banking panel data, yields the same main result as the article "Governance Structures, Efficiency and Firm Profitability" by E. E. Lehmann, S. Warning and J. Weigand (MPI): that firms with more efficient governance have higher profitability.
Directory of Open Access Journals (Sweden)
A. de la Casa
2011-06-01
Full Text Available The radiation use efficiency (RUE) of a crop is the relationship between the dry matter produced and the photosynthetically active radiation intercepted (IPAR) during its growth cycle. The fraction of photosynthetically active radiation intercepted can be determined by Beer's traditional method, from the leaf area index (LAI), or by using ground cover (f) as a surrogate measure. When potato LAI exceeds 3, the intercepted fraction changes very little, making it very difficult to detect differences due to variations in crop conditions. The aim of this study was to determine the RUE of potato (Solanum tuberosum L. cv. Spunta), comparing the use of LAI and f values to obtain IPAR. The trial was conducted in the green belt of Cordoba, Argentina, on a late-season crop grown from February to May 2008. Using f directly produced results that overestimate RUE, as a consequence of a systematic underestimation of the intercepted fraction, whereas with intercepted-fraction values corrected according to the previously established relationship with f, the RUE was similar to that obtained with the reference method, which gave a value of 2.90 g MJ-1 PAR.
Interactive inverse kinematics for human motion estimation
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome; Erleben, Kenny; Pedersen, Kim Steenstrup
We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion estimation system uses a single camera to estimate the motion of a human. The results show that inverse kinematics can significantly speed up the estimation process, while retaining a quality comparable to a full pose motion estimation system. Our novelty lies primarily in use of inverse kinematics to...
Discharge estimation based on machine learning
Directory of Open Access Journals (Sweden)
Zhu JIANG
2013-04-01
Full Text Available To overcome the limitations of the traditional stage-discharge models in describing the dynamic characteristics of a river, a machine learning method of non-parametric regression, the locally weighted regression method was used to estimate discharge. With the purpose of improving the precision and efficiency of river discharge estimation, a novel machine learning method is proposed: the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-nearest neighbor method is used to cluster new stage samples into the best-fit cluster. Finally, the daily discharge is estimated. In the estimation process, the interference of irrelevant information can be avoided, so that the precision and efficiency of daily discharge estimation are improved. Observed data from the Luding Hydrological Station were used for testing. The simulation results demonstrate that the precision of this method is high. This provides a new effective method for discharge estimation.
Discharge estimation based on machine learning
Institute of Scientific and Technical Information of China (English)
Zhu JIANG; Hui-yan WANG; Wen-wu SONG
2013-01-01
To overcome the limitations of the traditional stage-discharge models in describing the dynamic characteristics of a river, a machine learning method of non-parametric regression, the locally weighted regression method was used to estimate discharge. With the purpose of improving the precision and efficiency of river discharge estimation, a novel machine learning method is proposed:the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-nearest neighbor method is used to cluster new stage samples into the best-fit cluster. Finally, the daily discharge is estimated. In the estimation process, the interference of irrelevant information can be avoided, so that the precision and efficiency of daily discharge estimation are improved. Observed data from the Luding Hydrological Station were used for testing. The simulation results demonstrate that the precision of this method is high. This provides a new effective method for discharge estimation.
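The locally weighted regression step at the heart of this estimator can be sketched as follows. The stage-discharge pairs are synthetic, and the clustering-tree step (clustering the training instances, then k-nearest-neighbor assignment of new stages) that precedes it in the paper is omitted:

```python
import math

# Locally weighted regression for a stage-discharge relation: each query
# stage gets its own weighted least-squares line, with Gaussian kernel
# weights centered on the query. Data below are made-up illustration values.
stages     = [1.0, 1.5,  2.0,  2.5,  3.0,  3.5,  4.0]
discharges = [5.0, 9.0, 14.0, 20.0, 27.0, 35.0, 44.0]

def lwr(query, xs, ys, bandwidth=0.8):
    """Fit a weighted line around `query` and evaluate it there."""
    w = [math.exp(-((x - query) / bandwidth) ** 2) for x in xs]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    intercept = (swy - slope * swx) / sw
    return intercept + slope * query

q = lwr(2.25, stages, discharges)   # local estimate between 14 and 20
```

Because only nearby stages carry weight, irrelevant parts of the rating curve do not interfere with the local fit, which is the precision argument the abstract makes.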
Rapid estimation of nonlinear DSGE models
Hall, Jamie
2012-01-01
This article describes a new approximation method for dynamic stochastic general equilibrium (DSGE) models. The method allows nonlinear models to be estimated efficiently and relatively quickly with the fully-adapted particle filter, without using high-performance parallel computation. The article demonstrates the method by estimating, on US data, a nonlinear New Keynesian model with time-varying volatility.
The Arbitrage Pricing Theory: Estimation and Applications
Chen, Nai-Fu
1980-01-01
The pricing equation of Ross' (1976) APT model is derived using estimable parameters. Estimation errors are discussed in the framework of elementary perturbation analysis. Theoretically, a simple link is provided among the mean-variance efficient set mathematics, mutual fund separations, discrete and continuous time CAPM, option pricing model, term structure of interest rate, capital budgeting, portfolio ranking, Modigliani Miller theorems with the APT.
Isobars and the efficient market hypothesis
Ivanková, Kristýna
2010-01-01
Isobar surfaces, a method for describing the overall shape of multidimensional data, are estimated by nonparametric regression and used to evaluate the efficiency of selected markets based on returns of their stock market indices.
Directory of Open Access Journals (Sweden)
Douglas Sampaio Henrique
2005-06-01
Full Text Available Data on 320 animals were obtained from eight comparative slaughter studies performed under tropical conditions and used to estimate the total efficiency of utilization of the metabolizable energy intake (MEI, which varied from 77 to 419 kcal kg-0.75d-1. The provided data also contained direct measures of the recovered energy (RE, which allowed calculating the heat production (HE by difference. The RE was regressed on MEI and deviations from linearity were evaluated by using the F-test. The respective estimates of the fasting heat production and of the intercept and slope that compose the relationship between RE and MEI were 73 kcal kg-0.75d-1, 42 kcal kg-0.75d-1 and 0.37. Hence, the total efficiency was estimated by dividing the net energy for maintenance and growth by the metabolizable energy intake. The estimated total efficiency of ME utilization and analogous estimates based on the beef cattle NRC model were employed in an additional study to evaluate their predictive powers in terms of the mean square deviations for both temperate and tropical conditions. The two approaches presented similar predictive powers, but the proposed one had a 22% lower mean squared deviation even with its more simplified structure.
SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION
Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...
DELANNE, Y; VANDANJON, PO
2006-01-01
On-board tire/road friction estimation is of current interest in two different frameworks: optimization of the efficiency of driver assistance systems (antilock braking system, electronic stability program, adaptive cruise control, lane departure control, advanced automatic driving, etc.), and instantaneous warning of the driver about the available friction and the limits on his possible driving actions. This subject has been the objective of many research programs throughout the world. Four main methods have...
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
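Ordinary kriging, the estimator these implementations accelerate, reduces to a small linear system: covariance equations plus a Lagrange multiplier forcing the weights to sum to one. This 1-D sketch uses an assumed Gaussian covariance model and made-up data, and none of the fast machinery (SYMMLQ, tapering, FMM) described above:

```python
import numpy as np

# Ordinary kriging in 1-D. The covariance model (Gaussian, unit sill,
# range 2) and the sample points/values are illustration assumptions.
def cov(h, rng=2.0):
    return np.exp(-(h / rng) ** 2)

pts = np.array([0.0, 1.0, 3.0, 4.0])
vals = np.array([1.0, 2.0, 4.0, 3.0])
query = 2.0

n = len(pts)
# Augmented system: [C 1; 1' 0] [w; mu] = [c0; 1]
A = np.ones((n + 1, n + 1))
A[-1, -1] = 0.0
A[:n, :n] = cov(np.abs(pts[:, None] - pts[None, :]))
b = np.append(cov(np.abs(pts - query)), 1.0)
sol = np.linalg.solve(A, b)
weights = sol[:n]            # kriging weights, constrained to sum to 1
estimate = weights @ vals    # best linear unbiased estimate at the query
```

The fast variants replace this dense solve with iterative and hierarchical methods so the same weights can be obtained for much larger point sets.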
Nonparametric Efficiency Analysis for Coffee Farms in Puerto Rico
Gregory, Alexandra; Featherstone, Allen M
2008-01-01
Coffee production in Puerto Rico is labor intensive since harvest is done by hand for quality and topography conditions. Färe's nonparametric approach was used to estimate technical, allocative, scale and overall efficiency measures for coffee farms in Puerto Rico during the 2000 to 2004 period. On average Puerto Rico coffee farms were 46% technically efficient, 79% scale efficient, and 74% allocatively efficient.
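Färe's nonparametric efficiency measures come from linear programs, one per farm. In the degenerate case of a single input and a single output under constant returns to scale, the technical efficiency score collapses to each farm's output/input ratio relative to the best ratio in the sample, which a sketch with made-up farm data can show:

```python
# Hypothetical farms: one input and one output each. Under constant
# returns to scale this ratio form equals the CCR technical efficiency;
# the general multi-input/multi-output case needs one LP per unit instead.
inputs  = [2.0, 4.0, 3.0]
outputs = [2.0, 4.0, 2.0]

def ccr_scores(xs, ys):
    """Output/input ratio of each unit divided by the best ratio."""
    ratios = [y / x for x, y in zip(xs, ys)]
    best = max(ratios)
    return [r / best for r in ratios]

scores = ccr_scores(inputs, outputs)   # farms 0 and 1 are on the frontier
```

Allocative and scale efficiency, reported in the paper, require price data and a variable-returns frontier respectively and are not sketched here.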
Efficient Learning for Undirected Topic Models
Gu, Jiatao; Li, Victor O. K.
2015-01-01
Replicated Softmax model, a well-known undirected topic model, is powerful in extracting semantic representations of documents. Traditional learning strategies such as Contrastive Divergence are very inefficient. This paper provides a novel estimator to speed up the learning based on Noise Contrastive Estimate, extended for documents of variant lengths and weighted inputs. Experiments on two benchmarks show that the new estimator achieves great learning efficiency and high accuracy on documen...
EFFICIENCY OF KRIGING ESTIMATION FOR SQUARE, TRIANGULAR, AND HEXAGONAL GRIDS
Although several researchers have pointed out some advantages and disadvantages of various soil sampling designs in the presence of spatial autocorrelation, a more detailed study is presented herein which examines the geometrical relationship of three sampling designs, namely the...
Efficiency estimation for permanent magnets of synchronous wind generators
Directory of Open Access Journals (Sweden)
Serebryakov A.
2014-02-01
Full Text Available The use of permanent magnets in wind generators opens broad possibilities for raising the efficiency of low- and medium-power wind energy installations. In addition, generator mass is reduced, reliability is increased, and operating costs are lowered. However, when high-energy permanent magnets are used in generators of higher power, a number of problems arise; these can be successfully overcome if the magnets are correctly arranged according to their orientation, creating the magnetic field in the air gap of the electrical machine. The paper attempts to show that substantial advantages exist in low- and medium-power wind generators when the permanent magnets are magnetized tangentially with respect to the air gap.
Estimating the Efficiency of Sequels in the Film Industry
Denis Y. Orlov; Evgeniy M. Ozhegov
2015-01-01
Film industry has been under investigation from social scientists for the last 30 years. A lot of the work has been dedicated to the analysis of the sequel effect on film revenue. The current paper employs data on wide releases in the US from 2010 to 2014 and provides a new look at sequel return to the domestic box office. We apply the Heckman and nonparametric sample selection approach in order to control for the non-random nature of the sequels’ sample. It was found that sequels are success...
Estimation of efficiency of damping parameters in seismic insulation systems
Yu.L. Rutman; N.V. Kovaleva
2012-01-01
In the design of seismic isolation systems, one of the key and most difficult issues is the damping optimal parameters choice. If the damping is negligible, it is possible (at a certain frequency of external influence) that quasi-resonant processes, which lead to the disappearance of seismic insulation effect, will emerge. If the damping forces are large, it entails a significant load increase on the protected object, which also reduces the effect of seismic insulation.Development technique o...
Fast Katz and commuters : efficient estimation of social relatedness.
Energy Technology Data Exchange (ETDEWEB)
On, Byung-Won; Lakshmanan, Laks V. S.; Esfandiar, Pooya; Bonchi, Francesco; Grief, Chen; Gleich, David F.
2010-12-01
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
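On a toy graph the Katz score can be computed exactly from the matrix resolvent, K = (I - aA)^{-1} - I, which sums walks of every length weighted by a^length. The dense inverse below is only a correctness check on a 4-node example, precisely the all-pairs computation the paper's Lanczos/quadrature bounds avoid:

```python
import numpy as np

# Toy undirected graph: a triangle (0,1,2) with a pendant node 3 on node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha = 0.2   # must satisfy alpha < 1 / lambda_max(A) for convergence
K = np.linalg.inv(np.eye(4) - alpha * A) - np.eye(4)
score_03 = K[0, 3]   # Katz relatedness of nodes 0 and 3 (via walks 0-2-3, ...)
```

The single-pair algorithm in the paper bounds one such entry from above and below without ever forming K, which is what makes it usable on graphs with millions of nodes.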
Using MCMC chain outputs to efficiently estimate Bayes factors
Morey, Richard D.; Rouder, Jeffrey N.; Pratte, Michael S.; Speckman, Paul L.
2011-01-01
One of the most important methodological problems in psychological research is assessing the reasonableness of null models, which typically constrain a parameter to a specific value such as zero. Bayes factor has been recently advocated in the statistical and psychological literature as a principled
Comparison of Vehicle Efficiency Technology Attributes and Synergy Estimates
Energy Technology Data Exchange (ETDEWEB)
Duleep, G. [ICF Incorporated, LLC., Fairfax, VA (United States)
2011-02-01
Analyzing the future fuel economy of light-duty vehicles (LDVs) requires detailed knowledge of the vehicle technologies available to improve LDV fuel economy. The National Highway Transportation Safety Administration (NHTSA) has been relying on technology data from a 2001 National Academy of Sciences (NAS) study (NAS 2001) on corporate average fuel economy (CAFE) standards, but the technology parameters were updated in the new proposed rulemaking (EPA and NHTSA 2009) to set CAFE and greenhouse gas standards for the 2011 to 2016 period. The update is based largely on an Environmental Protection Agency (EPA) analysis of technology attributes augmented by NHTSA data and contractor staff assessments. These technology cost and performance data were documented in the Draft Joint Technical Support Document (TSD) issued by EPA and NHTSA in September 2009 (EPA/NHTSA 2009). For these tasks, the Energy and Environmental Analysis (EEA) division of ICF International (ICF) examined each technology and technology package in the Draft TSD and assessed their costs and performance potential based on U.S. Department of Energy (DOE) program assessments. ICF also assessed the technologies, other relevant attributes based on data from actual production vehicles, and recently published technical articles in engineering journals. ICF examined technology synergy issues through an ICF in-house model that uses a discrete parameter approach.
Comparison of Vehicle Efficiency Technology Attributes and Synergy Estimates
Energy Technology Data Exchange (ETDEWEB)
Duleep, G.
2011-02-01
Analyzing the future fuel economy of light-duty vehicles (LDVs) requires detailed knowledge of the vehicle technologies available to improve LDV fuel economy. The National Highway Transportation Safety Administration (NHTSA) has been relying on technology data from a 2001 National Academy of Sciences (NAS) study (NAS 2001) on corporate average fuel economy (CAFE) standards, but the technology parameters were updated in the new proposed rulemaking (EPA and NHTSA 2009) to set CAFE and greenhouse gas standards for the 2011 to 2016 period. The update is based largely on an Environmental Protection Agency (EPA) analysis of technology attributes augmented by NHTSA data and contractor staff assessments. These technology cost and performance data were documented in the Draft Joint Technical Support Document (TSD) issued by EPA and NHTSA in September 2009 (EPA/NHTSA 2009). For these tasks, the Energy and Environmental Analysis (EEA) division of ICF International (ICF) examined each technology and technology package in the Draft TSD and assessed their costs and performance potential based on U.S. Department of Energy (DOE) program assessments. ICF also assessed the technologies' other relevant attributes based on data from actual production vehicles and from recently published technical articles in engineering journals. ICF examined technology synergy issues through an ICF in-house model that uses a discrete parameter approach.
Stochastic Frontier Estimation of Efficient Learning in Video Games
Hamlen, Karla R.
2012-01-01
Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…
Efficient topology estimation for large scale optical mapping
Elibol, Armagan
2011-01-01
Large scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajecto...
Efficient and Accurate Path Cost Estimation Using Trajectory Data
Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.
2015-01-01
Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...
Scalable Ensemble Learning and Computationally Efficient Variance Estimation
LeDell, Erin
2015-01-01
Ensemble machine learning methods are often used when the true prediction function is not easily approximated by a single algorithm. The Super Learner algorithm is an ensemble method that has been theoretically proven to represent an asymptotically optimal system for learning. The Super Learner, also known as stacking, combines multiple, typically diverse, base learning algorithms into a single, powerful prediction function through a secondary learning process called metalearning. Although...
Managerial Efficiency and Hospitality Industry: the Portuguese Case
Barros, Carlos Pestana; Botti, Laurent; Peypoch, Nicolas; Solonandrasana, Bernardin
2009-01-01
Abstract In this paper, the innovative two-stage procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Portuguese hotel groups from 1998 to 2005. In the first stage, the hotels' technical efficiency is estimated with DEA in order to establish which hotels have the most efficient performance. These could serve as peers to help improve performance of the least efficient hotels. In the second stage, the Simar and Wilson model is used to bootstrap the ...
U.S. CHAIN RESTAURANT EFFICIENCY
Barber, David L.; Byrne, Patrick J.
1997-01-01
The growth of corporate food service firms and the resulting competition places increasing pressures on available resources and their efficient usage. This analysis measures efficiencies for U. S. chain restaurants and determines associations between managerial and operational characteristics. Using a ray-homothetic production function, frontiers were estimated for large and small restaurant chains. Technical and scale efficiencies were then derived for the firms. Finally, a Tobit analysis me...
Efficient formulas for efficiency correction of cumulants
Kitazawa, Masakiyo
2016-01-01
We derive formulas which relate cumulants of particle numbers observed with efficiency losses to the original ones, based on the binomial model. These formulas can describe the case with multiple efficiencies in a compact form. Compared with the presently suggested ones based on factorial moments, these formulas would drastically reduce the numerical cost of efficiency corrections when the order of the cumulant and the number of different efficiencies are large. The efficiency correction with a realistic $p_T$-dependent efficiency could be carried out with the aid of these formulas.
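The binomial model behind such corrections can be illustrated for the first two cumulants (an illustrative simulation only, not the paper's general multi-efficiency formulas): under binomial detection with efficiency eps, the observed mean and variance satisfy mean(n) = eps*mean(N) and Var(n) = eps^2*Var(N) + eps*(1-eps)*mean(N), which can be inverted to recover the true cumulants.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, eps, trials = 20.0, 0.6, 200_000

# True particle numbers (Poisson source) and binomially thinned observations.
N = rng.poisson(mu, trials)
n = rng.binomial(N, eps)

# Invert the binomial relations for the first two cumulants:
#   <n> = eps*<N>,   Var(n) = eps^2*Var(N) + eps*(1-eps)*<N>
c1_obs, c2_obs = n.mean(), n.var()
c1_corr = c1_obs / eps
c2_corr = (c2_obs - (1 - eps) * c1_obs) / eps**2

print(c1_corr, c2_corr)  # for a Poisson source both should be close to mu = 20
```

For a Poisson source, mean and variance coincide, so both corrected cumulants recovering mu is a quick consistency check of the correction.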
Estimating Coke and Pepsi's price and advertising strategies
Golan, Amos; Karp , Larry S.; Perloff, Jeffrey M.
1999-01-01
A semi-parametric, information-based estimator is used to estimate strategies in prices and advertising for Coca-Cola and Pepsi-Cola. Separate strategies for each firm are estimated with and without restrictions from game theory. These information/entropy estimators are consistent, are efficient, and do not require distributional assumptions. These estimates are used to test theories about the strategies of firms and to see how changes in incomes or factor prices affect these strategies.
Ensemble estimators for multivariate entropy estimation
Sricharan, Kumar
2012-01-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the exponent in the MSE rate of convergence decays increasingly slowly as the dimension $d$ of the samples increases. In particular, the rate is often glacially slow of order $O(T^{-\gamma/d})$, where $T$ is the number of samples, and $\gamma>0$ is a rate parameter. Examples of such estimators include kernel density estimators, $k$-NN density estimators, $k$-NN entropy estimators, and intrinsic dimension estimators, among others. In this paper, we propose a weighted convex combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of $O(T^{-1})$. Furthermore, we show that these optimal weights can be determined by so...
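The ensemble idea can be sketched with k-NN (Kozachenko-Leonenko) entropy estimators over several values of k. The sketch below uses uniform weights purely for illustration; the paper's contribution is choosing the weights optimally, which is not reproduced here.

```python
import numpy as np
from math import lgamma, pi, log

def psi_int(n):
    """Digamma at a positive integer: psi(n) = -gamma + H_{n-1}."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, n))

def knn_entropy(x, k):
    """Kozachenko-Leonenko k-NN differential entropy estimator (in nats)."""
    t, d = x.shape
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    r_k = np.sort(dist, axis=1)[:, k - 1]                 # distance to k-th neighbour
    log_vd = (d / 2.0) * log(pi) - lgamma(d / 2.0 + 1.0)  # log volume of unit d-ball
    return psi_int(t) - psi_int(k) + log_vd + d * np.mean(np.log(r_k))

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 2))                        # 2-D standard normal sample
# Uniformly weighted ensemble over several k (illustrative placeholder weights).
h_ens = np.mean([knn_entropy(x, k) for k in (3, 5, 10)])
h_true = log(2 * pi) + 1.0                                # exact entropy: log(2*pi*e)
print(h_ens, h_true)
```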
Energy-efficient cooking methods
Energy Technology Data Exchange (ETDEWEB)
De, Dilip K. [Department of Physics, University of Jos, P.M.B. 2084, Jos, Plateau State (Nigeria); Muwa Shawhatsu, N. [Department of Physics, Federal University of Technology, Yola, P.M.B. 2076, Yola, Adamawa State (Nigeria); De, N.N. [Department of Mechanical and Aerospace Engineering, The University of Texas at Arlington, Arlington, TX 76019 (United States); Ikechukwu Ajaeroh, M. [Department of Physics, University of Abuja, Abuja (Nigeria)
2013-02-15
Energy-efficient new cooking techniques have been developed in this research. Using a stove with 649 ± 20 W of power, the minimum heat, specific heat of transformation, and on-stove time required to completely cook 1 kg of dry beans (with water and other ingredients) and 1 kg of raw potato are found to be: 710 ± kJ, 613 ± kJ, and 1,144 ± 10 s, respectively, for beans and 287 ± 12 kJ, 200 ± 9 kJ, and 466 ± 10 s for Irish potato. Extensive research shows that these figures are, to date, the lowest amounts of heat ever used to cook beans and potato, and less than half the energy used in conventional cooking with a pressure cooker. The efficiency of the stove was estimated to be 52.5 ± 2%. We discuss how to further improve the efficiency of cooking with a normal stove and a solar cooker and how to further preserve food nutrients. Our method of cooking, when applied globally, is expected to contribute to the clean development mechanism (CDM) potential. The approximate values of the minimum and maximum CDM potentials are estimated to be 7.5 × 10^11 and 2.2 × 10^13 kg of carbon credit annually. A precise estimation of the CDM potential of our cooking method will be reported later.
Directory of Open Access Journals (Sweden)
Aleksandra Y. Grigorevskaya
2012-05-01
Full Text Available The article deals with methods of comprehensive restaurant business performance assessment based on the estimation of both subtotal and total rates of efficiency, and demonstrates the calculation of the above rates.
Energy Technology Data Exchange (ETDEWEB)
Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman
2004-03-30
Data Center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy intensive data centers are crucial to California's industries and many other institutions (such as universities) in the state, and they play an important role in the constantly evolving communications industry. To better understand the impact of the energy requirements and energy efficiency improvement potential in these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: first, to characterize current data center electricity use; and second, to develop a research "roadmap" defining and prioritizing possible future public interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts. In total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted which included internet hosting, corporate, institutional, governmental, educational and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed.
Efficiency of municipal legislative chambers
Directory of Open Access Journals (Sweden)
Alexandre Manoel Angelo da Silva
2015-01-01
Full Text Available A novel study of Brazilian city council efficiency using the non-parametric estimator FDH (free disposal hull) with bias correction is presented. In regional terms, study results show a concentration of efficient councils in the southern region. In turn, those in the northeastern and southeastern regions are among the least efficient councils. In these latter two regions, most councils could at least double their outputs while maintaining the same volume of inputs. Regarding population size, for cities with up to 500,000 inhabitants, more than 60% of city councils could at least quadruple their output. Regarding inefficiencies revealed through non-discretionary variables (environmental variables), the study results show a correlation between councilor education levels and city council efficiency.
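The FDH estimator itself (without the bias correction used in the study) is simple to state: a unit's input efficiency is the largest proportional input contraction achievable by moving to an observed unit producing at least as much output. A minimal sketch with hypothetical toy data:

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH efficiency scores (1.0 = efficient), X inputs, Y outputs."""
    n = X.shape[0]
    scores = np.empty(n)
    for k in range(n):
        dominates = np.all(Y >= Y[k], axis=1)          # units producing at least y_k
        ratios = np.max(X[dominates] / X[k], axis=1)   # input scaling relative to unit k
        scores[k] = ratios.min()                       # best attainable contraction
    return scores

# Toy data: unit 2 uses twice the input of unit 0 for the same output.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[10.0], [12.0], [10.0]])
scores = fdh_input_efficiency(X, Y)
print(scores)  # → [1.  1.  0.5]
```

Unit 2 gets a score of 0.5 because unit 0 delivers the same output with half the input; units 0 and 1 are undominated and therefore FDH-efficient.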
Efficiency of Finnish power transmission network companies
International Nuclear Information System (INIS)
The Finnish Energy Market Authority has investigated the efficiency of power transmission network companies. The results show that the intensification potential of the branch is 402 million FIM, corresponding to about 15% of the total costs of the branch and 7.3% of the turnover. The Energy Market Authority supervises the reasonableness of power transmission prices, and it will use the results of the research in this supervision. The research was carried out by the Quantitative Methods Research Group of the Helsinki School of Economics. The main objective of the research was to create an efficiency estimation method for the electric power distribution network business suited to Finnish conditions. Data from the year 1998 was used as the basic material in the research. Twenty-one of the 102 power distribution network operators were estimated to be totally efficient. The highest possible efficiency rating was 100, and the average of the efficiency ratings of all the operators was 76.9, the minimum being 42.6
A Bistochastic Nonparametric Estimator
Juan Gabriel Rodríguez; Rafael Salas
2004-01-01
We explore the relevance of adopting a bistochastic nonparametric estimator. This estimator has two main implications. First, the estimator reduces variability according to the robust criterion of second-order stochastic (and Lorenz) dominance. This is a universally accepted criterion in risk and welfare economics, which expands the applicability of nonparametric estimation in economics, for instance to the measurement of economic discrimination. Second, the bistochastic estimator produces smaller err...
ON INTERVAL ESTIMATING REGRESSION
Directory of Open Access Journals (Sweden)
Marcin Michalak
2014-06-01
Full Text Available This paper presents a new look at the well-known nonparametric regression estimator – the Nadaraya-Watson kernel estimator. Though it was invented 50 years ago, it is still being applied in many fields. After these years, the foundations of uncertainty theory – interval analysis – are joined with this estimator. The paper presents the background of the Nadaraya-Watson kernel estimator together with the basics of interval analysis and shows the interval Nadaraya-Watson kernel estimator.
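The classical (non-interval) Nadaraya-Watson estimator is a kernel-weighted average of the responses. A minimal sketch with a Gaussian kernel on synthetic data (the paper's interval extension is not shown):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    # Kernel weight of every training point for every query point.
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 300))
y = np.sin(x) + 0.2 * rng.standard_normal(300)   # noisy observations of sin(x)
xq = np.linspace(0.5, 5.5, 50)
yq = nadaraya_watson(x, y, xq, h=0.3)            # smooth estimate of sin at xq
```

The bandwidth h trades bias for variance: larger h smooths more aggressively, smaller h tracks the noise.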
More evidence of rational market values for home energy efficiency
Nevin, Rick; Bender, Christopher; Gazan, Heather
1999-01-01
The “cost versus value” survey by Remodeling indicates that realtor value estimates for window replacement can be substantially explained by the market value of energy efficiency, as estimated in “Evidence of Rational Market Values for Home Energy Efficiency,” which appeared in the October 1998 issue of The Appraisal Journal.
Distributed fusion estimation for sensor networks with communication constraints
Zhang, Wen-An; Song, Haiyu; Yu, Li
2016-01-01
This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.
Empirical likelihood estimation of discretely sampled processes of OU type
Institute of Scientific and Technical Information of China (English)
SUN ShuGuang; ZHANG XinSheng
2009-01-01
This paper presents an empirical likelihood estimation procedure for parameters of a discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under certain conditions. The intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.
Estimating a Mixed Strategy: United and American Airlines
Golan, Amos; Karp , Larry S.; Perloff, Jeffrey M.
1998-01-01
We develop a generalized maximum entropy estimator that can estimate pure and mixed strategies subject to restrictions from game theory. This method avoids distributional assumptions and is consistent and efficient. We demonstrate this method by estimating the mixed strategies of duopolistic airlines.
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...
Nonparametric Filament Estimation
Genovese, Christopher R; verdinelli, Isabella; Wasserman, Larry
2010-01-01
We develop nonparametric methods for estimating filamentary structure from planar point process data and find the minimax lower bound for this problem. We show that, under weak conditions, the filaments have a simple geometric representation as the medial axis of the data distribution's support. Our methods convert an estimator of the support's boundary into an estimator of the filaments. We find the rates of convergence of our estimators and show that when using an optimal boundary estimator, they achieve the minimax rate. Our work can be regarded as providing a solution to the manifold learning problem as well as being a new approach to principal curve estimation.
Indicators of technological processes environmental estimation
R. Nowosielski; A. Kania; M. Spilka
2007-01-01
Purpose: The paper presents the possibility of using indicators for the estimation of technological processes, which make it possible to decrease the negative influence of these processes on the environment. Design/methodology/approach: The article shows the direction of enterprise efficiency estimation in favour of the environment. It also presents the necessity of production carried out with responsibility for the environment. This requires formulating definite aims and motivating workers toward integration of the environm...
Discharge estimation based on machine learning
Jiang, Zhu; Wang, Hui-yan; Wen-wu SONG
2013-01-01
To overcome the limitations of traditional stage-discharge models in describing the dynamic characteristics of a river, a machine learning method for non-parametric regression, locally weighted regression, was used to estimate discharge. To improve the precision and efficiency of river discharge estimation, a novel machine learning method is proposed: the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-near...
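Plain locally weighted (local linear) regression, the building block that the clustering-tree method extends, can be sketched as follows; the stage-discharge data here are synthetic and purely illustrative:

```python
import numpy as np

def lwr_estimate(x, y, x0, bw):
    """Locally weighted linear regression at x0 with Gaussian weights."""
    w = np.exp(-0.5 * ((x - x0) / bw) ** 2)
    A = np.column_stack([np.ones_like(x), x - x0])
    # Weighted least squares; the intercept is the fitted value at x0.
    beta = np.linalg.solve(A.T * w @ A, A.T * w @ y)
    return beta[0]

# Synthetic stage-discharge data: Q = 5 * H^1.8 plus noise (hypothetical rating curve).
rng = np.random.default_rng(2)
stage = rng.uniform(1.0, 3.0, 400)
discharge = 5.0 * stage**1.8 + rng.normal(0.0, 0.5, 400)
q_hat = lwr_estimate(stage, discharge, 2.0, bw=0.3)
print(q_hat)  # close to 5 * 2**1.8, roughly 17.4
```

Because the fit is refitted locally at each query stage, the method adapts to the changing curvature of the rating relation without assuming a global functional form.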
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
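The UKF machinery can be illustrated on a deliberately tiny problem: a scalar state observed through one nonlinear range measurement to a single beacon. This is a toy sketch under assumed noise parameters, not the SUPERball estimator, which fuses inertial, ranging, and actuator data in higher dimensions:

```python
import numpy as np

def sigma_points(m, p, lam=2.0):
    """Sigma points and weights for a 1-D state (Julier parameterisation)."""
    n = 1
    s = np.sqrt((n + lam) * p)
    pts = np.array([m, m + s, m - s])
    w = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    return pts, w

def ukf_step(m, p, z, u, q, r):
    f = lambda x: x + u                   # motion model with known drift u
    h = lambda x: np.sqrt(x**2 + 1.0)     # range to a beacon at height 1
    pts, w = sigma_points(m, p)           # --- predict ---
    fx = f(pts)
    m_pred = w @ fx
    p_pred = w @ (fx - m_pred) ** 2 + q
    pts, w = sigma_points(m_pred, p_pred)  # --- update ---
    hz = h(pts)
    z_hat = w @ hz
    s = w @ (hz - z_hat) ** 2 + r          # innovation variance
    pxz = w @ ((pts - m_pred) * (hz - z_hat))
    k = pxz / s                            # Kalman gain
    return m_pred + k * (z - z_hat), p_pred - k * pxz

rng = np.random.default_rng(5)
q, r, u = 0.01, 0.25, 0.5                  # assumed process/measurement noise, drift
x, m, p = 0.0, 0.0, 1.0
errs = []
for _ in range(200):
    x = x + u + rng.normal(0.0, np.sqrt(q))                 # true state
    z = np.sqrt(x**2 + 1.0) + rng.normal(0.0, np.sqrt(r))   # noisy range
    m, p = ukf_step(m, p, z, u, q, r)
    errs.append(m - x)
rmse = float(np.sqrt(np.mean(np.array(errs) ** 2)))
print(rmse)  # well below the 0.5 measurement noise standard deviation
```

The sigma points let the filter propagate mean and variance through the nonlinear range function without computing Jacobians, which is the property that makes the UKF attractive for compliant, hard-to-linearise systems.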
International Nuclear Information System (INIS)
Highlights: ► We employ a slacks-based DEA model to estimate the energy efficiency and shadow prices of CO2 emissions in China. ► The empirical study shows that China was not performing CO2-efficiently. ► The average of the estimated shadow prices of CO2 emissions is about $7.2. -- Abstract: This paper uses a nonparametric efficiency analysis technique to estimate the energy efficiency, potential emission reductions and marginal abatement costs of energy-related CO2 emissions in China. We employ a non-radial slacks-based data envelopment analysis (DEA) model for estimating the potential reductions and efficiency of CO2 emissions for China. The dual model of the slacks-based DEA model is then used to estimate the marginal abatement costs of CO2 emissions. An empirical study based on China’s panel data (2001–2010) is carried out and some policy implications are also discussed.
Efficiency of broadband internet adoption in European Union member states
Pavlyuk, Dmitry
2011-01-01
This paper is devoted to econometric analysis of broadband adoption efficiency in EU member states. Stochastic frontier models are widely used for efficiency estimation. We enhanced the stochastic frontier model by adding a spatial component into the model specification to reflect possible dependencies between neighbour countries. A maximum likelihood estimator for the model was developed. The proposed spatial autoregressive stochastic frontier model is used for estimation of broadband ad...
Software Cost Estimation Review
Ongere, Alphonce
2013-01-01
Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of a software organization in general. If cost and effort estimat...
Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The genome length is a fundamental feature of a species. This note outlines the general concept and estimation methods of the physical and genetic length. Some formulae for estimating the genetic length are derived in detail. As examples, the genome genetic length of Pinus pinaster Ait. and the genetic length of chromosome Ⅵ of Oryza sativa L. were estimated from partial linkage data.
Aggregation of Scale Efficiency
Valentin Zelenyuk
2012-01-01
In this article we extend the aggregation theory in efficiency and productivity analysis by deriving solutions to the problem of aggregation of individual scale efficiency measures, primal and dual, into aggregate primal and dual scale efficiency measures of a group. The new aggregation result is coherent with aggregation framework and solutions for the other related efficiency measures that already exist in the literature.
Efficiency wages and bargaining
Walsh, Frank
2005-01-01
I argue that, in contrast to the literature to date, efficiency wage and bargaining solutions will typically be independent. If the bargained wage satisfies the efficiency wage constraint, efficiency wages are irrelevant. If it does not, we typically have the efficiency wage solution and bargaining is irrelevant.
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods...
Estimating state-contingent production functions
DEFF Research Database (Denmark)
Rasmussen, Svend; Karantininis, Kostas
The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated based on different sizes of samples using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may be useful, but that further analysis is needed to evaluate the efficiency of this estimation method compared to traditional methods.
Unified definition of a class of Monte Carlo estimators
International Nuclear Information System (INIS)
A unified definition of a wide class of Monte Carlo reaction rate estimators is presented, since most commonly used estimators belong to that class. The definition is given through an integral transformation of an arbitrary estimator of the class. Since the transformation contains an arbitrary function, in principle an infinite number of new estimators can be defined on the basis of one known estimator. It is shown that the most common estimators belonging to the class, such as the track-length and expectation estimators, are special cases of transformation, corresponding to the simplest transformation kernels when transforming the usual collision estimator. A pair of new estimators is defined and their variances are compared to the variance of the expectation estimator. One of the new estimators, called the trexpectation estimator, seems to be appropriate for flux-integral estimation in moderator regions. The other one, which uses an intermediate estimation of the final result and is therefore called the self-improving estimator, always yields a lower variance than the expectation estimator. As is shown, this estimator approximates well to possibly the best estimator of the class. Numerical results are presented for the simplest geometries, and these results indicate that for absorbers that are not too strong, in practical cases the standard deviation of the self-improving estimator is less than that of the expectation estimator by more than 10%. The experiments also suggest that the self-improving estimator is always superior to the track-length estimator as well, i.e., that it is the best of all known estimators belonging to the class. In the Appendices, for simplified cases, approximate conditions are given for which the trexpectation and track-length estimators show a higher efficiency than the expectation estimator
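Two members of this class, the collision and track-length estimators, are easy to compare directly. Below is a minimal sketch for the flux integral of a mono-directional beam in a purely absorbing slab (a toy setup; the trexpectation and self-improving estimators of the abstract are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, slab, n = 0.2, 1.0, 500_000   # total cross-section (1/cm), slab width (cm)

# Distance to first collision for a mono-directional beam in a pure absorber.
d = rng.exponential(1.0 / sigma, n)

track = np.minimum(d, slab)          # track-length estimator: path length inside the slab
coll = (d < slab) / sigma            # collision estimator: score 1/sigma per collision

exact = (1.0 - np.exp(-sigma * slab)) / sigma
print(track.mean(), coll.mean(), exact)   # both means ≈ 0.906
print(track.std(), coll.std())            # track-length variance is far lower here
```

For this optically thin slab (sigma * slab = 0.2) the track-length estimator has much lower variance because the collision estimator scores rare, large 1/sigma spikes; for thick slabs the ranking can reverse, which is precisely why estimator choice within the class matters.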
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
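The Lipschitz-continuity claim can be probed numerically: perturb the estimated means slightly and measure how much the efficient weights move. A hedged sketch with synthetic data, using the simple unconstrained mean-variance weights w ∝ Σ⁻¹μ (normalised to sum to one), not the paper's exact portfolio formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
cov = A @ A.T + n * np.eye(n)        # well-conditioned covariance matrix
mu = rng.uniform(0.05, 0.15, n)      # estimated expected returns

def efficient_weights(mu, cov):
    """Unconstrained mean-variance weights, normalised to sum to one."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

w0 = efficient_weights(mu, cov)
delta = 1e-3 * rng.standard_normal(n)           # small error in the mean estimates
w1 = efficient_weights(mu + delta, cov)
rate = np.linalg.norm(w1 - w0) / np.linalg.norm(delta)
print(rate)  # a finite change rate: the weights vary smoothly with the estimates
```

A well-conditioned covariance keeps the change rate modest; a near-singular covariance inflates it, which corresponds to the "extreme cases" the abstract warns about.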
Efficiency in higher education
Duguleană, C.; Duguleană, L.
2011-01-01
The National Education Law establishes the principles of equity and efficiency in higher education. The concept of efficiency has different meanings according to the types of funding and the time horizons: short- or long-term management approaches. Understanding the black box of efficiency may offer solutions for effective activity. The paper presents a parallel analysis of efficiency in a production firm and in a university, for a better understanding of the specificities of efficiency in...
Hardware Accelerated Power Estimation
Coburn, Joel; Raghunathan, Anand
2011-01-01
In this paper, we present power emulation, a novel design paradigm that utilizes hardware acceleration for the purpose of fast power estimation. Power emulation is based on the observation that the functions necessary for power estimation (power model evaluation, aggregation, etc.) can be implemented as hardware circuits. Therefore, we can enhance any given design with "power estimation hardware", map it to a prototyping platform, and exercise it with any given test stimuli to obtain power consumption estimates. Our empirical studies with industrial designs reveal that power emulation can achieve significant speedups (10X to 500X) over state-of-the-art commercial register-transfer level (RTL) power estimation tools.
A New DOA Estimation Method Using a Circular Microphone Array
Karbasi, Amin; SUGIYAMA, AKIHIKO
2007-01-01
This paper proposes a new DOA (direction of arrival) estimation method based on a circular microphone array. For an arbitrary number of microphones, it is analytically shown that DOA estimation reduces to an efficient non-linear optimization problem. Simulation results demonstrate that the deviation of the estimation error for 20 and 10 dB SNR is smaller than 0.7 degree, which is comparable to high-resolution DOA estimation methods. A larger number of microphones provide a more ...
The energy efficiency of lead selfsputtering
DEFF Research Database (Denmark)
Andersen, Hans Henrik
1968-01-01
The sputtering efficiency (i.e. ratio between sputtered energy and impinging ion energy) has been measured for 30–75 keV lead ions impinging on polycrystalline lead. The results are in good agreement with recent theoretical estimates. © 1968 The American Institute of Physics
Efficiency of hospitals in the Czech Republic
Procházková, Jana; Šťastná, Lenka
2011-01-01
The paper estimates the cost efficiency of 99 general hospitals in the Czech Republic during 2001-2008 using Stochastic Frontier Analysis. We estimate a baseline model and also a model accounting for various inefficiency determinants. Group-specific inefficiency is present even after controlling for a number of characteristics. We found that inefficiency increases with teaching status, more than 20,000 treated patients a year, not-for-profit status and a larger share of the elderly in the municipa...
Waldo, Staffan
2007-01-01
While individual data form the base for much empirical analysis in education, this is not the case for analysis of technical efficiency. In this paper, efficiency is estimated using individual data which is then aggregated to larger groups of students. Using an individual approach to technical efficiency makes it possible to carry out studies on a…
Microfinance, Efficiency and Agricultural Production in Bangladesh
Islam, K. M. Zahidul
2011-01-01
The objectives of this study were to make a detailed and systematic empirical analysis of microfinance borrowers and non-borrowers in Bangladesh and also examine how efficiency measures are influenced by the access to agricultural microfinance. In the empirical analysis, this study used both parametric and non-parametric frontier approaches to investigate differences in efficiency estimates between microfinance borrowers and non-borrowers. This thesis, based on five articles, applied data obt...
Technical Efficiency in Louisiana Sugar Cane Processing
Johnson, Jason L.; Zapata, Hector O.; Heagler, Arthur M.
1995-01-01
Participants in the Louisiana sugar cane industry have provided little information related to the efficiency of sugar processing operations. Using panel data from the population of Louisiana sugar processors, alternative model specifications are estimated using stochastic frontier methods to measure the technical efficiency of individual sugar factories. Results suggest the Louisiana sugar processing industry is characterized by a constant returns to scale Cobb-Douglas processing function wit...
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exponents in hidden Markov model order estimation, 2003). We propose an algorithm to efficiently compute the estimator and provide simulation studies to support our result.
Golbabaei-Asl, M.; Knight, D.; Wilkinson, S.
2013-01-01
The thermal efficiency of a SparkJet is evaluated by measuring the impulse response of a pendulum subject to a single spark discharge. The SparkJet is attached to the end of a pendulum. A laser displacement sensor is used to measure the displacement of the pendulum upon discharge. The pendulum motion is a function of the fraction of the discharge energy that is channeled into the heating of the gas (i.e., increasing the translational-rotational temperature). A theoretical perfect gas model is used to estimate the portion of the energy from the heated gas that results in equivalent pendulum displacement as in the experiment. Earlier results from multiple runs at different capacitances of C = 3, 5, 10, 20, and 40 μF demonstrate that the thermal efficiency decreases with higher capacitive discharges [1]. In the current paper, results from additional run cases have been included and confirm the previous results
The Efficiency of Educational Production
DEFF Research Database (Denmark)
Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben
Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use Data Envelopment Analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications in order to analyse different aspects of efficiency. In purely quantitative models (where inputs and outputs are expenditure and number of students at different levels of the educational system) and in models where graduation or completion rates are included as an indicator of output quality, Finland is...
The Efficiency of Educational Production
DEFF Research Database (Denmark)
Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben
2015-01-01
Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use data envelopment analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications in order to analyse different aspects of efficiency. In purely quantitative models (where inputs and outputs are expenditure and number of students at different levels of the educational system) and in models where graduation or completion rates are included as indicators of output quality, Finland is the...
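In the special case of one input and one output under constant returns to scale, the DEA input-efficiency score reduces to each unit's output/input ratio divided by the best observed ratio; the general multi-input, multi-output case in the paper requires solving a linear program per unit. The country data below are invented purely for illustration.

```python
# Hypothetical (input = expenditure, output = graduates) data -- illustrative only
data = {
    "A": (100.0, 80.0),
    "B": (120.0, 90.0),
    "C": (90.0, 85.0),
}

# Single-input, single-output CRS DEA: efficiency = own ratio / best ratio
best = max(y / x for x, y in data.values())
eff = {k: (y / x) / best for k, (x, y) in data.items()}
for k, e in sorted(eff.items()):
    print(k, round(e, 3))   # -> A 0.847 / B 0.794 / C 1.0
```

Unit C defines the frontier; A and B would need to shrink inputs to about 85% and 79% of their current levels to match it.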
Hopf limit cycles estimation in power systems
Energy Technology Data Exchange (ETDEWEB)
Barquin, J.; Gomez, T.; Pagola, L.F. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica
1995-11-01
This paper addresses the computation of the Hopf limit cycle. This limit cycle is associated with the appearance of an oscillatory instability in dynamical power systems. An algorithm is proposed to estimate the dimensions and shape of this limit cycle. The algorithm is computationally efficient and able to deal with large power systems. 7 refs, 4 figs, 1 tab
On Frequency Domain Models for TDOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll;
2015-01-01
much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number of...
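The cross-correlation method that this abstract benchmarks can be sketched in a few lines: the TDOA estimate is the integer lag that maximizes the cross-correlation between the two received signals. The signals below are synthetic, with an assumed true delay of 12 samples.

```python
import random

random.seed(0)

def cross_correlation_delay(x, y, max_lag):
    """Return the integer lag maximizing sum_n x[n] * y[n + lag]."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for n_i in range(len(x)):
            m = n_i + lag
            if 0 <= m < len(y):
                s += x[n_i] * y[m]
        if s > best_val:
            best_lag, best_val = lag, s
    return best_lag

# Source signal and a delayed, noisy copy (true delay: 12 samples)
src = [random.gauss(0, 1) for _ in range(400)]
delay = 12
mic2 = [0.0] * delay + [s + random.gauss(0, 0.1) for s in src]

print(cross_correlation_delay(src, mic2, 50))   # -> 12
```

The estimator is statistically efficient only under the conditions the paper establishes (e.g. a periodic source with known fundamental frequency); for arbitrary signals it is simply a reasonable, widely used heuristic.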
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the...
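The key scaling fact behind range-based estimators is that for Brownian motion the expected squared high-low range over an interval is 4 ln 2 times the interval's variance (Parkinson's constant). The sketch below simulates a fine-grained Brownian log-price, computes per-"day" ranges, and rescales their squared sum into an estimate of the integrated variance; the grid sizes and volatility are assumptions for illustration, not the paper's jump-diffusion setting.

```python
import math, random

random.seed(1)

sigma = 0.02                     # daily volatility of the simulated log-price
n_days, steps = 250, 500         # days, fine grid points per day

def simulate_day_range():
    """High-low range of one day of a discretized Brownian log-price."""
    p, hi, lo = 0.0, 0.0, 0.0
    for _ in range(steps):
        p += random.gauss(0, sigma / math.sqrt(steps))
        hi, lo = max(hi, p), min(lo, p)
    return hi - lo

ranges = [simulate_day_range() for _ in range(n_days)]

# Realized range-based variance: E[range^2] = 4 ln 2 * sigma^2 per day,
# so dividing the squared-range sum by 4 ln 2 estimates the integrated variance.
rrv = sum(r * r for r in ranges) / (4.0 * math.log(2.0))
true_qv = n_days * sigma ** 2
print(round(rrv / true_qv, 2))   # ratio should be close to 1
```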
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
In this paper, we propose using realized range-based estimation to draw inference about the quadratic variation of jump-diffusion processes. We also construct a new test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the test...
Efficiency analysis of betavoltaic elements
Sachenko, A. V.; Shkrebtii, A. I.; Korkishko, R. M.; Kostylyov, V. P.; Kulish, M. R.; Sokolovskyi, I. O.
2015-09-01
The conversion of the energy of electrons produced by a radioactive β-source into electricity in Si and SiC p-n junctions is modeled. The features of the generation function that describes electron-hole pair production by an electron flux and the emergence of a "dead layer" are discussed. The collection efficiency Q, which describes the rate of electron-hole pair production by incident beta particles, is calculated taking into account the presence of the dead layer. It is shown that in the case of high-grade Si p-n junctions, the collection efficiency of electron-hole pairs created by a high-energy electron flux (such as, e.g., the Pm-147 beta flux) is close to or equal to unity in a wide range of electron energies. For SiC p-n junctions, Q is near unity only for electrons with relatively low energies of about 5 keV (produced, e.g., by a tritium source) and decreases rapidly with further increase of electron energy. The conditions under which the influence of the dead layer on the collection efficiency is negligible are determined. The open-circuit voltage is calculated for realistic values of the minority carriers' diffusion coefficients and lifetimes in Si and SiC p-n junctions irradiated by a high-energy electron flux. Our calculations allow us to estimate the attainable efficiency of betavoltaic elements.
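The open-circuit voltage of such a junction follows the standard diode relation V_oc = (kT/q)·ln(1 + J_beta/J_0). The sketch below evaluates it for hypothetical current densities; the paper's values for Si and SiC would differ, so treat the numbers as placeholders.

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # temperature, K

# Hypothetical current densities -- illustrative only
J_beta = 50e-9        # beta-induced short-circuit current density, A/cm^2
J_0 = 1e-15           # diode saturation current density, A/cm^2

# Standard diode open-circuit voltage
V_oc = (k_B * T / q) * math.log(1.0 + J_beta / J_0)
print(round(V_oc, 3))   # -> 0.458
```

Because the beta-induced current is tiny, V_oc depends on keeping the saturation current J_0 very low, which is why high-grade junctions matter for betavoltaics.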
Embedding capacity estimation of reversible watermarking schemes
Indian Academy of Sciences (India)
Rishabh Iyer; Rushikesh Borse; Subhasis Chaudhuri
2014-12-01
Estimation of the embedding capacity is an important problem, specifically in reversible multi-pass watermarking, and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on disjoint groups of pixels, specifically on pixel pairs. The proposed algorithm iteratively updates the co-occurrence matrix at every stage to estimate the multi-pass embedding capacity, and is much more efficient vis-a-vis actual watermarking. We also suggest an extremely efficient, pre-computable tree-based implementation which is conceptually similar to the co-occurrence based method, but provides the estimates in a single iteration, requiring a complexity akin to that of single-pass capacity estimation. We also provide upper bounds on the embedding capacity. We finally evaluate the performance of our algorithms on recent watermarking algorithms.
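The paper's co-occurrence algorithm is not reproduced here, but the single-pass quantity it builds on can be illustrated: for difference-expansion watermarking on pixel pairs, the capacity is the number of pairs whose expanded difference keeps both pixels in [0, 255]. The "image" below is random data, purely for illustration.

```python
import random

random.seed(3)

def expandable(a, b):
    """Difference-expansion check (Tian-style): after doubling the difference
    and embedding one bit, both reconstructed pixels must stay in [0, 255]."""
    l = (a + b) // 2           # integer average
    h = a - b                  # difference
    for bit in (0, 1):
        h2 = 2 * h + bit
        a2 = l + (h2 + 1) // 2
        b2 = l - h2 // 2
        if not (0 <= a2 <= 255 and 0 <= b2 <= 255):
            return False
    return True

# Hypothetical 8-bit "image" as a flat list; pair up adjacent pixels
pixels = [random.randint(0, 255) for _ in range(10000)]
pairs = list(zip(pixels[::2], pixels[1::2]))
capacity_bits = sum(expandable(a, b) for a, b in pairs)
print(capacity_bits, "of", len(pairs), "pairs embeddable")
```

Real images, with strongly correlated neighboring pixels, yield far higher expandable fractions than this random data; the multi-pass question is how this count evolves as embedding itself reshapes the pair statistics.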
Coordination of Energy Efficiency and Demand Response
Energy Technology Data Exchange (ETDEWEB)
Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison
2010-01-29
This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries - which consume more than 70 percent of the nation's natural gas and electricity - is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that 'the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW' by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.
Modified Maximum Likelihood Estimation from Censored Samples in Burr Type X Distribution
Directory of Open Access Journals (Sweden)
R.R.L. Kantam
2015-12-01
Full Text Available The two-parameter Burr type X distribution is considered and its scale parameter is estimated from a censored sample using the classical maximum likelihood method. The estimating equations are modified to get simpler and more efficient estimators. Two methods of modification are suggested. The small-sample efficiencies are presented.
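With shape parameter one, the Burr type X family reduces to a Rayleigh-type law, for which the type-II censored MLE of the scale has a closed form: with the r smallest of n observations, σ̂² = (Σx(i)² + (n−r)·x(r)²)/(2r). The sketch below (not the paper's modified estimating equations) illustrates scale estimation from a censored sample on simulated data.

```python
import math, random

random.seed(5)

sigma_true = 2.0
n, r = 200, 150     # observe only the r smallest of n lifetimes (type-II censoring)

# Rayleigh variates via inverse transform: x = sigma * sqrt(-2 ln U)
sample = sorted(sigma_true * math.sqrt(-2.0 * math.log(random.random()))
                for _ in range(n))
observed = sample[:r]
x_r = observed[-1]      # the censoring point (r-th order statistic)

# Closed-form censored MLE of sigma^2 for the Rayleigh law
sigma2_hat = (sum(x * x for x in observed) + (n - r) * x_r * x_r) / (2.0 * r)
print(round(math.sqrt(sigma2_hat), 2))
```

The (n−r)·x(r)² term is the censored observations' contribution through the survival function; dropping it would bias the scale estimate downward.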
Pose estimation and frontal face detection for face recognition
Lim, Eng Thiam; Wang, Jiangang; Xie, Wei; Ronda, Venkarteswarlu
2005-05-01
This paper proposes a pose estimation and frontal face detection algorithm for face recognition. Considering its application in a real-world environment, the algorithm has to be robust yet computationally efficient. The main contribution of this paper is the efficient face localization, scale and pose estimation using color models. Simulation results showed very low computational load when compared to other face detection algorithms. The second contribution is the introduction of a low-dimensional statistical face geometrical model. Compared to other statistical face models, the proposed method models the face geometry efficiently. The algorithm is demonstrated on a real-time system. The simulation results indicate that the proposed algorithm is computationally efficient.
Estimating Function Approaches for Spatial Point Processes
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting
Air transportation energy efficiency
Williams, L. J.
1977-01-01
The energy efficiency of air transportation, results of the recently completed RECAT studies on improvement alternatives, and the NASA Aircraft Energy Efficiency Research Program to develop the technology for significant improvements in future aircraft were reviewed.
Generalized Agile Estimation Method
Directory of Open Access Journals (Sweden)
Shilpa Bahlerao
2011-01-01
Full Text Available Agile cost estimation always offers research prospects due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. This method has many advantages but also some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose an iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm incorporates various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also provides flexibility to project managers in deciding on the number of vital factors, uncertainty level and project domains, thereby maintaining agility.
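The shape of such a factor-and-uncertainty estimate can be sketched as below. The factor names, weights, priority damping, and uncertainty multiplier are all hypothetical illustrations of the general idea (a baseline scaled by prioritized vital factors plus a risk inflation), not GEM's actual parameters.

```python
# Hypothetical GEM-style estimate -- factor names and weights are illustrative,
# not taken from the paper.
base_effort = 120.0          # story-point-derived baseline, person-hours

vital_factors = {            # factor: (weight, priority level 1-3)
    "team_experience": (0.9, 1),
    "requirement_volatility": (1.2, 2),
    "tool_support": (0.95, 3),
}
uncertainty = 1.15           # > 1 inflates the estimate to cover project risk

effort = base_effort * uncertainty
for name, (weight, priority) in vital_factors.items():
    # Higher-priority factors (lower level number) count fully;
    # lower-priority factors are damped toward a neutral 1.0.
    damped = 1.0 + (weight - 1.0) / priority
    effort *= damped

print(round(effort, 1))   # -> 134.3
```

Letting the project manager choose which factors enter, their priorities, and the uncertainty level is what keeps such a scheme adaptable across project domains.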
Heemstra, F.J.
1992-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be estimated? (4) What can software project management expect from SCE models, how accurate are estimations which are made using these kind of models, and what are the pros and cons of cost estimatio...
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Efficient flapping flight of pterosaurs
Strang, Karl Axel
In the late eighteenth century, humans discovered the first pterosaur fossil remains and have been fascinated by their existence ever since. Pterosaurs exploited their membrane wings in a sophisticated manner for flight control and propulsion, and were likely the most efficient and effective flyers ever to inhabit our planet. The flapping gait is a complex combination of motions that sustains and propels an animal in the air. Because pterosaurs were so large, with wingspans up to eleven meters, if they could have sustained flapping flight, they would have had to achieve high propulsive efficiencies. Identifying the wing motions that contribute the most to propulsive efficiency is key to understanding pterosaur flight, and therefore to shedding light on flapping flight in general and the design of efficient ornithopters. This study is based on published results for a very well-preserved specimen of Coloborhynchus robustus, for which the joints are well-known and thoroughly described in the literature. Simplifying assumptions are made to estimate the characteristics that cannot be inferred directly from the fossil remains. For a given animal, maximizing efficiency is equivalent to minimizing power at a given thrust and speed. We therefore aim at finding the flapping gait, that is the joint motions, that minimizes the required flapping power. The power is computed from the aerodynamic forces created during a given wing motion. We develop an unsteady three-dimensional code based on the vortex-lattice method, which correlates well with published results for unsteady motions of rectangular wings. In the aerodynamic model, the rigid pterosaur wing is defined by the position of the bones. In the aeroelastic model, we add the flexibility of the bones and of the wing membrane. The nonlinear structural behavior of the membrane is reduced to a linear modal decomposition, assuming small deflections about the reference wing geometry. The reference wing geometry is computed for
Barriers to Industrial Energy Efficiency - Report to Congress, June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This report examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This report also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
EFFICIENCY ASSESSMENT OF THE PERSONNEL AUDIT AT THE MODERN ENTERPRISE
Maslova, E. V.; Tsvetkova, E. V.
2015-01-01
The article considers the main criteria for assessing the efficiency of personnel audit, the structure of auditor risks when carrying out personnel audit, and a complex technique for assessing the efficiency of personnel audit on the basis of three methodical approaches: a method of expert estimation, a method of economic efficiency assessment applying factorial analysis, and a ranking method for assessing the efficiency of administrative processes.
Barriers to Industrial Energy Efficiency - Study (Appendix A), June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This study examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This study also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
Gardiner, John Corby
The electric power industry market structure has changed over the last twenty years since the passage of the Public Utility Regulatory Policies Act (PURPA). These changes include the entry by unregulated generator plants and, more recently, the deregulation of entry and price in the retail generation market. Such changes have introduced and expanded competitive forces on the incumbent electric power plants. Proponents of this deregulation argued that the enhanced competition would lead to a more efficient allocation of resources. Previous studies of power plant technical and allocative efficiency have failed to measure technical and allocative efficiency at the plant level. In contrast, this study uses panel data on 35 power plants over 59 years to estimate technical and allocative efficiency of each plant. By using a flexible functional form, which is not constrained by the assumption that regulation is constant over the 59 years sampled, the estimation procedure accounts for changes in both state and national regulatory/energy policies that may have occurred over the sample period. The empirical evidence presented shows that most of the power plants examined have operated more efficiently since the passage of PURPA and the resultant increase of competitive forces. Chapter 2 extends the model used in Chapter 1 and clarifies some issues in the efficiency literature by addressing the case where homogeneity does not hold. A more general model is developed for estimating both input and output inefficiency simultaneously. This approach reveals more information about firm inefficiency than the single estimation approach that has previously been used in the literature. Using the more general model, estimates are provided on the type of inefficiency that occurs as well as the cost of inefficiency by type of inefficiency. In previous studies, the ranking of firms by inefficiency has been difficult because of the cardinal and ordinal differences between different types of
Reconsidering energy efficiency
International Nuclear Information System (INIS)
Energy and environmental policies are reconsidering energy efficiency. In a perfect market, rational and well-informed consumers reach economic efficiency which, at the given prices of energy and capital, corresponds to physical efficiency. In the real world, market failures and cognitive frictions keep consumers from perfectly rational and informed choices. Green incentive schemes aim at balancing market failures and directing consumers toward more efficient goods and services. The problem is to fine-tune the incentive schemes.
Recursive and Fast Recursive Capon Spectral Estimators
Directory of Open Access Journals (Sweden)
Yiteng (Arden) Huang
2007-01-01
Full Text Available The Capon algorithm, which was originally proposed for wavenumber estimation in array signal processing, has become a powerful tool for spectral analysis. Over several decades, a significant amount of research attention has been devoted to the estimation of the Capon spectrum. Most of the algorithms developed thus far, however, rely on the direct computation of the inverse of the input correlation (or covariance) matrix, which can be computationally very expensive, particularly when the dimension of the matrix is large. This paper deals with fast and efficient algorithms for computing the Capon spectrum. Inspired by the recursive idea established in adaptive signal processing theory, we first derive a recursive Capon algorithm. This new algorithm does not require an explicit matrix inversion, and hence is more efficient to implement than the direct-inverse approach. We then develop a fast version of the recursive algorithm based on techniques used in fast recursive least-squares adaptive algorithms. This new fast algorithm can further reduce the complexity of the recursive Capon algorithm by an order of magnitude. Although our focus is on Capon spectral estimation, the ideas shown in this paper can also be generalized and applied to other applications. To illustrate this, we show how to apply the recursive idea to the estimation of the magnitude squared coherence function, which plays an important role in problems like time-delay estimation, signal-to-noise ratio estimation, and double-talk detection in echo cancellation.
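The direct-inverse baseline the paper improves upon is easy to state: the Capon (minimum variance) spectrum is P(f) = 1 / (aᴴ R⁻¹ a), with R the sample correlation matrix and a(f) the steering vector. The sketch below computes it once, with an explicit inverse, for a synthetic sinusoid in noise; it assumes NumPy is available and does not implement the paper's recursive or fast-recursive versions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sinusoid at normalized frequency 0.2 buried in white noise
N, M = 512, 16                      # samples, sub-vector (filter) length
n = np.arange(N)
x = np.cos(2 * np.pi * 0.2 * n) + 0.5 * rng.standard_normal(N)

# Sample correlation matrix from overlapping length-M snapshots
snaps = np.lib.stride_tricks.sliding_window_view(x, M)
R = snaps.T @ snaps / snaps.shape[0]
R_inv = np.linalg.inv(R + 1e-6 * np.eye(M))   # small diagonal load for stability

# Capon spectrum: P(f) = 1 / (a^H R^{-1} a)
freqs = np.linspace(0, 0.5, 501)
m = np.arange(M)
P = np.empty_like(freqs)
for i, f in enumerate(freqs):
    a = np.exp(2j * np.pi * f * m)            # steering vector at frequency f
    P[i] = 1.0 / np.real(np.conj(a) @ R_inv @ a)

f_peak = freqs[np.argmax(P)]
print(round(f_peak, 3))
```

The O(M³) inverse in this direct form is exactly the cost the recursive algorithms avoid; the spectrum itself peaks sharply at the sinusoid's frequency.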
DETERMINANTS OF TECHNICAL EFFICIENCY ON PINEAPPLE FARMING
Directory of Open Access Journals (Sweden)
Nor Diana Mohd Idris
2013-01-01
Full Text Available This study analyzes the pineapple production efficiency of the Integrated Agricultural Development Project (IADP) in Samarahan, Sarawak, Malaysia and also studies its determinants. In the study area, IADP plays an important role in rural development as a poverty alleviation program through agricultural development. Despite the many privileges received by the farmers, especially from the government, they are still less efficient. This study adopts Data Envelopment Analysis (DEA) in measuring technical efficiency. Further, this study aims to examine the determinants of efficiency by estimating the level of farmer characteristics as a function of the farmer's age, education level, family labor, years of experience in agriculture, society membership and farm size. The estimation used the Tobit model. The results from this study show that the majority of farmers in IADP are still less efficient. In addition, the results show that relying on family labor, the years of experience in agriculture and also participation as an association member are all important determinants of the level of efficiency for the IADP farmers in the agricultural sector. Increasing agricultural productivity can also guarantee the achievement of a more optimal sustainable living in an effort to increase the farmers' income. Such information is valuable for extension services and policy makers since it can help to guide policies toward increased efficiency among pineapple farmers in Malaysia.
Energy efficiency; Efficacite energetique
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-06-15
This road-map proposed by the Total Group aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, energy efficiency in industry, and Total's commitment. (A.L.B.)
BANKING INDUSTRY, MARKET STRUCTURE AND EFFICIENCY: THE REVISITED MODEL TO INTERMEDIARY HYPOTHESES
Sami Mensi
2011-01-01
The object of this article is to propose a new conception of the structure-conduct-performance/efficient-structure relationship. Alongside standard hypotheses, we retain two intermediary hypotheses, named the modified efficient structure hypothesis and the hybrid efficiency/collusion hypothesis. The models are estimated using a random-effects estimating procedure over a sample of Tunisian commercial banks during the period 1990-2005. The results about the variable efficiency cannot reject the efficie...
Waterproofing of facades - an efficient measure
Energy Technology Data Exchange (ETDEWEB)
Franke, L.; Kittl, R.; Witt, S.
1987-09-01
A discussion follows on possibilities for estimating the water absorption of masonry facades exposed to driving rain and their subsequent drying behaviour. In addition, proposals are given on how to check the efficiency of waterproofing applied to masonry facades, particularly with regard to joint permeabilities which might still exist.
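A common first-order model for such estimates is capillary absorption growing with the square root of wetting time, w(t) = A·√t, where A is the water absorption coefficient. The sketch below compares an untreated and a hydrophobically treated facade with hypothetical coefficients; the values are illustrative, not from the paper.

```python
import math

# Capillary water absorption: w(t) = A * sqrt(t),
# A in kg/(m^2 h^0.5). Hypothetical coefficients -- illustrative only.
A_before = 4.0    # untreated brick masonry
A_after = 0.1     # after hydrophobic (waterproofing) treatment

for hours in (1, 6, 24):
    w_b = A_before * math.sqrt(hours)
    w_a = A_after * math.sqrt(hours)
    print(f"{hours:>2} h: {w_b:6.2f} vs {w_a:5.2f} kg/m^2")
```

Comparing A before and after treatment is one practical way to quantify the efficiency of a waterproofing measure, though leaky joints can dominate the real-world uptake regardless of the surface coefficient.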
On Local and Nonlocal Measures of Efficiency
Kallenberg, Wilbert C.M.; Ledwina, Teresa
1987-01-01
General results on the limiting equivalence of local and nonlocal measures of efficiency are obtained. Why equivalence occurs in so many testing and estimation problems is clarified. Uniformity of the convergence is a key point. The concepts of Frechet- and Hadamard-type differentiability, which imp
Rajdl, Kamil; Lansky, Petr
2014-02-01
The Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of the sample variance to the sample mean of spike counts observed in a time window, and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. It is shown which characteristics of the spike train have a large effect on the estimator bias. Namely, the effect of the refractory period is analytically evaluated. Next, we derive an approximate asymptotic formula for the mean square error of the estimator, which can also be used to find the minimum of the error in estimation from single spike trains. The accuracy of the Fano factor estimator is compared with the accuracy of the estimator based on the squared coefficient of variation. All the results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss how to select a suitable observation window for Fano factor estimation. PMID:24245675
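The estimator in question is simply the variance-to-mean ratio of windowed spike counts. The sketch below, with assumed rates and window lengths, computes it for a Poisson train (gamma shape 1, where F → 1) and a more regular gamma-4 renewal train (where F → 1/4 for long windows), making the window-length dependence visible; it is an illustration, not the paper's analytical bias formula.

```python
import random

random.seed(2)

def fano(counts):
    """Variance-to-mean ratio of a list of spike counts."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return v / m

def renewal_spikes(n_spikes, rate, shape):
    """Spike times of a renewal process with gamma(shape) ISIs of mean 1/rate."""
    t, times = 0.0, []
    for _ in range(n_spikes):
        # gamma(shape) ISI as a sum of 'shape' exponentials
        t += sum(random.expovariate(rate * shape) for _ in range(shape))
        times.append(t)
    return times

def windowed_counts(times, T):
    """Spike counts in consecutive windows of length T."""
    n_win = int(times[-1] // T)
    counts = [0] * n_win
    for t in times:
        i = int(t // T)
        if i < n_win:
            counts[i] += 1
    return counts

times_poisson = renewal_spikes(20000, rate=10.0, shape=1)  # Poisson: F -> 1
times_gamma = renewal_spikes(20000, rate=10.0, shape=4)    # regular: F -> 1/4

for T in (0.1, 1.0, 10.0):
    print(T, round(fano(windowed_counts(times_poisson, T)), 2),
          round(fano(windowed_counts(times_gamma, T)), 2))
```

For the regular train the estimate drifts from near 1 at very short windows toward the asymptotic CV² = 1/4 at long ones, which is precisely why window choice matters.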
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
Market conditions affecting energy efficiency investments
International Nuclear Information System (INIS)
The global energy efficiency market is growing, due in part to energy sector and macroeconomic reforms and increased awareness of the environmental benefits of energy efficiency. Many countries have promoted open, competitive markets, thereby stimulating economic growth. They have reduced or removed subsidies on energy prices, and governments have initiated energy conservation programs that have spurred the wider adoption of energy efficiency technologies. The market outlook for energy efficiency is quite positive. The global market for end-use energy efficiency in the industrial, residential and commercial sectors is now estimated to total more than $34 billion per year. There is still enormous technical potential to implement energy conservation measures and to upgrade to the best available technologies for new investments. For many technologies, energy-efficient designs now represent less than 10-20% of new product sales. Thus, creating favorable market conditions should be a priority. There are a number of actions that can be taken to create favorable market conditions for investing in energy efficiency. Fostering a market-oriented energy sector will lead to energy prices that reflect the true cost of supply. Policy initiatives should address known market failures and should support energy efficiency initiatives. And market transformation for energy efficiency products and services can be facilitated by creating an institutional and legal structure that favors commercially-oriented entities
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to...
Transverse Spectral Velocity Estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2014-01-01
array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile flow using the Womersley–Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer. A pump generated artificial femoral and carotid artery flow in the phantom. The estimated spectra degrade when the angle is different from 90°, but are usable down to 60° to 70°. Below this angle the traditional spectrum is best and should be used. The conventional approach can automatically be...
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...... estimator automatically compensates for the axial velocity, when determining the transverse velocity by using fourth order moments rather than second order moments. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by using averaging of...... RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce spatial velocity dispersion....
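The autocorrelation approach underlying this family of estimators can be sketched with the standard lag-one (Kasai) estimator, which uses second-order moments to recover the axial Doppler shift; the patent's fourth-order-moment transverse extension is not reproduced here, and the signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated slow-time IQ ensemble: complex samples from one depth whose
# phase advances by 2*pi*f_shift per pulse (axial motion), plus noise.
n_pulses = 64
f_shift = 0.12                      # Doppler shift, fraction of the PRF
t = np.arange(n_pulses)
iq = np.exp(2j * np.pi * f_shift * t) + 0.1 * (rng.normal(size=n_pulses)
                                               + 1j * rng.normal(size=n_pulses))

# Lag-one autocorrelation estimator: the phase of R(1) gives the mean
# Doppler frequency, which is proportional to the axial velocity.
r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
f_est = np.angle(r1) / (2 * np.pi)

print(f_est)
```

Averaging the lag products over many RF samples, as the abstract describes, reduces the noise in `r1` before the phase is extracted.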
DEFF Research Database (Denmark)
2015-01-01
A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including...
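The second stage (decomposing the first coefficient estimates in a dictionary matrix times a sparse vector) is the classic sparse-recovery setting. A minimal orthogonal matching pursuit sketch, with an assumed random Gaussian dictionary standing in for whatever dictionary the patent specifies:

```python
import numpy as np

rng = np.random.default_rng(2)

# First-stage channel coefficient estimates modeled as D @ s with a
# sparse vector s (few dominant propagation paths) plus estimation noise.
n_coef, n_atoms, sparsity = 64, 128, 3
D = rng.normal(size=(n_coef, n_atoms)) / np.sqrt(n_coef)   # dictionary
s_true = np.zeros(n_atoms)
s_true[rng.choice(n_atoms, sparsity, replace=False)] = [1.5, -1.0, 0.8]
h_first = D @ s_true + 0.01 * rng.normal(size=n_coef)

# Orthogonal matching pursuit: greedily pick the atom most correlated
# with the residual, then re-fit all selected atoms by least squares.
support, resid = [], h_first.copy()
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(D.T @ resid))))
    coef, *_ = np.linalg.lstsq(D[:, support], h_first, rcond=None)
    resid = h_first - D[:, support] @ coef

s_hat = np.zeros(n_atoms)
s_hat[support] = coef
print(sorted(support))
```

The recovered support identifies the dominant paths, and the refit coefficients are the "second coefficient estimates" of the abstract's terminology.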
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence....... The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and...... compared with the averaged periodogram (Welch’s method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set of...
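The baseline the adaptive methods are compared against, Welch's averaged periodogram, is easy to sketch: split the slow-time signal into overlapping windowed segments and average their periodograms, trading frequency resolution for variance. The tone frequency and segment sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic slow-time signal: a tone at 0.2 cycles/sample in noise,
# standing in for one velocity component in a spectrogram column.
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=n)

# Welch's method: overlapping Hann-windowed segments, averaged
# periodograms.
seg, hop = 128, 64
win = np.hanning(seg)
psd = np.zeros(seg)
count = 0
for start in range(0, n - seg + 1, hop):
    frame = x[start:start + seg] * win
    psd += np.abs(np.fft.fft(frame)) ** 2
    count += 1
psd /= count

freqs = np.fft.fftfreq(seg)
peak = abs(freqs[np.argmax(psd[: seg // 2])])
print(peak)
```

With a 128-sample window the resolution is 1/128 cycles/sample; the adaptive methods in the paper aim to do better with far shorter observation windows.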
Revisiting energy efficiency fundamentals
Energy Technology Data Exchange (ETDEWEB)
Perez-Lombard, L.; Velazquez, D. [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville (Spain); Ortiz, J. [Building Research Establishment (BRE), Garston, Watford, WD25 9XX (United Kingdom)
2013-05-15
Energy efficiency is a central target for energy policy and a keystone for mitigating climate change and achieving sustainable development. Although great efforts have been made over the last four decades to investigate the issue, focusing on measuring energy efficiency, understanding its trends and impacts on energy consumption, and designing effective energy efficiency policies, many energy efficiency-related concepts, some methodological problems in the construction of energy efficiency indicators (EEI), and even some of the potential energy efficiency gains are often ignored or misunderstood, causing considerable confusion and controversy not only for laymen but even for specialists. This paper aims to revisit, analyse and discuss some fundamental efficiency topics that could improve the understanding and critical judgement of efficiency stakeholders and help avoid unfounded judgements and misleading statements. Firstly, we address the problem of measuring energy efficiency in both qualitative and quantitative terms. Secondly, the main methodological problems standing in the way of constructing EEI are discussed, and a sequence of actions is proposed to tackle them in an ordered fashion. Finally, two key topics are discussed in detail: the links between energy efficiency and energy savings, and the border between energy efficiency improvement and renewable sources promotion.
CEE Energy Efficiency Report - Slovakia
International Nuclear Information System (INIS)
need to be increased significantly if the proposed targets are to be realised. This increase in budget allocation would enable the implementation of programmes to significantly reduce energy imports and therefore lead to an improvement in the balance of payments. The adoption of these instruments will be beneficial for the entire economy. The most obvious impact is related to the level of energy imports, and therefore the balance of payments. The reduction in energy imports is estimated between 8% (low targets) and 12% (high targets) for natural gas, and between 8% (low targets) and 14% (high targets) for petroleum products. Furthermore it is estimated that the implementation of the proposed energy efficiency could create approximately 10,000 new jobs. The annual reduction in CO2 emissions has been estimated between 9 million tonnes (low targets) and 16 million tonnes (high targets)
Efficient statistical classification of satellite measurements
Mills, Peter
2012-01-01
Supervised statistical classification is a vital tool for satellite image processing. It is useful not only when a discrete result, such as feature extraction or surface type, is required, but also for continuum retrievals by dividing the quantity of interest into discrete ranges. Because of the high resolution of modern satellite instruments and because of the requirement for real-time processing, any algorithm has to be fast to be useful. Here we describe an algorithm based on kernel estimation called Adaptive Gaussian Filtering that incorporates several innovations to produce superior efficiency as compared to three other popular methods: k-nearest-neighbour (KNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). This efficiency is gained with no compromises: accuracy is maintained, while estimates of the conditional probabilities are returned. These are useful not only to gauge the accuracy of an estimate in the absence of its true value, but also to re-calibrate a retrieved image and...
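A kernel-estimation classifier that returns conditional class probabilities, as the abstract describes, can be sketched with a plain Gaussian-kernel (Parzen-style) estimator; this is a generic stand-in, not the paper's Adaptive Gaussian Filtering algorithm, and the two-channel feature data are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-class training set: 2-D features (e.g. two satellite channels).
n = 200
x0 = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2))   # class 0
x1 = rng.normal(loc=[+1.0, +1.0], scale=0.7, size=(n, 2))   # class 1
X = np.vstack([x0, x1])
y = np.array([0] * n + [1] * n)

def kernel_class_prob(X, y, query, bandwidth=0.5):
    """Gaussian-kernel estimate of P(class = 1 | query)."""
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return np.sum(w[y == 1]) / np.sum(w)

p = kernel_class_prob(X, y, np.array([1.0, 1.0]))
print(p)
```

The returned probability can be thresholded for a discrete class decision, or kept as-is to gauge retrieval confidence and to re-calibrate a classified image, as the abstract notes.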
Shrinkage Estimators for Covariance Matrices
Daniels, Michael J.; Kass, Robert E.
2001-01-01
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer paramet...
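The instability the abstract describes, and the shrinkage fix, can be shown in a few lines with a simple linear shrinkage toward a scaled identity (the shrinkage intensity is fixed by hand here; the estimators the paper studies choose it from the data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Small sample from a 10-dimensional Gaussian: the sample covariance
# eigenvalues spread far from the true (all-ones) spectrum.
p, n = 10, 15
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
S = np.cov(X, rowvar=False)                    # unstable sample estimate

# Linear shrinkage toward a scaled-identity target pulls the smallest
# eigenvalues up and the largest down.
lam = 0.5                                      # shrinkage intensity (assumed)
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - lam) * S + lam * target

spread_raw = np.ptp(np.linalg.eigvalsh(S))
spread_shrunk = np.ptp(np.linalg.eigvalsh(S_shrunk))
print(spread_raw, spread_shrunk)
```

With this target the eigenvalue spread shrinks by exactly the factor (1 - lam), illustrating why shrinkage estimators stabilize small-sample covariance estimates.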
Ahmad, Mukhtar
2012-01-01
State estimation is one of the most important functions in power system operation and control. This area is concerned with the overall monitoring, control, and contingency evaluation of power systems. It is mainly aimed at providing a reliable estimate of system voltages. State estimator information flows to control centers, where critical decisions are made concerning power system design and operations. This valuable resource provides thorough coverage of this area, helping professionals overcome challenges involving system quality, reliability, security, stability, and economy. Engineers are
Pore Velocity Estimation Uncertainties
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
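The first-order Taylor propagation step is straightforward to reproduce for the Darcy pore velocity v = K·i/θ (conductivity times head gradient over effective porosity). The numbers below are illustrative, not Hanford data:

```python
import numpy as np

# Pore velocity v = K * i / theta. Illustrative values and uncertainties
# (each set to 20% relative error so the contributions are comparable).
K, sig_K = 1.0e-4, 2.0e-5      # hydraulic conductivity, m/s
i, sig_i = 1.0e-3, 2.0e-4      # head gradient, dimensionless
theta, sig_t = 0.25, 0.05      # effective porosity

v = K * i / theta

# First-order Taylor propagation, assuming independent errors:
# var(v) = (dv/dK)^2 var(K) + (dv/di)^2 var(i) + (dv/dtheta)^2 var(theta)
var_v = ((i / theta) ** 2 * sig_K ** 2
         + (K / theta) ** 2 * sig_i ** 2
         + (K * i / theta ** 2) ** 2 * sig_t ** 2)
sig_v = np.sqrt(var_v)
rel = sig_v / v
print(v, rel)
```

Because the model is a product/quotient, the squared relative errors add: three 20% inputs give a relative pore-velocity uncertainty of sqrt(3 × 0.04) ≈ 35%, showing how kriged input uncertainties combine into a pore velocity uncertainty surface.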
Decentralized Distributed Bayesian Estimation
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf
Christensen, Mads
2009-01-01
Periodic signals can be decomposed into sets of sinusoids having frequencies that are integer multiples of a fundamental frequency. The problem of finding such fundamental frequencies from noisy observations is important in many speech and audio applications, where it is commonly referred to as pitch estimation. These applications include analysis, compression, separation, enhancement, automatic transcription and many more. In this book, an introduction to pitch estimation is given and a number of statistical methods for pitch estimation are presented. The basic signal models and associated es
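The decomposition into harmonics of a fundamental suggests one of the simplest statistical pitch estimators, harmonic summation: score each candidate fundamental by the spectral energy at its integer multiples. A minimal sketch with an assumed 220 Hz tone (this is one textbook method, not the book's full catalogue):

```python
import numpy as np

rng = np.random.default_rng(6)

# Periodic signal: fundamental at 220 Hz with two harmonics plus noise.
fs, f0, n = 8000, 220.0, 4096
t = np.arange(n) / fs
x = (np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.3 * np.sin(2 * np.pi * 3 * f0 * t) + 0.2 * rng.normal(size=n))

# Harmonic summation: sum the spectral magnitude at the first few
# integer multiples of each candidate fundamental.
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def harmonic_score(cand, n_harm=3):
    idx = [np.argmin(np.abs(freqs - h * cand)) for h in range(1, n_harm + 1)]
    return sum(spec[i] for i in idx)

cands = np.arange(80.0, 500.0, 1.0)
f0_est = cands[np.argmax([harmonic_score(c) for c in cands])]
print(f0_est)
```

Summing over several harmonics is what disambiguates the true fundamental from subharmonics and octave errors, a recurring theme in pitch estimation.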
Image Enhancement with Statistical Estimation
Directory of Open Access Journals (Sweden)
Soumen Kanrar
2012-05-01
Full Text Available Contrast enhancement is an important area of research in image analysis. Over the past decade, researchers have worked in this domain to develop efficient and adequate algorithms. The proposed method enhances image contrast using a binarization method with the help of Maximum Likelihood Estimation (MLE). The paper aims to enhance the contrast of bimodal and multi-modal images. The proposed methodology uses mathematical information retrieved from the image. In this paper, we use a binarization method that generates the desired histogram by separating image nodes, and produces the enhanced image using histogram specification with the binarization method. The proposed method has shown an improvement in image contrast enhancement compared with the other images.
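Separating a bimodal histogram into two pixel classes can be sketched with Otsu's threshold, a standard stand-in for the paper's maximum-likelihood node separation (the synthetic two-mode "image" below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic bimodal image: dark background and bright foreground pixels.
img = np.concatenate([rng.normal(60, 10, 6000),
                      rng.normal(180, 15, 4000)]).clip(0, 255).astype(int)

# Otsu's method: choose the gray level that maximizes the between-class
# variance of the histogram, i.e. the best two-class separation.
hist = np.bincount(img, minlength=256).astype(float)
prob = hist / hist.sum()
best_t, best_var = 0, -1.0
for t in range(1, 256):
    w0, w1 = prob[:t].sum(), prob[t:].sum()
    if w0 == 0 or w1 == 0:
        continue
    mu0 = (np.arange(t) * prob[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    if var_between > best_var:
        best_t, best_var = t, var_between

binary = img >= best_t
print(best_t)
```

The resulting binary mask identifies the two pixel populations, after which histogram specification can be applied to each class to produce the enhanced image.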
Semi-blind Channel Estimator for OFDM-STC
Institute of Scientific and Technical Information of China (English)
WU Yun; LUO Han-wen; SONG Wen-tao; HUANG Jian-guo
2007-01-01
Channel state information of an OFDM-STC system is required for maximum likelihood decoding. A subspace-based semi-blind method was proposed for estimating the channels of OFDM-STC systems. The channels are first estimated blindly up to an ambiguity parameter utilizing the inherent structure of STC, irrespective of the underlying signal constellations. Furthermore, a method was proposed to resolve the ambiguity by using a few pilot symbols. The simulation results show the proposed semi-blind estimator can achieve higher spectral efficiency and provide improved estimation performance compared to the non-blind estimator.
Maertens, A P; Yue, D K P
2014-01-01
It is shown that the system efficiency of a self-propelled flexible body is ill-defined unless one considers the concept of quasi-propulsive efficiency, defined as the ratio of the power needed to tow a body in rigid-straight condition over the power it needs for self-propulsion, both measured at the same speed. Through examples we show that the quasi-propulsive efficiency is the only rational non-dimensional metric of the propulsive fitness of fish and fish-like mechanisms. Using two-dimensional viscous simulations and the concept of quasi-propulsive efficiency, we discuss the efficiency of two-dimensional undulating foils. We show that low efficiencies, due to adverse body-propulsor hydrodynamic interactions, cannot be accounted for by the increase in friction drag.
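The definition reduces to a single ratio, which a tiny sketch makes concrete (the power values are invented, not taken from the paper's simulations):

```python
# Quasi-propulsive efficiency: power needed to tow the rigid-straight
# body at speed U, divided by the power consumed in self-propulsion at
# the same speed.
def quasi_propulsive_efficiency(p_tow, p_self):
    if p_self <= 0:
        raise ValueError("self-propulsion power must be positive")
    return p_tow / p_self

# A body that needs 2.0 W when towed rigid-straight but expends 8.0 W
# swimming at the same speed has eta_QP = 0.25: most of its power goes
# into adverse body-propulsor interaction and wake losses.
eta = quasi_propulsive_efficiency(2.0, 8.0)
print(eta)
```

Because both powers are measured at the same speed, the ratio is non-dimensional and directly comparable across different bodies and gaits.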
The unlikely Carnot efficiency.
Verley, Gatien; Esposito, Massimiliano; Willaert, Tim; Van den Broeck, Christian
2014-01-01
The efficiency of a heat engine is traditionally defined as the ratio of its average output work over its average input heat. Its highest possible value was discovered by Carnot in 1824 and is a cornerstone concept in thermodynamics. It led to the discovery of the second law and to the definition of the Kelvin temperature scale. Small-scale engines operate in the presence of highly fluctuating input and output energy fluxes. They are therefore much better characterized by fluctuating efficiencies. In this study, using the fluctuation theorem, we identify universal features of efficiency fluctuations. While the standard thermodynamic efficiency is, as expected, the most likely value, we find that the Carnot efficiency is, surprisingly, the least likely in the long time limit. Furthermore, the probability distribution for the efficiency assumes a universal scaling form when operating close to equilibrium. We illustrate our results analytically and numerically on two model systems. PMID:25221850
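The distinction between the macroscopic efficiency (ratio of averages) and the fluctuating per-cycle efficiency (ratio of fluctuating quantities) can be illustrated with a toy Monte Carlo; this is a caricature for intuition only, not the paper's fluctuation-theorem analysis:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy stochastic engine: per-cycle input heat and output work fluctuate.
n = 100_000
q_in = rng.normal(10.0, 2.0, n)                 # fluctuating input heat
w_out = 0.3 * q_in + rng.normal(0.0, 1.0, n)    # correlated output work

# Macroscopic efficiency: ratio of the MEANS.
eta_macro = w_out.mean() / q_in.mean()

# Fluctuating efficiency: the per-cycle ratio has a broad distribution
# centered near the macroscopic value.
eta_cycle = w_out / q_in

print(eta_macro, np.median(eta_cycle))
```

The per-cycle distribution is wide and heavy-tailed (cycles with small input heat produce extreme ratios), which is why small-scale engines are better characterized by the full efficiency distribution than by a single number.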
Halkos, George; Tzeremes, Nickolaos
2005-01-01
In this paper we use the Data Envelopment Analysis (DEA) window method to compare trade efficiency for 16 OECD countries and for the time period 1996–2000. From the analysis we obtained the efficiency scores and the optimal output levels for inefficient countries for all years under consideration. Results drawn from the broadly used ratio analysis were also compared to those derived from the DEA model. It seems that trade efficient countries have clear characteristics. These are the low exch...
The "Speculative Efficiency" Hypothesis
John F. O. Bilson
1980-01-01
The hypothesis that forward prices are the best unbiased forecast of future spot prices is often presented in the economic and financial analysis of futures markets. This paper considers the hypothesis independently of its implications for rational expectations or market efficiency and in order to stress this fact, the term "speculative efficiency" is used to characterize the state envisaged under the hypothesis. If a market is subject to efficient speculation, the supply of speculative funds...
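The unbiased-forecast hypothesis is usually examined with a simple regression of the realized spot price on the prior forward price, testing intercept 0 and slope 1. A sketch on simulated data that satisfies the hypothesis by construction (all series are invented):

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a market satisfying speculative efficiency: the future spot
# equals the forward price plus a zero-mean surprise.
n = 400
forward = 1.5 + np.cumsum(rng.normal(0, 0.01, n))   # forward prices
spot_next = forward + rng.normal(0, 0.02, n)        # realized spot prices

# Unbiasedness regression: spot_next = a + b * forward + e.
# Under the hypothesis, a = 0 and b = 1.
A = np.column_stack([np.ones(n), forward])
(a, b), *_ = np.linalg.lstsq(A, spot_next, rcond=None)
print(a, b)
```

Rejecting a = 0 or b = 1 in real data is evidence against speculative efficiency, though the joint-hypothesis problem means it may instead reflect a time-varying risk premium.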
Estimation of measurement variances
International Nuclear Information System (INIS)
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented
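The separation of systematic and random components can be sketched with a one-way analysis of variance on grouped measurements, where each inspection period carries its own bias; the variance magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(10)

# Measurement model: each period k has a bias b_k (systematic error,
# varying over time); each measurement adds independent random error e.
n_periods, n_per = 50, 20
sig_b, sig_e = 0.5, 1.0
bias = rng.normal(0, sig_b, n_periods)
data = bias[:, None] + rng.normal(0, sig_e, (n_periods, n_per))

# One-way ANOVA separates the components: within-period scatter
# estimates sig_e^2; the excess scatter of the period means (beyond
# sig_e^2 / n_per) estimates the systematic variance sig_b^2.
within_var = data.var(axis=1, ddof=1).mean()        # ~ sig_e^2
between_var = data.mean(axis=1).var(ddof=1)         # ~ sig_b^2 + sig_e^2/n_per
sys_var = between_var - within_var / n_per          # ~ sig_b^2

print(within_var, sys_var)
```

This is the basic structure behind characterizing a safeguards measurement system: the random component averages down with repeated measurements, while the systematic (bias) component does not.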