Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, summarizes error estimates from multiple habitat locations, which makes it difficult to identify clusters of An. arabiensis aquatic habitats with acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A diagnostic check of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
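For context, the global Moran's I statistic mentioned in this abstract has a simple closed form. The following is a minimal sketch with invented habitat values and an assumed chain-adjacency weight matrix, not the study's SAS/GIS data:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: spatial autocorrelation of `values` under a
    spatial weight matrix `weights` (weights need not be row-normalized)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                              # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))          # weighted cross-products
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Six hypothetical habitat productivity values along a transect: low values
# cluster at one end, high values at the other (positive autocorrelation).
vals = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
W = np.zeros((6, 6))
for i in range(5):                                # chain adjacency
    W[i, i + 1] = W[i + 1, i] = 1.0
print(round(morans_i(vals, W), 3))                # prints 0.6
```

Values near +1 indicate clustering of similar habitats; values near 0 indicate spatial randomness.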
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg-limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field-programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
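The core idea of rejection filtering can be sketched as follows: carry the posterior over the unknown phase as a Gaussian, and at each measurement resample the prior, keeping draws with probability equal to the likelihood. This is a rough illustration only; the likelihood form, experiment design, and all parameters are assumptions for the example, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 1.234                        # phase to be learned (illustrative)

def p_zero(phi, theta):
    # Likelihood of outcome 0 for a phase experiment with rotation theta
    # (standard iterative-phase-estimation form, assumed here).
    return (1.0 + np.cos(phi + theta)) / 2.0

# Rejection filtering: posterior summarized as a Gaussian (mu, sigma).
mu, sigma = 0.0, np.pi / np.sqrt(3)       # broad prior
for _ in range(300):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    outcome = 0 if rng.random() < p_zero(true_phase, theta) else 1
    draws = rng.normal(mu, sigma, 4000)   # sample the current prior
    like = p_zero(draws, theta) if outcome == 0 else 1.0 - p_zero(draws, theta)
    kept = draws[rng.random(4000) < like]  # accept w.p. = likelihood
    if kept.size > 50:                     # refit the Gaussian summary
        mu, sigma = kept.mean(), max(kept.std(), 1e-4)

print(f"estimate {mu:.3f} vs true {true_phase}")
```

The Gaussian refit is what keeps the memory footprint constant, which is why the method suits an FPGA.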
On Asymptotically Efficient Estimation in Semiparametric Models
Schick, Anton
1986-01-01
A general method for the construction of asymptotically efficient estimates in semiparametric models is presented. It improves and modifies Bickel's (1982) construction of adaptive estimates and obtains asymptotically efficient estimates under conditions weaker than those in Bickel.
Estimating the NIH efficient frontier.
Directory of Open Access Journals (Sweden)
Dimitrios Bisias
Full Text Available BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science-related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that a 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
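As background, the mean-variance efficient frontier the abstract refers to has a closed form. A minimal sketch with invented numbers (three hypothetical funding groups, not NIH data):

```python
import numpy as np

# Invented expected "returns" (e.g. YLL reduction per dollar) and their
# covariance for three funding groups -- purely illustrative.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

inv = np.linalg.inv(Sigma)
ones = np.ones_like(mu)
A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu

def frontier_variance(r):
    """Minimum variance of a fully invested portfolio with expected return r
    (short positions allowed): (A r^2 - 2 B r + C) / (A C - B^2)."""
    return (A * r**2 - 2.0 * B * r + C) / (A * C - B**2)

# The frontier bottoms out at the global minimum-variance portfolio,
# r = B/A, where the variance equals 1/A.
r_gmv = B / A
print(round(frontier_variance(r_gmv), 6), round(1.0 / A, 6))
```

Plotting the square root of `frontier_variance(r)` against r traces the familiar bullet-shaped frontier.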
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Management systems efficiency estimation in tourism organizations
Alexandra I. Mikheyeva
2011-01-01
The article is concerned with the estimation of management system efficiency in tourism organizations; it examines the requirements for and characteristics of effective management systems in tourism organizations and takes into account the principles of management system formation.
Efficient estimation of price adjustment coefficients
Lyhagen, Johan
1999-01-01
The price adjustment coefficient model of Amihud and Mendelson (1987) is shown to be suitable for estimation by the Kalman filter, a technique that, under some commonly used conditions, is asymptotically efficient. Monte Carlo simulations show that both bias and mean squared error are much smaller than for the estimators proposed by Damodaran and Lim (1991) and Damodaran (1993). A test for the adequacy of the model is also proposed. Using data from four minor, the Nordic countries ex...
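A minimal sketch of the state-space idea: intrinsic value follows a random walk, the observed price adjusts a fraction g of the way toward it each period, and a scalar Kalman filter recovers the latent value. The dynamics and all parameter values here are invented for illustration, in the spirit of the partial-adjustment model rather than a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(1)
T, g = 2000, 0.6                       # g: price-adjustment coefficient
q, r = 0.02**2, 0.03**2                # value-innovation and noise variances

# Simulate: latent value V (random walk); price P partially adjusts to it.
V = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
P = np.zeros(T)
for t in range(1, T):
    P[t] = P[t-1] + g * (V[t] - P[t-1]) + rng.normal(0.0, np.sqrt(r))

# Scalar Kalman filter for V, using the transformed observation
# y_t = P_t - (1-g) * P_{t-1} = g * V_t + noise.
v_hat, p_var, err = 0.0, 1.0, []
for t in range(1, T):
    p_pred = p_var + q                      # predict (random-walk state)
    y = P[t] - (1.0 - g) * P[t-1]
    K = p_pred * g / (g**2 * p_pred + r)    # Kalman gain
    v_hat = v_hat + K * (y - g * v_hat)     # measurement update
    p_var = (1.0 - K * g) * p_pred
    err.append(v_hat - V[t])

print(f"filter RMSE {np.std(err):.4f}, raw-price RMSE {np.std(P - V):.4f}")
```

The filtered estimate tracks the latent value more closely than the raw price does, which is the property the likelihood-based estimation of g exploits.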
Higher Efficiency of Motion Estimation Methods
Directory of Open Access Journals (Sweden)
J. Gamec
2004-12-01
Full Text Available This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relatively low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted modifications conditionally use the AWM and the full search algorithm (FSA). Simulation results show that the proposed technique can efficiently improve the motion estimation performance.
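The weighted median at the heart of the AWM filter can be sketched in a few lines (the weighting scheme an AWM filter would actually use is adaptive; unit and centre-boosted weights below are illustrative):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the value at which the cumulative weight
    first reaches half of the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

# With unit weights this reduces to the ordinary median ...
print(weighted_median([3, 1, 4, 1, 5], [1, 1, 1, 1, 1]))   # prints 3.0
# ... while up-weighting the centre sample (as an AWM filter would for a
# trusted motion vector) pulls the output toward it.
print(weighted_median([3, 1, 4, 1, 5], [1, 1, 5, 1, 1]))   # prints 4.0
```

Applied componentwise to a window of candidate motion vectors, this rejects outlier vectors while favouring reliable ones.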
Efficient estimation of rare-event kinetics
Trendelkamp-Schroer, Benjamin
2014-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively - but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general as it does not invoke any specific dynamical model, and can provide accurate estimates of the rare event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the proces...
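The Markov-state-model machinery underlying this approach can be illustrated on a toy two-state system: rare-event timescales follow from the eigenvalues of a reversible transition matrix, and the stationary distribution is its leading left eigenvector. The transition probabilities below are invented for the example:

```python
import numpy as np

tau = 1.0                        # lag time of the model (arbitrary units)
T = np.array([[0.999, 0.001],    # A -> A, A -> B  (the rare event)
              [0.010, 0.990]])   # B -> A, B -> B

# Stationary distribution: left eigenvector of T for eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Slowest relaxation timescale from the second-largest eigenvalue:
# t_2 = -tau / ln(lambda_2).
lam2 = np.sort(np.real(evals))[-2]
t2 = -tau / np.log(lam2)
print(f"pi = {pi.round(4)}, slowest timescale = {t2:.1f}")
```

Reversibility (detailed balance, pi_i T_ij = pi_j T_ji) is the constraint that lets equilibrium information from enhanced sampling be combined with kinetics from short "downhill" trajectories.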
Microbiological estimate of parodontitis laser therapy efficiency
Mamedova, F. M.; Akbarova, Ju. A.; Bajenov, L. G.; Arslanbekov, T. U.
1995-04-01
In this work, a microbiological estimate of the efficiency of ultraviolet and He-Ne laser radiation in the treatment of parodontitis was carried out. Ninety persons with parodontitis of medium severity were investigated. The optimal regimes of ultraviolet radiation exposure for various micro-organisms isolated from the pathologic tooth pocket (PTP) were determined. On the basis of a study of the species composition of the microflora and data on microbial dissemination in the PTP, we may conclude that combined He-Ne and ultraviolet laser radiation shows the most pronounced antimicrobial effect.
Holzegel, Gustav
2016-01-01
We generalize our unique continuation results recently established for a class of linear and nonlinear wave equations $\\Box_g \\phi + \\sigma \\phi = \\mathcal{G} ( \\phi, \\partial \\phi )$ on asymptotically anti-de Sitter (aAdS) spacetimes to aAdS spacetimes admitting non-static boundary metrics. The new Carleman estimates established in this setting constitute an essential ingredient in proving unique continuation results for the full nonlinear Einstein equations, which will be addressed in forthcoming papers. Key to the proof is a new geometrically adapted construction of foliations of pseudoconvex hypersurfaces near the conformal boundary.
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite the fact that parametric estimation methods have superior estimation accuracy. However, these parametric methods are much more costly to run. ... Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available online.
ON THE UNBIASED ESTIMATOR OF THE EFFICIENT FRONTIER
OLHA BODNAR; TARAS BODNAR
2010-01-01
In the paper, we derive an unbiased estimator of the efficient frontier. It is shown that the suggested estimator corrects the overoptimism of the sample efficient frontier documented in Siegel and Woodgate (2007). Moreover, an exact F-test on the efficient frontier is presented.
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
A new relative efficiency in parameter estimation for linear model
Institute of Scientific and Technical Information of China (English)
YANG Hu; CHEN Zhu-liang
2007-01-01
A new relative efficiency of parameter estimation for the generalized Gauss-Markov linear model is proposed, and its lower bound is derived. Its properties are explored in comparison with three currently popular relative efficiencies. The new relative efficiency not only sensitively reflects the error and loss caused by substituting the least squares estimator for the best linear unbiased estimator, but also overcomes the disadvantage of weak dependence on the design matrix.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
a fixed time interval. Rate-optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions...
Higher Efficiency of Motion Estimation Methods
J. Gamec; Marchevsky, S.; Gamcova, M.
2004-01-01
This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relatively low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted mod...
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Bing Liu; Zhen Chen; Xiangdong Liu; Fan Yang
2014-01-01
Increasing the computational efficiency of attitude estimation is a critical problem related to modern spacecraft, especially for those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on the vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint for the quaternion in the global estimator. The proposed attitude estimator is perfor...
Quantum enhanced estimation of optical detector efficiencies
Directory of Open Access Journals (Sweden)
Barbieri Marco
2016-01-01
Full Text Available Quantum mechanics establishes the ultimate limit to the scaling of the precision on any parameter, by identifying optimal probe states and measurements. While this paradigm is, at least in principle, adequate for the metrology of quantum channels involving the estimation of phase and loss parameters, we show that estimating the loss parameters associated with a quantum channel and with a realistic quantum detector are fundamentally different problems. While Fock states are provably optimal for the former, we identify a crossover in the nature of the optimal probe state for estimating detector imperfections as a function of the loss parameter, using Fisher information as a benchmark. We provide theoretical results for on-off and homodyne detectors, the most widely used detectors in quantum photonics technologies, when using Fock states and coherent states as probes.
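To make the Fisher-information benchmark concrete, the textbook expressions for loss (transmittance) estimation can be compared directly: an n-photon Fock probe yields binomially distributed counts, a coherent probe yields Poissonian counts. These standard formulas illustrate why Fock states are optimal for channel loss; the detector-estimation crossover discussed in the abstract is beyond this sketch:

```python
def fisher_fock(eta, n):
    """Fisher information about transmittance eta from an n-photon Fock
    probe: surviving photons are Binomial(n, eta), so I = n / (eta(1-eta))."""
    return n / (eta * (1.0 - eta))

def fisher_coherent(eta, mean_photons):
    """Fisher information from a coherent probe: detected counts are
    Poisson with mean eta * |alpha|^2, so I = |alpha|^2 / eta."""
    return mean_photons / eta

# Per input photon, the Fock probe always carries more information:
eta = 0.7
print(fisher_fock(eta, 1), fisher_coherent(eta, 1))
```

The advantage factor is 1/(1-eta), which grows as the channel becomes more transmissive.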
How efficient is estimation with missing data?
DEFF Research Database (Denmark)
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai;
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs), in which their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data... train a Gaussian mixture model (GMM). We test the trained GMM for two cases, in which the test dataset is missing or complete. The results show that CEM is the most efficient method in both cases, while MI is the worst performer of the three. PW and CEM prove to be more stable, in particular for higher MDP...
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2016-01-01
These parametric methods are much more costly to run, however. In this paper, we propose an algorithm which significantly reduces the cost of an accurate maximum likelihood-based estimator for real-valued data. The speed-up is obtained by exploiting the matrix structure of the problem and by using a recursive solver. Via...
Efficient estimation for high similarities using odd sketches
DEFF Research Database (Denmark)
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
comparison. We present a theoretical analysis of the quality of estimation to guarantee the reliability of Odd Sketch-based estimators. Our experiments confirm this efficiency, and demonstrate the efficiency of Odd Sketches in comparison with $b$-bit minwise hashing schemes on association rule learning...
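For context, the plain minwise-hashing baseline that Odd Sketches are compared against estimates Jaccard similarity as the fraction of agreeing signature slots. The hash family and sizes below are illustrative, and this is the baseline scheme, not the Odd Sketch itself:

```python
import numpy as np

def minhash_signature(items, n_hashes=512, seed=0):
    """Min-wise hashing signature: for each of n_hashes random linear hash
    functions (a*x + b mod p), keep the minimum hash over the set's items."""
    rng = np.random.default_rng(seed)
    p = 2**31 - 1                                   # Mersenne prime modulus
    a = rng.integers(1, p, size=n_hashes)
    b = rng.integers(0, p, size=n_hashes)
    sig = np.full(n_hashes, p, dtype=np.int64)
    for x in items:
        sig = np.minimum(sig, (a * x + b) % p)
    return sig

# Two overlapping sets with true Jaccard similarity 900/1100 ~ 0.818.
A, B = set(range(0, 1000)), set(range(100, 1100))
sa, sb = minhash_signature(A), minhash_signature(B)
est = float(np.mean(sa == sb))      # P(slot agreement) = Jaccard similarity
print(round(est, 3))
```

For highly similar sets most slots agree, which is the redundancy Odd Sketches compress away.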
Efficient estimation for ergodic diffusions sampled at high frequency
DEFF Research Database (Denmark)
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class of estimators including most of the previously proposed estimators for diffusion processes, for instance GMM-estimators and the maximum likelihood estimator. Simple conditions are given that ensure rate optimality, where estimators of parameters in the diffusion coefficient converge faster than estimators of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in the sense...
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Directory of Open Access Journals (Sweden)
Bing Liu
2014-01-01
Full Text Available Increasing the computational efficiency of attitude estimation is a critical problem for modern spacecraft, especially those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint on the quaternion in the global estimator. The proposed attitude estimator operates in four stages. First, the local attitude estimation error system is described by a polytopic linear model. Then the local error attitude estimator is designed with constant coefficients based on the robust H2 filtering algorithm. Subsequently, the attitude predictions and the local error attitude estimations are calculated by a gyro-based model and the local error attitude estimator. Finally, the attitude estimations are updated by combining the predicted attitude with the local error attitude estimations. Since the local error attitude estimator has constant coefficients, it does not need to calculate the matrix inverse for the filter gain or update the Jacobian matrices online to obtain the local error attitude estimations. As a result, the computational complexity of the proposed attitude estimator is reduced significantly. Simulation results demonstrate the efficiency of the proposed attitude estimation strategy.
Efficiency of U.S. Tissue Perfusion Estimators.
Kim, MinWoo; Abbey, Craig K; Insana, Michael F
2016-08-01
We measure the detection and discrimination efficiencies of conventional power-Doppler estimation of perfusion without contrast enhancement. The measurements are made in a phantom with known blood-mimicking fluid flow rates in the presence of clutter and noise. Efficiency is measured by comparing functions of the areas under the receiver operating characteristic curve for Doppler estimators with those of the ideal discriminator, for which we estimate the temporal covariance matrix from echo data. Principal-component analysis is examined as a technique for increasing the accuracy of covariance matrices estimated from echo data. We find that Doppler estimators can discriminate between two perfusion rates in the same range. We conclude that there is reason to search for more efficient perfusion estimators, ones that incorporate covariance matrix information, which could significantly enhance the utility of Doppler ultrasound without contrast enhancement. PMID:27244733
Robust and efficient estimation with weighted composite quantile regression
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
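The robustness that quantile-based estimation contributes can be seen in a toy location problem: averaging several sample quantiles remains stable under Cauchy errors, where the sample mean breaks down. This is a simplified stand-in for full weighted-CQR fitting (which estimates regression coefficients jointly, with one intercept per quantile and data-driven weights), not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(7)

def composite_quantile_location(y, taus=None):
    """Location estimate averaging the sample quantiles at levels `taus`
    (equal weights here; weighted CQR would learn the weights)."""
    taus = np.arange(1, 10) / 10.0 if taus is None else taus
    return float(np.mean(np.quantile(y, taus)))

# Cauchy errors around a true location of 2.0: the mean is useless
# (infinite variance), but the composite-quantile estimate is stable.
y = 2.0 + rng.standard_cauchy(20000)
est = composite_quantile_location(y)
print(round(est, 3))
```

Because symmetric quantile levels are used, the estimator is consistent for the centre of any symmetric error distribution, which is the intuition behind CQR's near-oracle efficiency across normal, mixed-normal, Student's t and Cauchy errors.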
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.
Technical Efficiency Estimation of Rice Production in South Korea
Mohammed, Rezgar; Saghaian, Sayed
2014-01-01
This paper uses a stochastic frontier production function to estimate the technical efficiency of rice production in South Korea. Data from eight provinces were taken between 1993 and 2012. The purpose of this study is to determine whether the agricultural policy made by the Korean government achieved high technical efficiency in rice production, and also to identify the variables that could decrease technical inefficiency in rice production. The study showed there is a possibility to ...
Estimation of technical efficiency in production technologies of Czech sawmills
Directory of Open Access Journals (Sweden)
Sedivka Premysl
2009-12-01
Full Text Available The main aim of this paper is to determine the influence of the type of adopted production technology on the technical efficiency of Czech sawmills, using one year of sawmill data and applying a stochastic frontier production function model. Individual technical efficiencies have been obtained for small, medium and large sawmills, and their determinants have been estimated using a procedure proposed by Battese and Coelli (1995). The results support the hypothesis that sawmills in the sample failed to achieve full technical efficiency.
Efficient Bayesian Estimation and Combination of GARCH-Type Models
D. David (David); L.F. Hoogerheide (Lennart)
2010-01-01
This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation
Improving Woody Biomass Estimation Efficiency Using Double Sampling
Directory of Open Access Journals (Sweden)
B. Scott Shouse
2012-05-01
Full Text Available Although double sampling has been shown to be an effective method to estimate timber volume in forest inventories, only a limited body of research has tested the effectiveness of double sampling on forest biomass estimation. From forest biomass inventories collected over 9,683 ha using systematic point sampling, we examined how a double sampling scheme would have affected precision and efficiency in these biomass inventories. Our results indicated that double sample methods would have yielded biomass estimations with similar precision as systematic point sampling when the small sample was ≥ 20% of the large sample. When the small-to-large sample time ratio was 3:1, relative efficiency (a combined measure of time and precision) was highest when the small sample was a 30% subsample of the large sample. At a 30% double sample intensity, there was a < 3% deviation from the original percent margin of error, in almost half the required time. Results suggest that double sampling can be an efficient tool for natural resource managers to estimate forest biomass.
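The double-sampling (two-phase) ratio estimator the study relies on can be sketched on synthetic data: a cheap auxiliary variable is measured on a large first-phase sample, the expensive variable (biomass) only on a subsample. The stand parameters below are invented, not the paper's inventory data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Phase 1 (cheap): auxiliary x, e.g. a quick basal-area reading at n1 points.
# Phase 2 (expensive): measured biomass y on a 30% subsample.
n1 = 500
x = rng.gamma(shape=4.0, scale=5.0, size=n1)       # auxiliary variable
y = 2.5 * x + rng.normal(0.0, 4.0, size=n1)        # "true" biomass relation
sub = rng.choice(n1, size=int(0.3 * n1), replace=False)

# Double-sample ratio estimator of mean biomass:
#   ybar_ds = xbar_large * (ybar_small / xbar_small)
ratio = y[sub].mean() / x[sub].mean()
ybar_ds = x.mean() * ratio
print(round(ybar_ds, 2), round(y.mean(), 2))   # close, at ~30% of the cost
```

The estimator borrows the precision of the large auxiliary sample while paying the measurement cost only on the subsample, which is exactly the time-precision trade-off the relative-efficiency results quantify.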
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous-time regression model with dependent disturbances given by a general square-integrable semimartingale with unknown distribution. An example of such a noise is the non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes-type markets with jumps. An adaptive model selection procedure, based on weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived, and the robust efficiency of the model selection procedure is shown.
Recent estimates of energy efficiency potential in the USA
Energy Technology Data Exchange (ETDEWEB)
Sreedharan, P. [Energy and Environmental Economics E3, 101 Montgomery Street, 16th Floor, San Francisco, CA 94104 (United States)
2013-08-15
Understanding the potential for reducing energy demand through increased end-use energy efficiency can inform energy and climate policy decisions. However, if potential estimates are vastly different, they engender controversial debates, clouding the usefulness of energy efficiency in shaping a clean energy future. A substantive question thus arises: is there a general consensus on the potential estimates? To answer this question, this paper reviews recent studies of US national and regional energy efficiency potential in buildings and industry. Although these studies are based on differing assumptions, methods, and data, they suggest technically possible reductions of circa 25-40% in electricity demand and circa 30% in natural gas demand in 2020, and economic reductions of circa 10-25% in electricity demand and circa 20% in natural gas demand in 2020. These estimates imply impacts ranging from turning US electricity demand growth from 2009 to 2020 negative, to reducing it to a growth rate of circa 0.3%/year (compared to circa 1% baseline growth).
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood-based methods are often difficult to use with large, irregularly spaced spatial datasets, because exact calculation of the likelihood for n observations requires O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
Kernel density estimation of a multidimensional efficiency profile
Poluektov, Anton
2014-01-01
Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\Lambda_b^0\to D^0p\pi$ decay.
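The boundary problem, and one flavour of correction via an approximate density, can be shown in one dimension: a plain Gaussian KDE is biased low at a hard edge, but dividing out the kernel-smeared version of an approximate (here: uniform) density restores the level. This is a simplified illustration of the multiplicative-correction idea, not the paper's multidimensional procedure:

```python
import numpy as np
from math import erf, sqrt

def kde(x, data, h):
    """Plain Gaussian KDE -- biased low near hard boundaries."""
    z = (np.asarray(x)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

def smoothed_uniform(x, h):
    """The approximate density (uniform on [0,1]) convolved with the same
    kernel: what the KDE would return if the data followed it exactly."""
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return np.array([Phi((1.0 - xi) / h) - Phi(-xi / h) for xi in np.asarray(x)])

rng = np.random.default_rng(5)
data = rng.uniform(0.0, 1.0, 50000)        # true density is 1 on [0, 1]
h = 0.05
x = np.array([0.0, 0.5])                   # boundary point and interior point

plain = kde(x, data, h)                    # ~0.5 at the boundary (biased)
corrected = 1.0 * plain / smoothed_uniform(x, h)   # g(x) * kde / smeared_g
```

At the edge the plain estimate loses half the kernel mass, while the corrected one recovers the true density.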
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy's Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim's calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory's website (see www.nrel.gov/fastsim).
Efficient Timing and Frequency Offset Estimation Scheme for OFDM Systems
Institute of Scientific and Technical Information of China (English)
GUO Yi; GE Jianhua; LIU Gang; ZHANG Wujun
2009-01-01
A new training symbol weighted by a pseudo-noise (PN) sequence is designed, and an efficient timing and frequency offset estimation scheme for orthogonal frequency division multiplexing (OFDM) systems is proposed. Timing synchronization is accomplished by using the piecewise symmetric conjugate of the primitive training symbol and the good autocorrelation of the PN weighting factor. Frequency synchronization is performed by utilizing the training symbol whose PN weighting factor is removed after timing synchronization. Compared with conventional schemes, the proposed scheme achieves a smaller mean square error and provides a wider frequency acquisition range.
Concurrent estimation of efficiency, effectiveness and returns to scale
Khodakarami, Mohsen; Shabani, Amir; Farzipoor Saen, Reza
2016-04-01
In recent years, data envelopment analysis (DEA) has been widely used to assess both efficiency and effectiveness, and accurate measurement of overall performance requires considering these measures concurrently. A couple of well-known methods assess both efficiency and effectiveness, but several issues can be found in them: a non-linearity problem, paradoxical improvement solutions, evaluation of efficiency and effectiveness in two independent environments (i.e., dividing an operating unit into two autonomous departments for performance evaluation), and problems associated with determining economies of scale. To overcome these issues, this paper develops a series of linear DEA methods to estimate the efficiency, effectiveness, and returns to scale of decision-making units (DMUs) simultaneously. This paper treats the departments of a DMU as a united entity in order to recommend consistent improvements. We first present a model under the constant returns to scale (CRS) assumption and examine its relationship with an existing network DEA model. We then extend the model to the variable returns to scale (VRS) condition, and again discuss its relationship with an existing network DEA model. Next, we introduce a new integrated two-stage additive model. Finally, an in-depth analysis of returns to scale is provided. A case study demonstrates the applicability of the proposed models.
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and at the individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first-cost and operating-cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the LCC model implicitly plays a key role in estimating the economic impact of potential standard levels. This report provides a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. More research effort is therefore needed in the design and implementation of novel, robust synchronization algorithms for PLC that enable real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation on an FPGA. The obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation on a Xilinx Kintex FPGA.
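The cross-correlation approach can be sketched in a few lines; the sequence root, length, channel delay, and noise level below are illustrative assumptions, not the paper's test conditions:

```python
import numpy as np

def zadoff_chu(root, length):
    # Odd-length Zadoff-Chu sequence: constant amplitude and ideal periodic
    # autocorrelation, which yields very sharp correlation peaks.
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc = zadoff_chu(root=7, length=63)

# Hypothetical channel: a 25-sample delay plus mild additive noise.
rng = np.random.default_rng(1)
rx = np.concatenate([np.zeros(25), zc, np.zeros(25)])
rx = rx + 0.1 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

# Symbol timing estimate: index of the cross-correlation peak.
corr = np.abs(np.correlate(rx, zc, mode="valid"))
print(int(np.argmax(corr)))  # 25
```

In hardware, the same correlation is typically realized as a tapped delay line, which is what makes an FPGA implementation attractive.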
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Directory of Open Access Journals (Sweden)
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. More research effort is therefore needed in the design and implementation of novel, robust synchronization algorithms for PLC that enable real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation on an FPGA. The obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation on a Xilinx Kintex FPGA.
An efficient algebraic approach to observability analysis in state estimation
Energy Technology Data Exchange (ETDEWEB)
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements directly yields the observability of any subset of that set. Several examples illustrate the capabilities of the proposed methodology, and results from a large case study demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
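The basic observability check underlying such methods can be illustrated with a toy Jacobian; the 3-bus network and measurement values below are assumptions for the sketch, not the paper's row/column-transfer algebra:

```python
import numpy as np

# Hypothetical DC state-estimation Jacobian for a 3-bus network
# (states: voltage angles at buses 2 and 3; bus 1 is the reference).
# Rows are measurements: flow 1-2, flow 2-3, and an injection at bus 3.
H = np.array([
    [-1.0,  0.0],   # P12 flow
    [ 1.0, -1.0],   # P23 flow
    [-1.0,  2.0],   # P3 injection (illustrative values)
])

# The network is observable iff the Jacobian has full column rank.
observable = np.linalg.matrix_rank(H) == H.shape[1]
print(observable)  # True

# Losing measurements can leave unobservable states (observable islands);
# the rank deficiency flags exactly this situation.
H_reduced = H[[0], :]  # keep only the P12 flow measurement
print(np.linalg.matrix_rank(H_reduced) == H_reduced.shape[1])  # False
```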
Efficient Spectral Power Estimation on an Arbitrary Frequency Scale
Directory of Open Access Journals (Sweden)
F. Zaplata
2015-04-01
Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not suit every system. For example, audio and speech analysis needs a logarithmic frequency scale to match the characteristics of human hearing. Fast Fourier Transform algorithms cannot efficiently deliver the desired results in this case, and modified techniques have to be used. In the following text, a simple technique based on the Goertzel algorithm that allows the evaluation of power spectra on an arbitrary frequency scale is introduced. Owing to its simplicity, the algorithm suffers from imperfections, which are discussed and partially resolved in this paper. The implementation in real systems and the impact of quantization errors proved to be critical and must be handled in special cases; a simple method for dealing with the quantization error is also introduced. Finally, the proposed method is compared to other methods in terms of its computational demands and potential speed.
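A minimal sketch of the idea (the sample rate and log-spaced analysis frequencies are assumed for illustration): the Goertzel recursion evaluates the spectral power at any single frequency, so the analysis bins can be placed on a logarithmic scale:

```python
import math

def goertzel_power(samples, freq, fs):
    # Goertzel recursion: evaluates the DFT at one arbitrary frequency,
    # so bins can be placed on any (e.g. logarithmic) frequency scale.
    w = 2.0 * math.pi * freq / fs
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the evaluated bin.
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

fs = 8000.0
n = 1024
tone = [math.sin(2 * math.pi * 440.0 * i / fs) for i in range(n)]

# Power evaluated on a log-spaced (octave) set of analysis frequencies.
freqs = [110.0, 220.0, 440.0, 880.0, 1760.0]
powers = [goertzel_power(tone, f, fs) for f in freqs]
print(freqs[powers.index(max(powers))])  # 440.0
```

Each Goertzel evaluation costs O(n) per frequency, which beats the FFT only when the number of arbitrary-scale bins is small, matching the trade-off the abstract discusses.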
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
Mossel, Elchanan
2010-01-01
We propose a simple and efficient Bayesian model of iterative learning on social networks. This model is efficient in two senses: the process both results in an optimal belief, and can be carried out with modest computational resources for large networks. This result extends Condorcet's Jury Theorem to general social networks, while preserving rationality and computational feasibility. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement of S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard dev...
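A minimal numerical sketch of the setting (the graph and the naive averaging update below are illustrative assumptions, not the paper's optimal Bayesian update):

```python
import random

random.seed(3)

# Agents on a connected graph each hold a noisy Gaussian measurement of a
# hidden state S and repeatedly replace their estimate with the average of
# their own and their neighbors' current estimates.
S = 10.0
edges = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
beliefs = {v: S + random.gauss(0.0, 1.0) for v in edges}

for _ in range(50):
    beliefs = {
        v: (beliefs[v] + sum(beliefs[u] for u in edges[v])) / (1 + len(edges[v]))
        for v in edges
    }

# Repeated averaging drives the network to consensus; the common value
# pools the agents' independent measurements of S.
spread = max(beliefs.values()) - min(beliefs.values())
print(spread < 1e-6)  # True
```

The paper's Gaussian machinery goes further: it yields the *optimal* posterior, not just consensus, while keeping the per-round computation tractable.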
ESTIMATION OF EFFICIENCY PARTNERSHIP LARGE AND SMALL BUSINESS
Directory of Open Access Journals (Sweden)
Олег Васильевич Чабанюк
2014-05-01
Full Text Available In this article, based on the definition of key factors and their components, an algorithm is developed for the sequential, logically connected stages of the transition from a traditional enterprise to an innovation-type enterprise through intrapreneurship. The analysis of the economic efficiency of an innovative business idea is based on expert determination of the importance of the model parameters that ensure the effectiveness of intrapreneurship; using qualimetric modeling of the expert estimates, an "intrapreneurship efficiency" score is calculated. According to the author, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically achievable only in the second or third year of the intrapreneurial structure's existence. The proposed method was tested in practice and can be used to establish intrapreneurship in large and medium-sized enterprises as one of the ways of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
The estimation of energy efficiency for hybrid refrigeration system
International Nuclear Information System (INIS)
Highlights: ► We present the experimental setup and the model of the hybrid cooling system. ► We examine impact of the operating parameters of the hybrid cooling system on the energy efficiency indicators. ► A comparison of the final and the primary energy use for a combination of the cooling systems is carried out. ► We explain the relationship between the COP and PER values for the analysed cooling systems. -- Abstract: The concept of the air blast-cryogenic freezing method (ABCF) is based on an innovative hybrid refrigeration system with one common cooling space. The hybrid cooling system consists of a vapor compression refrigeration system and a cryogenic refrigeration system. The prototype experimental setup for this method on the laboratory scale is discussed. The application of the results of experimental investigations and the theoretical–empirical model makes it possible to calculate the cooling capacity as well as the final and primary energy use in the hybrid system. The energetic analysis has been carried out for the operating modes of the refrigerating systems for the required temperatures inside the cooling chamber of −5 °C, −10 °C and −15 °C. For the estimation of the energy efficiency the coefficient of performance COP and the primary energy ratio PER for the hybrid refrigeration system are proposed. A comparison of these coefficients for the vapor compression refrigeration and the cryogenic refrigeration system has also been presented.
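The relationship between COP and PER used for such comparisons can be shown with assumed numbers (the capacities and the primary-to-electric conversion efficiency below are illustrative, not measurements from the setup):

```python
# COP relates cooling capacity to final (electric) energy use at the machine,
# while PER relates it to primary energy via the generation efficiency.
cooling_capacity_kw = 12.0    # hypothetical load of the cooling chamber
electric_input_kw = 4.0       # hypothetical compressor drive power
generation_efficiency = 0.35  # assumed primary-to-electric conversion factor

cop = cooling_capacity_kw / electric_input_kw
per = cop * generation_efficiency  # primary energy ratio

print(cop)             # 3.0
print(round(per, 3))   # 1.05
```

This is why a vapor compression system with a high COP can still compare differently against a cryogenic system once primary energy, rather than final energy, is the yardstick.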
Efficient mental workload estimation using task-independent EEG features
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the temporal stability of these markers and of the performance of the associated mental workload estimators. This study compares two processing chains, one based on the power in five frequency bands and one based on ERPs, both including a spatial filtering step (CSP and CCA, respectively), an FLDA classification, and 10-fold cross-validation. Approach. To get closer to a real-life implementation, the spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity, and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man's vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared for testing performed at the beginning and at the end of the session. Main results. Both chains performed significantly better than chance; however, the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that efficient and temporally stable workload estimation can be achieved using task-independent, spatially filtered ERPs elicited in a minimally intrusive manner.
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). It puts forward the hypothesis that including multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based on a synthesis of approaches to the evaluation of projects implemented in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effects arising during project implementation. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed, based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and traces the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to the present. It is shown that SAMs are widely used around the world for a wide range of applications, primarily to assess the impact of various exogenous factors on regional economies. In order to refine the estimates of the multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to take into account the particular characteristics of the industry of the investment project being estimated. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore......, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state
International Nuclear Information System (INIS)
Energy efficiency upgrades have been gaining widespread attention across global channels as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
Efficient estimation of analytic density under random censorship
Belitser, E.
2001-01-01
The nonparametric minimax estimation of an analytic density at a given point, under random censorship, is considered. Although the problem of estimating density is known to be irregular in a certain sense, we make some connections relating this problem to the problem of estimating smooth functionals
Estimation of the Asian telecommunication technical efficiencies with panel data
Institute of Scientific and Technical Information of China (English)
YANG Yu-yong; JIA Huai-jing
2007-01-01
This article used panel data and the stochastic frontier analysis (SFA) model to analyze and compare the technical efficiencies of the telecommunication industry in 28 Asian countries from 1994 to 2003. The technical efficiencies of the Asian countries were found to have increased steadily over the past decade. The high-income countries have the highest technical efficiency; however, income is not the only factor that affects technical efficiency.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Institute of Scientific and Technical Information of China (English)
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions of functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.
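The local linear building block of the first step can be sketched as follows (the data-generating model and bandwidth are assumptions for the illustration, and the one-step back-fitting refinement is omitted):

```python
import numpy as np

def local_linear(x0, x, y, h):
    # Local linear smoother: weighted least squares of y on (1, x - x0)
    # with Gaussian kernel weights; the intercept estimates m(x0).
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    Xw = X * w[:, None]                       # kernel-weighted design matrix
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta[0]

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 400))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(400)

# The intercept of the local fit recovers m(0.25) = sin(pi/2) = 1.
est = local_linear(0.25, x, y, h=0.05)
print(abs(est - 1.0) < 0.15)  # True
```

In the paper's setting this smoother is applied per coefficient function, each with its own smoothing variable, before the back-fitting step sharpens the estimates.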
Control grid motion estimation for efficient application of optical flow
Zwart, Christine M
2012-01-01
Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable compu
Efficient Estimation of Mutual Information for Strongly Dependent Variables
Gao, Shuyang; Galstyan, Aram
2014-01-01
We demonstrate that a popular class of nonparametric mutual information (MI) estimators based on k-nearest-neighbor graphs requires a number of samples that scales exponentially with the true MI. Consequently, accurate estimation of MI between two strongly dependent variables is possible only for prohibitively large sample sizes. This important yet overlooked shortcoming of the existing estimators is due to their implicit reliance on the local uniformity of the underlying joint distribution. We introduce a new estimator that is robust to local non-uniformity, works well with limited data, and is able to capture relationship strengths over many orders of magnitude. We demonstrate the superior performance of the proposed estimator on both synthetic and real-world data.
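For context, a compact version of the kNN-based (Kraskov-type) estimator this paper critiques can be sketched as follows; the sample size and dependence strength are illustrative assumptions:

```python
import math
import random

def psi(n):
    # Digamma function at positive integers: psi(n) = -gamma + sum_{k<n} 1/k.
    return -0.5772156649015329 + sum(1.0 / k for k in range(1, n))

def ksg_mi(xs, ys, k=3):
    # Kraskov-type estimator: for each point, take the max-norm distance to
    # its k-th neighbour in (x, y), then count marginal neighbours strictly
    # inside that distance. O(n^2); fine for a small demo.
    n = len(xs)
    total = 0.0
    for i in range(n):
        d = sorted(max(abs(xs[i] - xs[j]), abs(ys[i] - ys[j]))
                   for j in range(n) if j != i)[k - 1]
        nx = sum(1 for j in range(n) if j != i and abs(xs[i] - xs[j]) < d)
        ny = sum(1 for j in range(n) if j != i and abs(ys[i] - ys[j]) < d)
        total += psi(k) + psi(n) - psi(nx + 1) - psi(ny + 1)
    return total / n

random.seed(4)
x = [random.gauss(0, 1) for _ in range(300)]
y_dep = [xi + 0.3 * random.gauss(0, 1) for xi in x]  # strongly dependent
y_ind = [random.gauss(0, 1) for _ in range(300)]     # independent of x

print(ksg_mi(x, y_dep) > ksg_mi(x, y_ind))  # True
```

The paper's point is that as the dependence grows even stronger, this estimator's sample requirement explodes, which motivates their non-uniformity-robust alternative.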
Efficient and Accurate Robustness Estimation for Large Complex Networks
Wandelt, Sebastian
2016-01-01
Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
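The metric-induced robustness idea can be illustrated on a toy graph (the star graph, attack orders, and the specific remaining-fraction metric below are assumptions for the sketch, not the paper's algorithm):

```python
def largest_component(nodes, edges):
    # Size of the largest connected component, via union-find.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    sizes = {}
    for v in nodes:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) if sizes else 0

def robustness(nodes, edges, attack_order):
    # Attack-based robustness: average fraction of nodes remaining in the
    # largest component as the attack removes nodes one by one.
    n = len(nodes)
    alive = set(nodes)
    total = 0.0
    for v in attack_order:
        alive.discard(v)
        sub = [(a, b) for a, b in edges if a in alive and b in alive]
        total += largest_component(alive, sub) / n
    return total / n

# Hypothetical star graph: removing the hub first is the most damaging
# attack, so a hub-first ordering yields a lower robustness score.
nodes = list(range(6))
edges = [(0, v) for v in range(1, 6)]
print(robustness(nodes, edges, [0, 1, 2, 3, 4, 5]) <
      robustness(nodes, edges, [5, 4, 3, 2, 1, 0]))  # True
```

The expensive part in practice is choosing a good attack order; that is where metrics like betweenness centrality, and the paper's faster surrogate, come in.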
Indexes of estimation of efficiency of the use of intellectual resources of industrial enterprises
Directory of Open Access Journals (Sweden)
Audzeichyk Olga
2015-12-01
Full Text Available The article examines the theoretical and practical aspects of the estimation of the intellectual resources of industrial enterprises and proposes a method for estimating the efficiency of the use of intellectual resources.
Efficient estimation of burst-mode LDA power spectra
DEFF Research Database (Denmark)
Velte, Clara Marika; George, William K
2010-01-01
The estimation of power spectra from LDA data provides signal processing challenges for fluid dynamicists for several reasons. Acquisition is dictated by randomly arriving particles which cause the signal to be highly intermittent. This both creates self-noise and causes the measured velocities...... requirements for good statistical convergence due to the random sampling of the data. In the present work, the theory for estimating burst-mode LDA spectra using residence time weighting is discussed and a practical estimator is derived and applied. A brief discussion on the self-noise in spectra...... and correlations is included, as well as one regarding the statistical convergence of the spectral estimator for random sampling. Further, the basic representation of the burst-mode LDA signal has been revisited due to observations in recent years of particles not following the flow (e.g., particle clustering...
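The effect of residence time weighting can be sketched with a toy simulation (the arrival model and the transit-time law below are simplifying assumptions, not the LDA physics in full):

```python
import random

random.seed(6)

# Toy burst-mode model: sample arrival rate is proportional to |u|
# (faster particles cross the measurement volume more often), and the
# residence (transit) time scales as 1/|u|.
true_mean = 5.0
samples, residence = [], []
for _ in range(20000):
    u = random.gauss(true_mean, 1.0)
    if random.random() < abs(u) / 10.0:   # velocity-biased arrivals
        samples.append(u)
        residence.append(1.0 / abs(u))    # ...but shorter transit times

# The arithmetic mean is biased toward high velocities; weighting each
# sample by its residence time cancels the arrival-rate bias.
naive = sum(samples) / len(samples)
weighted = sum(t * u for t, u in zip(residence, samples)) / sum(residence)
print(abs(weighted - true_mean) < abs(naive - true_mean))  # True
```

The same weighting carries over to correlation and spectral estimates, which is the basis of the burst-mode spectral estimator discussed in the abstract.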
System of Indicators in Social and Economic Estimation of the Regional Energy Efficiency
Directory of Open Access Journals (Sweden)
Ivan P. Danilov
2012-10-01
Full Text Available The article offers a social and economic interpretation of energy efficiency and models a system of indicators for estimating the regional social and economic efficiency of energy resource use.
Thermodynamics estimation of copper plasma efficiency from secondary raw material
Directory of Open Access Journals (Sweden)
Віктор Сергійович Козьмін
2014-09-01
Full Text Available The results of a thermodynamic evaluation of the efficiency of oxidative plasma refining of copper recycled from secondary raw material, with respect to the impurities present in the feedstock, are shown. It was established that, depending on the type of impurity, a factor that increases the efficiency of plasma refining, the change in the Gibbs potential varies from 1.4 to 4.8, and that for silver and gold there is a transition from an improbable to a genuinely positive state.
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer-hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According...... to Jepsen and Dau (2011), IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence, given estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed......
Energy-Efficient Channel Estimation in MIMO Systems
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends adds cost in terms of hardware and energy consumption. For coherent detection, as well as for optimizations such as water-filling and beamforming, it is essential that the MIMO channel be known. However, due to the presence of multiple transceivers at both the transmitter and the receiver, the channel estimation problem is more complicated and costly than for a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy, spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes the transmission of training symbols, the storage of those symbols at the receiver, and the channel estimation itself at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
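A least-squares training-based estimate, one standard baseline for this problem, can be sketched as follows (the antenna counts, training design, and noise level are assumptions for the sketch, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(5)
nt, nr = 2, 3                        # transmit and receive antennas (assumed)
H = rng.standard_normal((nr, nt))    # hypothetical flat-fading channel

# Orthogonal training sequences, one row per transmit antenna.
X = np.array([[1.0,  1.0, 1.0,  1.0, 1.0,  1.0, 1.0,  1.0],
              [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]])
Y = H @ X + 0.01 * rng.standard_normal((nr, X.shape[1]))

# Least-squares channel estimate: H_hat = Y X^T (X X^T)^-1.
# Orthogonal training makes X X^T diagonal, so the inverse is cheap --
# one of the computation/energy knobs the estimation phase can exploit.
H_hat = Y @ X.T @ np.linalg.inv(X @ X.T)
print(np.allclose(H, H_hat, atol=0.1))  # True
```

Longer training improves the estimate but costs transmit energy and receiver storage, which is exactly the trade-off the abstract's energy model captures.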
Efficient probabilistic planar robot motion estimation given pairs of images
O. Booij; B. Kröse; Z. Zivkovic
2010-01-01
Estimating the relative pose between two camera positions given image point correspondences is a vital task in most view based SLAM and robot navigation approaches. In order to improve the robustness to noise and false point correspondences it is common to incorporate the constraint that the robot m
Transverse correlation: An efficient transverse flow estimator - initial results
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Henze, Lasse; Kortbek, Jacob;
2008-01-01
Color flow mapping has become an important clinical tool, for diagnosing a wide range of vascular diseases. Only the velocity component along the ultrasonic beam is estimated, so to find the actual blood velocity, the beam to flow angle has to be known. Because of the unpredictable nature...
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases, during which a limited amount of sampling will be done at each site to inform sampling designs and guide the standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods, such as clip harvesting and direct measurements of Leaf Area Index (LAI), involve collecting and processing plant samples and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer; these LAI estimates can then be used as a proxy for biomass. The biomass estimates so calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300 m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Efficient Topology Estimation for Large Scale Optical Mapping
Elibol, Armagan; Garcia, Rafael
2013-01-01
Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
Estimation of Nitrogen Fertilizer Use Efficiency in Dryland Agroecosystem
Institute of Scientific and Technical Information of China (English)
LI Shi-qing; LI Sheng-xiu
2001-01-01
A field trial was carried out to study nitrogen fertilizer recovery by four successive crops on manurial loess soil in Yangling. The results showed that the nitrogen fertilizer not only had a significant effect on the first crop but also had longer residual effects, even on the fourth crop. The average apparent nitrogen fertilizer recovery by the first crop was 31.7%, while the cumulative recovery over the four crops was as high as 62.3%, roughly double that of the first crop alone. It is therefore clear that recovery by the first crop alone is not a reliable basis for evaluating nitrogen fertilizer use efficiency unless the residual effects of the fertilizer are included.
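The quoted percentages follow from simple bookkeeping; the application rate and uptake values below are illustrative assumptions chosen only to reproduce the figures in the abstract:

```python
# Apparent N fertilizer recovery: the difference in N uptake between
# fertilized and unfertilized plots, divided by the N rate applied.
n_applied = 120.0               # kg N/ha (assumed application rate)
uptake_fertilized_crop1 = 95.0  # kg N/ha in the fertilized plot (assumed)
uptake_control_crop1 = 57.0     # kg N/ha in the control plot (assumed)

recovery_crop1 = (uptake_fertilized_crop1 - uptake_control_crop1) / n_applied
print(round(100 * recovery_crop1, 1))  # 31.7

# Cumulative recovery over four crops adds the residual-effect uptake of
# the later crops (per-crop fertilizer-derived uptake, assumed values).
extra_uptake = [38.0, 18.0, 12.0, 6.8]
print(round(100 * sum(extra_uptake) / n_applied, 1))  # 62.3
```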
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Directory of Open Access Journals (Sweden)
Virginie F. Ruiz
2002-10-01
Full Text Available This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints, so the densities created are of exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.
Ionization efficiency estimations for the SPES surface ion source
Manzolaro, M.; Andrighetto, A.; Meneghetti, G.; Rossignoli, M.; Corradetti, S.; Biasetto, L.; Scarpa, D.; Monetti, A.; Carturan, S.; Maggioni, G.
2013-12-01
Ion sources play a crucial role in ISOL (Isotope Separation On Line) facilities determining, with the target production system, the ion beam types available for experiments. In the framework of the SPES (Selective Production of Exotic Species) INFN (Istituto Nazionale di Fisica Nucleare) project, a preliminary study of the alkali metal isotopes ionization process was performed, by means of a surface ion source prototype. In particular, taking into consideration the specific SPES in-target isotope production, Cs and Rb ion beams were produced, using a dedicated test bench at LNL (Laboratori Nazionali di Legnaro). In this work the ionization efficiency test results for the SPES Ta surface ion source prototype are presented and discussed.
Institute of Scientific and Technical Information of China (English)
SONG Bo-wei; GUAN Yun-feng; ZHANG Wen-jun
2005-01-01
This paper deals with channel estimation for orthogonal frequency-division multiplexing (OFDM) systems with transmit diversity. Space-time coded OFDM systems, which can provide transmit diversity, require accurate channel estimation to achieve good communication quality. In practical OFDM systems, training sequences are usually used for channel estimation. The authors propose a training-based channel estimation strategy suitable for space-time coded OFDM systems. This novel strategy provides enhanced performance, high spectral efficiency and relatively low computational complexity.
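In its simplest least-squares form, training-based channel estimation of the kind described is a per-subcarrier division of the received pilot by the known transmitted pilot. A minimal noise-free sketch; the pilot and channel values are illustrative assumptions:

```python
def ls_channel_estimate(tx_pilots, rx_pilots):
    """Least-squares channel estimate at each pilot subcarrier: H_k = Y_k / X_k,
    where X_k is the known transmitted pilot and Y_k the received sample."""
    return [y / x for x, y in zip(tx_pilots, rx_pilots)]

# Hypothetical 4-subcarrier example with a noise-free channel.
h_true = [1.0 + 0.5j, 0.8 - 0.2j, 1.1 + 0.0j, 0.9 + 0.3j]
x = [1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j]          # BPSK pilot symbols
y = [h * s for h, s in zip(h_true, x)]           # received pilot samples
h_est = ls_channel_estimate(x, y)
print(h_est)
```

With noise, the same division yields a noisy estimate; space-time coded systems refine this with interpolation across subcarriers and antennas, which is where the proposed strategy's gains lie.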
Efficient human pose estimation from single depth images.
Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew
2013-12-01
We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body-parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of the body joints. By using simple depth-pixel comparison features and parallelizable decision forests, both approaches can run at super-real-time rates on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models
Institute of Scientific and Technical Information of China (English)
Zhangong ZHOU; Rong JIANG; Weimin QIAN
2011-01-01
Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, obtained by combining the nonparametric regression model and the functional-coefficient regression (FCR) model. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These resulting estimators are asymptotically normal, but each has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normality is derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.
Highly Efficient Monte Carlo for Estimating the Unavailability of Markov Dynamic Systems
Institute of Scientific and Technical Information of China (English)
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to be solved is large. However, evaluating the probability of occurrence of very rare events by simulation means playing a very large number of histories of the system, which leads to unacceptable computing time; highly efficient Monte Carlo schemes are therefore needed. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using the free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased sampling probability space, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used to calculate the unavailability of a repairable Con/3/30:F system, and their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.
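The crude (analog) simulation used as the baseline above can be sketched for a single repairable Markov component: play exponential up/down dwell times and average the fraction of time spent down. The rates, horizon and seed below are illustrative assumptions:

```python
import random

def mc_unavailability(lam, mu, horizon, n_hist, seed=1):
    """Analog (crude) Monte Carlo estimate of the time-averaged unavailability
    of a repairable two-state Markov component with failure rate `lam` and
    repair rate `mu`, each history starting in the working state."""
    rng = random.Random(seed)
    down_total = 0.0
    for _ in range(n_hist):
        t, up = 0.0, True
        while t < horizon:
            dwell = rng.expovariate(lam if up else mu)
            dwell = min(dwell, horizon - t)   # truncate the last transition
            if not up:
                down_total += dwell
            t += dwell
            up = not up
    return down_total / (horizon * n_hist)

lam, mu = 0.1, 1.0                    # illustrative per-hour rates
est = mc_unavailability(lam, mu, horizon=1000.0, n_hist=200)
exact = lam / (lam + mu)              # steady-state unavailability, ~0.0909
print(est, exact)
```

For very rare failures (tiny `lam`) this analog scheme wastes almost all histories on the up state, which is exactly the motivation for the free-flight and weighted estimators the paper develops.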
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Блінцов, Олександр Володимирович; Надточій, Анатолій Вікторович
2014-01-01
From the perspective of project management, the trends in underwater-technical support of marine archaeological expeditions were identified, the scientific problem of estimating the efficiency of these trends was defined, and a global criterion for estimating the efficiency of using underwater technologies at the drafting stage of deepwater archaeology projects was proposed. The urgency of the scientific problem lies in the presence of a large number of underwater objects in the territorial water...
Singbo, Alphonse G.; Lansink, Alfons Oude; Emvalomatis, Grigorios
2015-01-01
This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-a-vis pesticide use to measure allocative efficiency of pesticide use along productive inputs. We employ the data envelopment analysis framework and marginal cost techniques to estimate technic
Kashnikova, S N; Shcherbakov, P L; Kashnikov, V V; Tatarinov, P A; Shcherbakova, M Iu
2008-01-01
In this article, the authors present a pharmacoeconomic analysis of the efficiency of the eradication therapy regimens for disorders associated with H. pylori infection that are most common in Russia, based on their own experience. A comprehensive study of the different economic factors influencing the cost of the regimens used is carried out, and the overall efficiency of the eradication therapy administered is estimated.
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Unlike existing research, which assumes the availability of perfect channel state information (CSI) at the cooperative communication nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are assumed, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) so that the destination can estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and accounting for the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
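The trade-off captured by a rate-over-power EE metric can be illustrated on a single link: circuit power makes very low transmit power inefficient, while the logarithmic rate makes very high power inefficient, so an interior optimum exists. The gain, noise and power figures below are assumptions, and a grid search stands in for the paper's allocation algorithm:

```python
import math

def energy_efficiency(p_tx, gain, noise, p_circuit, bandwidth=1.0):
    """EE metric: achievable rate divided by total consumed power; the fixed
    circuit power p_circuit penalizes very low as well as very high p_tx."""
    rate = bandwidth * math.log2(1.0 + gain * p_tx / noise)
    return rate / (p_tx + p_circuit)

# Hypothetical link parameters; grid-search the EE-maximizing transmit power.
ee = lambda p: energy_efficiency(p, gain=4.0, noise=0.1, p_circuit=0.5)
grid = [0.01 * k for k in range(1, 501)]      # transmit powers 0.01 .. 5.00 W
best_p = max(grid, key=ee)
print(best_p, ee(best_p))
```

In the two-hop setting of the paper the same quasi-concave structure appears in each hop's power, with the estimation-phase energy adding to the denominator.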
Efficiency assessment of using satellite data for crop area estimation in Ukraine
Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga
2014-06-01
The knowledge of the crop area is a key element in estimating the total crop production of a country and, therefore, in managing agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and for a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of a study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out in three oblasts of Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments is assessed. The main criteria used for the efficiency analysis are: (i) relative efficiency, which shows by how much the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by how much the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, in order to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at the oblast level.
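The relative-efficiency criterion can be illustrated with the classical regression-estimator result: if ground and satellite measurements correlate with coefficient r, the regression estimator's large-sample variance shrinks by the factor 1 − r², so RE = 1/(1 − r²). A simulated sketch in which all numbers are assumptions, not values from the study:

```python
import random
import statistics

def relative_efficiency(ground, satellite):
    """Variance of the direct expansion estimator divided by the large-sample
    variance of the regression estimator that uses the satellite covariate:
    RE = 1 / (1 - r^2), with r the ground/satellite correlation."""
    n = len(ground)
    mg, ms = statistics.mean(ground), statistics.mean(satellite)
    cov = sum((g - mg) * (s - ms) for g, s in zip(ground, satellite)) / (n - 1)
    r = cov / (statistics.stdev(ground) * statistics.stdev(satellite))
    return 1.0 / (1.0 - r * r)

# Simulated segments: satellite-classified crop proportion vs. ground survey.
rng = random.Random(7)
sat = [rng.uniform(0.0, 1.0) for _ in range(400)]
grd = [0.1 + 0.8 * s + rng.gauss(0.0, 0.1) for s in sat]
re = relative_efficiency(grd, sat)
print(re)   # > 1: the satellite covariate reduces the estimation variance
```

Cost-efficiency then weighs this variance reduction against the price of the imagery, which is why the expensive high-resolution sensors failed the threshold.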
Marlin, Benjamin
2012-01-01
Standard maximum likelihood estimation cannot be applied to discrete energy-based models in the general case because the computation of exact model probabilities is intractable. Recent research has seen the proposal of several new estimators designed specifically to overcome this intractability, but virtually nothing is known about their theoretical properties. In this paper, we present a generalized estimator that unifies many of the classical and recently proposed estimators. We use results from the standard asymptotic theory for M-estimators to derive a generic expression for the asymptotic covariance matrix of our generalized estimator. We apply these results to study the relative statistical efficiency of classical pseudolikelihood and the recently-proposed ratio matching estimator.
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Directory of Open Access Journals (Sweden)
Archana V
2011-04-01
Full Text Available The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio- and product-type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real-life dataset. The ratio estimator using information on the population C.V. of the auxiliary variable emerges as the best estimator.
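The generic idea behind a ratio-type estimator of the C.V. can be sketched as follows: scale the sample C.V. of the study variable by the known population C.V. of the auxiliary variable. The paper's six estimators differ in detail, and the sample values and known C.V. below are made up:

```python
import statistics

def cv(values):
    """Sample coefficient of variation."""
    return statistics.stdev(values) / statistics.mean(values)

def ratio_cv_estimator(y_sample, x_sample, cv_x_pop):
    """A ratio-type estimator of the population C.V. of y: the sample C.V. of y
    is scaled by the known population C.V. of the auxiliary variable x."""
    return cv(y_sample) * (cv_x_pop / cv(x_sample))

# Hypothetical simple random sample; cv_x_pop is assumed known for the frame.
y = [12.1, 9.8, 11.4, 10.6, 13.0, 9.2]
x = [52.0, 43.5, 49.1, 46.0, 55.8, 41.2]
est = ratio_cv_estimator(y, x, cv_x_pop=0.14)
print(est)
```

When the sample C.V. of x under- or over-shoots its known population value, the same distortion is assumed to affect y, and the ratio cancels part of it.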
Institute of Scientific and Technical Information of China (English)
Akira Ogawa
1999-01-01
Cyclone dust collectors are applied in many industries. The axial-flow cyclone in particular has the simplest construction and offers high reliability in maintenance. The collection efficiency of a cyclone depends not only on the inlet gas velocity but also on the feed particle concentration: the collection efficiency increases with increasing feed particle concentration. Until now, however, the problem of how to estimate the dependence of the collection efficiency on the feed particle concentration has remained open, apart from the investigation by Muschelknautz & Brunner [6]. In this paper, therefore, a method for estimating the collection efficiency of axial-flow cyclones is proposed. Its application to geometrically similar cyclones with body diameters D1 = 30, 50, 69 and 99 mm showed good agreement with the experimental collection efficiencies described in detail in the paper by Ogawa & Sugiyama [8].
Estimates of HVAC filtration efficiency for fine and ultrafine particles of outdoor origin
Azimi, Parham; Zhao, Dan; Stephens, Brent
2014-12-01
This work uses 194 outdoor particle size distributions (PSDs) from the literature to estimate single-pass heating, ventilating, and air-conditioning (HVAC) filter removal efficiencies for PM2.5 and ultrafine particles (UFPs), drawing on size-resolved removal efficiency data for HVAC filters identified in the literature. Filters included those with a minimum efficiency reporting value (MERV) of 5, 6, 7, 8, 10, 12, 14, and 16, as well as HEPA filters. We demonstrate that although the MERV metric defined in ASHRAE Standard 52.2 does not explicitly account for UFP or PM2.5 removal efficiency, estimates of filtration efficiency for both size fractions increased with increasing MERV. Our results also indicate that outdoor PSD characteristics and assumptions for particle density and typical size-resolved infiltration factors (in the absence of HVAC filtration) do not drastically impact estimates of HVAC filter removal efficiencies for PM2.5. The impact of these factors is greater for UFPs; however, they are also somewhat predictable. Despite these findings, our results also suggest that MERV alone cannot always be used to predict UFP or PM2.5 removal efficiency, given the various size-resolved removal efficiencies of different makes and models, particularly for MERV 7 and MERV 12 filters. This information improves knowledge of how the MERV designation relates to PM2.5 and UFP removal efficiency for indoor particles of outdoor origin. Results can be used to simplify indoor air quality modeling efforts and inform standards and guidelines.
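Mapping size-resolved filter efficiencies onto a single PM2.5 number amounts to weighting each size bin's efficiency by its mass share in the outdoor PSD. The bins, mass shares and efficiencies below are illustrative assumptions, not values from the study:

```python
def pm_removal_efficiency(mass_fractions, efficiencies):
    """Overall single-pass removal efficiency for a particle-size range:
    size-resolved filter efficiencies weighted by the outdoor PSD's mass
    fraction in each size bin."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(m * e for m, e in zip(mass_fractions, efficiencies))

# Hypothetical sub-2.5 um size bins with made-up MERV-like efficiencies.
mass = [0.15, 0.25, 0.35, 0.25]   # mass shares of four PM2.5 sub-bins
eff = [0.20, 0.35, 0.60, 0.85]    # single-pass efficiency in each bin
print(pm_removal_efficiency(mass, eff))  # ~0.54
```

Because the weighting depends on the PSD, two filters with the same MERV can yield different PM2.5 efficiencies, which is the paper's caveat about MERV 7 and MERV 12.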
Estimation de l'efficacité de systèmes ternaires Estimating the Efficiency of Ternary Systems
Directory of Open Access Journals (Sweden)
Castells Pique F.
2006-11-01
Full Text Available The Murphree and vaporization efficiencies of a ternary system (benzene-heptane-toluene) are determined experimentally and from correlations, expressed in terms of dimensionless numbers, established for the three corresponding binary systems. The concentration profiles measured in a 10-tray perforated-plate pilot column and the profiles calculated using the efficiency estimated from the correlations are in good agreement, which suggests that certain binary-system efficiency prediction methods can be applied to multicomponent systems.
Estimating the net implicit price of energy efficient building codes on U.S. households
International Nuclear Information System (INIS)
Requiring energy efficiency building codes raises housing prices (or the monthly rental equivalent), but theoretically this effect might be fully offset by reductions in household energy expenditures. Whether there is a full compensating differential, or how much households are paying implicitly, is an empirical question. This study estimates the net implicit price of energy efficiency building codes, IECC 2003 through IECC 2006, for American households. Using sample data from the American Community Survey 2007, a heteroskedastic seemingly unrelated estimation approach is used to estimate hedonic price (house rent) and energy expenditure models. The value of energy efficiency building codes is capitalized into housing rents, which are estimated to increase by 23.25 percent with the codes. However, the codes provide households a compensating differential of about a 6.47 percent reduction (about $7.71) in monthly energy expenditure. Results indicate that the mean household net implicit price for these codes is about $140.87 per month in 2006 dollars ($163.19 in 2013 dollars). However, this estimated price is shown to vary significantly by region, energy type and the rent gradient. - Highlights: • House rent increases by 23.25 percent with the energy efficiency codes. • Compensating differential of the codes is 6.47 percent. • The net implicit price of the energy efficiency building codes is about $140.87
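The net implicit price described above is simply the capitalized rent increase minus the compensating energy savings. A sketch with a hypothetical household; the baseline rent and energy bill are assumptions chosen so the result lands near the reported magnitude, not figures from the study:

```python
def net_implicit_price(monthly_rent, rent_premium, energy_exp, energy_saving):
    """Net monthly price of the code: the rent increase it capitalizes into,
    minus the compensating reduction in the monthly energy bill."""
    return monthly_rent * rent_premium - energy_exp * energy_saving

# Hypothetical household: a $640 baseline rent with the 23.25% premium and
# a $119 monthly energy bill cut by 6.47%.
net = net_implicit_price(monthly_rent=640.0, rent_premium=0.2325,
                         energy_exp=119.0, energy_saving=0.0647)
print(net)   # ~141: the rent premium dominates the energy savings
```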
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of the challenges in dealing with dependent data in nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for longitudinal/clustered data. Numerical results are used to illustrate the finite-sample performance of the proposed estimators. © 2014 ISI/BS.
Institute of Scientific and Technical Information of China (English)
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. This model, a generalization of both the partially linear regression model and the varying-coefficient regression model, allows one to explore a possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates, and a real dataset is used to illustrate the approach.
Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki
2010-10-15
Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing reliable atmospheric emission inventories of mercury, they are still uncertain because they depend sensitively on many factors, such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. 534 measurements of mercury removal efficiencies for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorbers (FA), wet scrubbers (WS), electrostatic precipitators (ESP), and fabric filters (FF), and their hybrid systems. The data series of all APCD types showed Gaussian log-normality. The average removal efficiency with a 95% confidence interval was estimated for each APCD. The FA, WS, and FF with carbon and/or dry sorbent injection systems had average removal efficiencies of 75% to 82%. On the other hand, the ESP with or without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, made less than a 1% difference to the removal efficiency, and injecting activated carbon versus carbon-containing fly ash made less than a 3% difference. Estimation errors of the removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003, which resulted in an additional reduction of about 0.86 Mg of emissions in 2003. Further studies applying this methodology to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories. PMID:20713298
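Averaging efficiencies that are log-normally distributed is naturally done on the log scale: a geometric mean with a confidence interval computed on the logs and back-transformed. A sketch with made-up plant-level data, not the study's 534 measurements:

```python
import math
import statistics

def lognormal_mean_ci(effs, z=1.96):
    """Geometric mean of removal efficiencies with an approximate 95% CI,
    assuming the efficiencies are log-normally distributed (as the collected
    APCD data series were found to be)."""
    logs = [math.log(e) for e in effs]
    m = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return math.exp(m), math.exp(m - z * se), math.exp(m + z * se)

# Hypothetical plant-level removal efficiencies (fractions) for one APCD type.
effs = [0.71, 0.80, 0.76, 0.85, 0.78, 0.74, 0.82, 0.79]
mean, lo, hi = lognormal_mean_ci(effs)
print(mean, lo, hi)
```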
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Directory of Open Access Journals (Sweden)
Nabeel Hussain
2014-04-01
Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. The maximum likelihood estimation technique was used to estimate a Cobb-Douglas frontier production function. The analysis revealed an estimated mean technical efficiency of 77 percent, indicating that total output can be further increased through more efficient use of resources and technology. The estimated gamma value was 0.93, which shows that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, age having a positive and education a negative relationship with garlic output. This study suggests that, in order to increase garlic production by taking advantage of farmers' high efficiency level, the government should invest in research and development for introducing good-quality seeds to increase garlic productivity, and should organize training programs to educate farmers about garlic production.
Directory of Open Access Journals (Sweden)
Sobchak Andrii
2016-02-01
Full Text Available The concept of hyperstability of a cybernetic system is considered as applied to the task of estimating the efficiency of a virtual production enterprise. The basic factors influencing the efficiency of such an enterprise are determined. The article offers a methodology for synthesizing the static structure of a decision-support system for managers of a virtual enterprise, in particular a procedure for determining the quantitative and qualitative composition of the equipment producible at a virtual enterprise.
Estimating welfare changes from efficient pricing in public bus transit in India
Deb, Kaushik; Filippini, Massimo
2011-01-01
Three different and feasible pricing strategies for public bus transport in India are developed in a partial equilibrium framework with the objective of improving economic efficiency and ensuring revenue adequacy, namely average cost pricing, marginal cost pricing, and two-part tariffs. These are assessed not only in terms of gains in economic efficiency, but also in changes in travel demand and consumer surplus. The estimated partial equilibrium price is higher in all three pricing reg...
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Directory of Open Access Journals (Sweden)
B. Bayram
2006-01-01
Full Text Available Data on body measurements, milk yield and body weight were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relationships between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between EFE and body measurements, indicating that taller, longer, deeper and especially heavier cows were not as efficient as smaller cows.
Institute of Scientific and Technical Information of China (English)
CHUNG Warn-ill; CHOI Jun-ho; BAE Hae-young
2004-01-01
Many commercial database systems maintain histograms to summarize the contents of relations and permit efficient estimation of query result sizes and access plan costs. In spatial database systems, most spatial query predicates consist of topological relationships between spatial objects, and it is very important for the spatial query optimizer to estimate the selectivity of those predicates. In this paper, we propose a selectivity estimation scheme for spatial topological predicates based on a multidimensional histogram and a transformation scheme. The proposed scheme applies a two-partition strategy on the transformed object space to generate the spatial histogram, and estimates the selectivity of topological predicates based on the topological characteristics of the transformed space. It provides a way to estimate selectivity without excessive memory usage or additional I/Os in most spatial query optimizers.
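Histogram-based selectivity estimation of the kind extended here rests on the uniformity-within-bucket assumption. A one-dimensional sketch for a range predicate (the paper's spatial scheme works analogously on a transformed multidimensional space; all data below are synthetic):

```python
def build_histogram(values, n_buckets, lo, hi):
    """Equi-width histogram: bucket counts over [lo, hi)."""
    counts = [0] * n_buckets
    width = (hi - lo) / n_buckets
    for v in values:
        counts[min(int((v - lo) / width), n_buckets - 1)] += 1
    return counts, width

def estimate_selectivity(counts, width, lo, q_lo, q_hi):
    """Estimated fraction of rows matching q_lo <= v < q_hi, assuming values
    are uniform within each bucket (the usual optimizer assumption)."""
    total = sum(counts)
    sel = 0.0
    for i, c in enumerate(counts):
        b_lo, b_hi = lo + i * width, lo + (i + 1) * width
        overlap = max(0.0, min(b_hi, q_hi) - max(b_lo, q_lo))
        sel += c * (overlap / width)   # pro-rate partially covered buckets
    return sel / total

vals = [float(i % 100) for i in range(1000)]        # uniform values 0..99
counts, w = build_histogram(vals, n_buckets=10, lo=0.0, hi=100.0)
est = estimate_selectivity(counts, w, 0.0, q_lo=20.0, q_hi=45.0)
print(est)  # 0.25
```

The optimizer multiplies such selectivities into row-count estimates, so a bad histogram propagates directly into bad access plans.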
The output estimation of a DMU to preserve and improvement of the relative efficiency
Directory of Open Access Journals (Sweden)
Masoud Sanei
2013-10-01
Full Text Available In this paper, the inverse BCC model is used to estimate the output levels of a Decision Making Unit (DMU) when the input levels are changed, while preserving the efficiency index of all DMUs. Since the inverse BCC problem takes the form of a multi-objective nonlinear programming model (MONLP), it is not easy to solve. We therefore propose a linear programming model that gives a Pareto-efficient solution to the inverse BCC problem. We also propose a model for improving the current efficiency value of the DMU under consideration. Numerical examples are used to illustrate the proposed approaches.
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of...
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling....
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first the development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
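The shrinkage idea, weighting a robust estimate against a more precise but model-dependent one according to how much they disagree, can be sketched as below. The weight formula and all numbers are illustrative assumptions in the empirical-Bayes spirit, not the paper's exact estimator:

```python
def shrinkage_estimate(theta_robust, var_robust, theta_model):
    """Shrink a robust estimate toward a model-based one with an adaptive
    weight: strong disagreement (suggesting model misspecification) means
    less shrinkage, close agreement means more."""
    k = var_robust / (var_robust + (theta_robust - theta_model) ** 2)
    return (1.0 - k) * theta_robust + k * theta_model

# Hypothetical log odds-ratio for one haplotype effect (variance 0.01).
agree = shrinkage_estimate(0.42, 0.01, 0.40)   # models agree: pulled to 0.40
clash = shrinkage_estimate(0.42, 0.01, 0.90)   # models clash: stays near 0.42
print(agree, clash)
```

The estimator thus pays a small bias when the model assumptions hold (gaining precision) but automatically reverts to the robust estimate when they fail.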
CLASSIFICATION AND ESTIMATION OF THE EFFICIENCY OF SYSTEMS FOR UNINTERRUPTED ELECTROSUPPLY
Directory of Open Access Journals (Sweden)
Vinnikov A. V.
2015-03-01
Full Text Available In this article we present generalized block diagrams of stationary and transport uninterruptible power supply (UPS) systems, their maintenance, and the basic operating modes that provide uninterrupted supply to critical consumers. A classification of UPS systems is given. The basic classification attributes are the intended consumers of electric power (stationary or transport) and the types of primary, reserve and emergency sources and converters of electric power. UPS systems can also be classified by their connection circuits to consumers, by the kind of current (direct, alternating, high-frequency), by the permissible breaks in supply, by the type of switching equipment, and so on. To estimate the efficiency of UPS systems, we propose the following criteria: power and weight-dimension parameters, reliability parameters, quality of the electric power, and cost. Analytical expressions for calculating these efficiency criteria are given. The classification of UPS systems and their operating modes suggested in the article, together with the basic efficiency criteria, will raise the efficiency of pre-design work on systems with improved customer characteristics using a modern element base
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Directory of Open Access Journals (Sweden)
Robertson Patrick
2010-01-01
Full Text Available Multipath is today still one of the most critical problems in satellite navigation, in particular in urban environments, where the received navigation signals can be affected by blockage, shadowing, and multipath reception. Latest multipath mitigation algorithms are based on the concept of sequential Bayesian estimation and improve the receiver performance by exploiting the temporal constraints of the channel dynamics. In this paper, we specifically address the problem of estimating and adjusting the number of multipath replicas that is considered by the receiver algorithm. An efficient implementation via a two-fold marginalized Bayesian filter is presented, in which a particle filter, grid-based filters, and Kalman filters are suitably combined in order to mitigate the multipath channel by efficiently estimating its time-variant parameters in a track-before-detect fashion. Results based on an experimentally derived set of channel data corresponding to a typical urban propagation environment are used to confirm the benefit of our novel approach.
International Nuclear Information System (INIS)
Investing more in renewable energy sources and using them rationally and efficiently is vital for the sustainable growth of the world. Energy efficiency (EE) will play an increasingly important role in future generations. The aim of this work is to estimate how much the PNEf (National Plan for Energy Efficiency) launched by the Brazilian government in 2011 will save over the next 5 years by avoiding the construction of additional power plants, as well as the amount of CO2 emissions avoided. The marginal operating cost is computed for medium-term planning of the dispatching of power plants in the hydro-thermal system using Stochastic Dual Dynamic Programming, after incorporating stochastic energy efficiencies into the demand for electricity. We demonstrate that even for a modest improvement in energy efficiency (<1% per year), the savings over the next 5 years range from R$ 237 million in the conservative scenario to R$ 268 million in the optimistic scenario. By comparison, the new Belo Monte hydro-electric plant will cost R$ 26 billion, to be repaid over a 30-year period (i.e. R$ 867 million in 5 years). So in Brazil EE policies are preferable to building a new power plant. - Highlights: • It is preferable to invest in energy efficiency than to construct a big power plant. • An increase in energy efficiency policies would reduce the operating cost in Brazil. • Energy efficiency policies yield a large reduction in CO2eq emissions
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in estimation accuracy for complex density structures in data streams, computation time, and memory usage. KDE-Track is also demonstrated to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
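The interpolation-on-a-grid idea can be sketched as follows (a simplified stand-in for KDE-Track; the class name, fixed grid, and Gaussian kernel are assumptions, not the paper's design):

```python
import numpy as np

class GridKDE:
    """Sketch of a grid-based streaming KDE (simplified, not KDE-Track itself).

    The density is maintained at a fixed set of resampling points and
    updated incrementally as each stream item arrives; queries between
    grid points use linear interpolation, giving O(grid) updates and
    cheap queries instead of O(n) work per query for naive KDE.
    """

    def __init__(self, lo, hi, m=101, bandwidth=0.3):
        self.grid = np.linspace(lo, hi, m)
        self.sums = np.zeros(m)   # accumulated kernel mass at grid points
        self.n = 0
        self.h = bandwidth

    def update(self, x):
        # Gaussian kernel contribution of the new point at every grid node
        z = (self.grid - x) / self.h
        self.sums += np.exp(-0.5 * z * z) / (self.h * np.sqrt(2 * np.pi))
        self.n += 1

    def density(self, x):
        # Linear interpolation between the two nearest grid nodes
        return float(np.interp(x, self.grid, self.sums / max(self.n, 1)))
```

Feeding the stream through `update` and querying `density` keeps memory bounded by the grid size regardless of stream length.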
Technical Efficiency of Shrimp Farming in Andhra Pradesh: Estimation and Implications
Directory of Open Access Journals (Sweden)
I. Sivaraman
2015-04-01
Full Text Available Shrimp farming is a key subsector of Indian aquaculture which has seen remarkable growth in the past decades and has tremendous potential for the future. The present study analyzes the technical efficiency of the shrimp farmers of East Godavari district of Andhra Pradesh using a Stochastic Production Frontier function with technical inefficiency effects. The estimated mean technical efficiency of the farmers was 93.06%, which means the farmers operate 6.94% below the production frontier. Age, education, experience of the farmers and their membership status in farmers' associations and societies were found to have a significant effect on technical efficiency. The variation in technical efficiency also confirms differences in the extent of adoption of shrimp farming technology among the farmers. Proper technical training opportunities could help the farmers adopt improved technologies to increase their farm productivity.
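The frontier notion of technical efficiency can be sketched with corrected OLS (COLS), a simpler deterministic cousin of the stochastic frontier used in the study (illustrative; the function name and synthetic data are assumptions, not the paper's model):

```python
import numpy as np

def cols_technical_efficiency(log_y, X):
    """Corrected OLS (COLS) sketch of frontier-style technical efficiency.

    The paper uses a full stochastic production frontier with a one-sided
    inefficiency error; as a simpler stand-in, COLS fits OLS on logs,
    shifts the intercept up to the largest residual so the fitted line
    envelops the data, and reads each unit's technical efficiency as
    exp(residual - max residual), a value in (0, 1].
    """
    X1 = np.column_stack([np.ones(len(log_y)), X])
    beta, *_ = np.linalg.lstsq(X1, log_y, rcond=None)
    resid = log_y - X1 @ beta
    return np.exp(resid - resid.max())
```

A unit on the frontier scores 1.0; a score of 0.93 means it produces 93% of the frontier output for its inputs, matching the way the 93.06% figure above is read.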
International Nuclear Information System (INIS)
In recent years, gamma spectrometry using the high-purity germanium (HPGe) detector has come into widespread use to determine the activity of radioactive samples. However, the decrease in detector efficiency remarkably influences the measured gamma spectra. In this work, we estimated the decrease in efficiency of the GC1518 HPGe detector made by Canberra Industries, Inc. and located at the Center for HCMC Nuclear Techniques. It was found that the detector efficiency decreased by 8% over the 6 years from October 1999 to August 2005. The decrease in efficiency can be explained by an increase in the thickness of an inactive germanium layer, based on Monte Carlo simulation. (author)
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina;
2012-01-01
Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...
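A rough sketch of the Q-factor bookkeeping behind such a perturbing-loop measurement (assuming a simple series-resistance coil model; the function names and the 1/sqrt(R) scaling argument are illustrative, not taken from the paper):

```python
import numpy as np

def coil_resistance_from_q(q_unloaded, q_loaded, r_load_eq):
    """Extract the coil's own loss resistance from two Q measurements.

    Inductively coupling a known load adds an equivalent series
    resistance r_load_eq to the coil.  Since Q = omega*L / R_series,
    Q_u = wL/R_c and Q_l = wL/(R_c + r_load_eq); solving gives R_c.
    """
    return r_load_eq * q_loaded / (q_unloaded - q_loaded)

def relative_efficiency(r_coil):
    # B1 scales with current I and P = I^2 * R, so the efficiency
    # eta = B1/sqrt(P) scales as 1/sqrt(R) for a fixed coil geometry.
    return 1.0 / np.sqrt(r_coil)
```

For example, if loading halves the quality factor, the known load's equivalent resistance equals the coil's own loss resistance.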
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Directory of Open Access Journals (Sweden)
Cristina Mihaela VLAD
2013-12-01
Full Text Available At present there is concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now oriented toward the development of economic models integrated into software interfaces that can improve technical and financial management. Therefore, the objective of this paper is to present an estimation and evaluation model designed to increase the farmer's ability to measure the costs of production activities by using informatic systems.
Energy Technology Data Exchange (ETDEWEB)
Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)
2001-03-01
This paper estimates the gasoline price elasticity of demand for automobile fuel efficiency in Korea to examine indirectly whether the government policy of raising fuel prices is effective in inducing less consumption of fuel, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One of the advantages of this technique is that data for a single year, without involving variation in the price of gasoline, is sufficient for implementing this study. Moreover, this technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of the results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.
Energy Technology Data Exchange (ETDEWEB)
Sabatelli, V.; Marano, D.; Braccio, G.; Sharma, V.K. [ENEA CR Trisaia, Solar Energy Lab., Rotondella (Italy)
2002-11-01
The results obtained from efficiency tests conducted on a flat-plate solar collector, according to the ISO 9806/1 test procedure, have been used to determine the uncertainty in the curve-fitting parameters. The said standard, though requiring certain levels of accuracy in the measuring process, does not provide any method to determine the uncertainty of the efficiency curve parameters. The methodology used in the present paper (not provided by the ISO standard) solves the above-mentioned problem, evaluating not only the parameters and their uncertainties but also the reliability of the test procedure and its goodness of fit. In order to evaluate the effects of measurement errors on the uncertainty in the estimated parameters, a sensitivity analysis has also been conducted. A clear finding of the present investigation is the strong dependence of some uncertainties on measurement accuracy, implying that a higher accuracy level is required in the measurement of certain parameters. (Author)
Estimating the Effect of Helium and Nitrogen Mixing on Deposition Efficiency in Cold Spray
Ozdemir, Ozan C.; Widener, Christian A.; Helfritch, Dennis; Delfanian, Fereidoon
2016-04-01
Cold spray is a developing technology that is increasingly finding applications in coating similar and dissimilar metals, repairing geometric tolerance defects to extend the life of expensive parts, and additive manufacturing across a variety of industries. Expensive helium is used to accelerate the particles to higher velocities in order to achieve the highest deposit strengths and to spray hard-to-deposit materials. Minimal information is available in the literature on the effects of He-N2 mixing on coating deposition efficiency, and on how He can be conserved by gas mixing. In this study, a one-dimensional simulation method is presented for estimating the deposition efficiency of aluminum coatings, where He-N2 mixture ratios are varied. The simulation estimates are experimentally validated through velocity measurements and single-particle impact tests for Al6061.
Meyers, S.; Marnay, C.; Schumacher, K.; Sathaye, J.
2000-01-01
This paper describes a standardized method for establishing a multi-project baseline for a power system. The method provides an approximation of the generating sources that are expected to operate on the margin in the future for a given electricity system. It is most suitable for small-scale electricity generation and electricity efficiency improvement projects. It allows estimation of one or more carbon emissions factors that represent the emissions avoided by projects, striking a bala...
Ambient vibrations efficiency for building dynamic characteristics estimate and seismic evaluation.
Dunand, François
2005-01-01
Ambient vibrations are low-amplitude mechanical vibrations generated by human and natural activities. By forcing engineering structures into vibration, they can be used to estimate structural dynamic characteristics. The goal of this study is to compare building dynamic characteristics derived from ambient vibrations with those derived from more energetic excitations (e.g. earthquakes). This study validates the efficiency of this method and shows that ambient vibration results ...
Estimating the efficiency of sustainable development by South African mining companies
Oberholzer, Merwe; Prinsloo, Thomas Frederik
2011-01-01
The purpose of the study was to develop a model, using data envelopment analysis (DEA), in order to estimate the relative efficiency of nine South African listed mining companies in their efforts to convert environmental impact into economic and social gains for shareholders and other stakeholders. The environmental impact factors were used as input variables, that is, greenhouse gas emissions, water usage and energy usage, and the gains for shareholders and other stakeholders were used as ou...
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Technical and Scale Efficiency in Spanish Urban Transport: Estimating with Data Envelopment Analysis
Directory of Open Access Journals (Sweden)
I. M. García Sánchez
2009-01-01
Full Text Available The paper undertakes a comparative efficiency analysis of public bus transport in Spain using Data Envelopment Analysis. A procedure for efficiency evaluation was established with a view to estimating its technical and scale efficiency. Principal components analysis allowed us to reduce a large number of potential measures of supply-side, demand-side, and quality outputs to three statistical factors assumed in the analysis of the service. A statistical analysis (Tobit regression) shows that efficiency levels are negatively related to population density and the peak-to-base ratio. Nevertheless, efficiency levels are not related to the form of ownership (public versus private). The results obtained for Spanish public transport show that the average pure technical and scale efficiencies are 94.91% and 52.02%, respectively. The excess of resources is around 6%, and the increase in accessibility of the service, one of the principal components summarizing the large number of output measures, is extremely important as a quality parameter in its performance.
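An input-oriented CCR DEA score of the kind used in such studies can be computed with one small linear program per decision-making unit. The sketch below is illustrative (it is not the paper's model and ignores the scale-efficiency decomposition); it assumes SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA sketch (illustrative).

    X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs.
    For each DMU k solve  min theta  s.t.  sum_j lam_j x_j <= theta*x_k,
    sum_j lam_j y_j >= y_k,  lam >= 0.  Returns theta in (0, 1] per DMU.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        # decision variables: [theta, lam_1, ..., lam_n]
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.column_stack([-X[k], X.T])          # inputs: lam'x <= theta*x_k
        A_out = np.column_stack([np.zeros(s), -Y.T])  # outputs: lam'y >= y_k
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)
```

A unit that uses twice the input of a peer for the same output receives a score of 0.5, i.e. it could radially contract its inputs by half.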
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set.
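A toy illustration of the limitation discussed in this abstract: with three hypothetical bicriterion values (both objectives minimized), the point (2, 2) is Pareto efficient but unsupported, so no weighted-sum scan can recover it:

```python
import numpy as np

def pareto_set(points):
    """Indices of Pareto-efficient points when minimising both objectives."""
    pts = np.asarray(points, float)
    eff = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            eff.append(i)
    return eff

def weighted_sum_optima(points, n_weights=101):
    """Indices reachable by minimising w*f1 + (1-w)*f2 over a weight sweep."""
    pts = np.asarray(points, float)
    found = set()
    for w in np.linspace(0.0, 1.0, n_weights):
        found.add(int(np.argmin(w * pts[:, 0] + (1 - w) * pts[:, 1])))
    return found

# Hypothetical bicriterion values for three candidate permutations:
pts = [(0.0, 3.0), (3.0, 0.0), (2.0, 2.0)]
```

All three points are Pareto efficient, yet (2, 2) lies inside the convex hull of the other two, so every weight vector prefers one of the extremes; this is exactly the class of solutions a weighted-sum approach misses.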
International Nuclear Information System (INIS)
The ballistic electron wave swing device has previously been presented as a possible candidate for a simple power conversion technique in the THz domain. This paper gives a simulative estimation of the power conversion efficiency. The harmonic balance simulations use an equivalent circuit model, which is also derived in this work from a mechanical model. To verify the validity of the circuit model, current waveforms are compared to Monte Carlo simulations of identical setups. Model parameters are given for a wide range of device configurations. The device configuration exhibiting the most conforming waveform is used further for determining the best conversion efficiency. The corresponding simulation setup is described. Simulation results implying a conversion efficiency of about 22% are presented. (paper)
Schildbach, Christian; Ong, Duu Sheng; Hartnagel, Hans; Schmidt, Lorenz-Peter
2016-06-01
Estimation of coupling efficiency of optical fiber by far-field method
Kataoka, Keiji
2010-09-01
Coupling efficiency to a single-mode optical fiber can be estimated from the far-field amplitudes of the incident beam and the optical fiber mode. We call this calculation the far-field method (FFM) in this paper. The coupling efficiency by FFM is formulated including the effects of optical aberrations, vignetting of the incident beam, and misalignments of the optical fiber such as defocus, lateral displacement, and angular deviation in the arrangement of the fiber. As a result, it is shown that the coupling efficiency is proportional to the central intensity of the focused spot, i.e., the Strehl intensity of a virtual beam determined by the incident beam and the mode of the optical fiber. Using the FFM, a typical optical system in which a laser beam is coupled to an optical fiber by a lens of finite numerical aperture (NA) is analyzed for several cases of amplitude distributions of the incident light.
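The standard mode-overlap formula underlying such coupling-efficiency calculations can be sketched in one dimension (illustrative; by Parseval's theorem the same overlap can be evaluated in either the near field or the far field, which is the essence of the FFM):

```python
import numpy as np

def coupling_efficiency(e_in, e_mode, dx):
    """Mode-overlap coupling efficiency (standard formula, 1-D sketch).

    eta = |<E_in, E_mode>|^2 / (<E_in, E_in> <E_mode, E_mode>),
    where <.,.> is the overlap integral over the transverse coordinate.
    """
    num = np.abs(np.sum(e_in * np.conj(e_mode)) * dx) ** 2
    den = (np.sum(np.abs(e_in) ** 2) * dx) * (np.sum(np.abs(e_mode) ** 2) * dx)
    return num / den
```

For two Gaussian fields exp(-x^2/w^2) with waists w1 and w2, the formula reduces analytically to 2*w1*w2/(w1^2 + w2^2), which the numeric overlap reproduces.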
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of these hydro-physical limits, identified as the permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and well defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR, and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for landcover maps created in SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and products modelled with WOFOST is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
Computationally efficient permutation-based confidence interval estimation for tail-area FDR
Directory of Open Access Journals (Sweden)
Joshua eMillstein
2013-09-01
Full Text Available Challenges of satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regard to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach relating sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24-hour REM sleep (FDR = .15 (.08, .26)). Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in wnt signaling and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility to choose a significance threshold based on tolerance for false discoveries and precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to define significance, such as the minimum estimated FDR, an option that is especially useful for weak effects, often observed in studies of complex diseases.
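The basic tail-area permutation FDR estimate referred to above can be sketched as follows (simplified; it omits the paper's overdispersed-binomial CI machinery, and `permutation_fdr` is a hypothetical name, not the authors' code):

```python
import numpy as np

def permutation_fdr(obs_stats, perm_stats, threshold, pi0=1.0):
    """Tail-area FDR from permutations (simplified sketch).

    obs_stats: observed test statistics (n_tests,);
    perm_stats: (B, n_tests) statistics under B permutations of labels.
    The FDR at threshold t is estimated as
    pi0 * (average number of permutation positives) / (observed positives).
    """
    obs_pos = np.sum(obs_stats >= threshold)
    if obs_pos == 0:
        return np.nan
    perm_pos = np.sum(perm_stats >= threshold, axis=1)  # per permutation
    return min(1.0, pi0 * perm_pos.mean() / obs_pos)
```

The paper's contribution is precisely to attach a CI to this point estimate by modelling the per-permutation counts `perm_pos` as overdispersed binomial draws.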
Efficient PU Mode Decision and Motion Estimation for H.264/AVC to HEVC Transcoder
Directory of Open Access Journals (Sweden)
Zong-Yi Chen
2014-04-01
Full Text Available H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes fast prediction unit (PU) decision and fast motion estimation. Given the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of motion vectors (MVs) extracted from H.264/AVC can be reused to predict the current encoding PU of HEVC. Furthermore, the MVs from H.264/AVC are used to decide the search range of the PU during motion estimation. Simulation results show that the proposed algorithm can save up to 53% of the encoding time while maintaining the rate-distortion (R-D) performance of HEVC.
Efficient methods for joint estimation of multiple fundamental frequencies in music signals
Pertusa, Antonio; Iñesta, José M.
2012-12-01
This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture considering interactions with other sources and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates are first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended considering adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests yielding state of the art results with a very low computational burden.
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Directory of Open Access Journals (Sweden)
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and confident estimates of forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. Precision estimates of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, were assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5-6.0 times as many observations for plot sizes of 700-1900 m2 to achieve the same precision as an inventory supported by ALS data.
Study of grain alignment efficiency and a distance estimate for small globule CB4
International Nuclear Information System (INIS)
We study the polarization efficiency (defined as the ratio of polarization to extinction) of stars in the background of the small, nearly spherical and isolated Bok globule CB4 to understand the grain alignment process. A decrease in polarization efficiency with an increase in visual extinction is noticed. This suggests that the polarization observed along lines of sight that intercept a Bok globule is dominated by dust grains in the outer layers of the globule. This finding is consistent with the results obtained for other clouds in the past. We determined the distance to the cloud CB4 using near-infrared photometry (2MASS JHKS colors) of moderately obscured stars located at the periphery of the cloud. From the extinction-distance plot, the distance to this cloud is estimated to be (459 ± 85) pc. (paper)
Efficient estimation of decay parameters in acoustically coupled-spaces using slice sampling.
Jasa, Tomislav; Xiang, Ning
2009-09-01
Room-acoustic energy decay analysis of acoustically coupled-spaces within the Bayesian framework has proven valuable for architectural acoustics applications. This paper describes an efficient algorithm termed slice sampling Monte Carlo (SSMC) for room-acoustic decay parameter estimation within the Bayesian framework. This work combines the SSMC algorithm and a fast search algorithm in order to efficiently determine decay parameters, their uncertainties, and inter-relationships with a minimum amount of required user tuning and interaction. The large variations in the posterior probability density functions over multidimensional parameter spaces imply that an adaptive exploration algorithm such as SSMC can have advantages over the existing importance sampling Monte Carlo and Metropolis-Hastings Markov Chain Monte Carlo algorithms. This paper discusses implementation of the SSMC algorithm, its initialization, and convergence using experimental data measured from acoustically coupled-spaces. PMID:19739741
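The univariate building block of slice sampling (Neal's step-out and shrinkage procedure) can be sketched as follows; the paper applies this idea to multidimensional decay-parameter posteriors, which this one-dimensional sketch does not attempt.

```python
import math
import random

def slice_sample(logp, x0, n, width=1.0, seed=0):
    """Univariate slice sampler with step-out: draw a level under the log
    density, expand an interval until it brackets the slice, then shrink
    it toward the current point until a sample inside the slice is found."""
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n):
        logy = logp(x) + math.log(rng.random())   # auxiliary slice level
        lo = x - width * rng.random()             # random initial bracket
        hi = lo + width
        while logp(lo) > logy:                    # step out to the left
            lo -= width
        while logp(hi) > logy:                    # step out to the right
            hi += width
        while True:                               # shrinkage sampling
            x1 = rng.uniform(lo, hi)
            if logp(x1) > logy:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        xs.append(x)
    return xs
```

A key practical advantage, echoed in the abstract, is that the only tuning parameter is the initial bracket `width`, and the sampler self-adjusts via step-out and shrinkage.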
Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method
Directory of Open Access Journals (Sweden)
Gheorghe Grigoras
2013-09-01
Full Text Available Estimation of power/energy losses constitutes an important tool for efficient planning and operation of electric distribution systems, especially in a free energy market environment. For further development of energy-loss reduction plans and for prioritizing different measures and investment projects, an analysis of the nature and causes of losses in the system and in its different parts is needed. In this paper, an efficient method for the power flow problem of medium-voltage distribution networks, under conditions of missing information about the nodal loads, is presented. Using this method, the power/energy losses in power transformers and lines can be obtained. The test results, obtained for a 20 kV real distribution network from Romania, confirmed the validity of the proposed method.
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2=0.82, RMSE=29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically
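For reference, the Penman-Monteith combination equation used to convert canopy conductance into transpiration can be written in its standard form (symbols follow common usage, not necessarily the study's notation):

```latex
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a\, c_p\, D\, g_a}
                     {\Delta \;+\; \gamma\,\left(1 + g_a / G_c\right)}
```

Here λE is the latent heat flux, Δ the slope of the saturation vapour pressure curve, Rn − G the available energy, ρa and cp the air density and specific heat, D the vapour pressure deficit, ga the aerodynamic conductance, Gc the canopy conductance, and γ the psychrometric constant. With Gc inferred from MODIS reflectance, all remaining terms come from meteorological forcing.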
Energy efficiency estimation of a steam powered LNG tanker using normal operating data
Directory of Open Access Journals (Sweden)
Sinha Rajendra Prasad
2016-01-01
Full Text Available A ship’s energy efficiency performance is generally estimated by conducting special sea trials of a few hours under very controlled environmental conditions of calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship’s Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, and therefore comparing energy performance with the benchmark performance indicator is not truly valid. In such situations a higher fuel consumption reading from the ship fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low load operations or incorrect trim [1]. Thus a better and more accurate approach to determine the energy efficiency of the ship attributable only to main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship’s fuel consumption [2]. The author in this paper identifies parameters of a suitable filter to be used on the daily report data of a typical LNG tanker of 33000 kW shaft power to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship and compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship and thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is
Directory of Open Access Journals (Sweden)
José A. Adell
2009-01-01
Full Text Available We give efficient algorithms, as well as sharp estimates, to compute the Kolmogorov distance between the binomial and Poisson laws with the same mean λ. Such a distance is eventually attained at the integer part of λ+1/2−√(λ+1/4). The exact Kolmogorov distance for λ≤2−√2 is also provided. The preceding results are obtained as a concrete application of a general method involving a differential calculus for linear operators represented by stochastic processes.
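For orientation, the Kolmogorov distance in question can also be computed by brute force as the maximum absolute difference between the two cumulative distribution functions. This direct sketch is for illustration only; the paper's contribution is sharp closed-form estimates, not this enumeration.

```python
import math

def kolmogorov_distance(n, p, kmax=200):
    """Maximum absolute difference between the Binomial(n, p) and
    Poisson(np) cumulative distribution functions, computed directly."""
    lam = n * p
    poi = math.exp(-lam)          # Poisson pmf at k = 0
    cdf_b = cdf_p = dist = 0.0
    for k in range(kmax):
        if k <= n:
            cdf_b += math.comb(n, k) * p**k * (1 - p)**(n - k)
        cdf_p += poi
        dist = max(dist, abs(cdf_b - cdf_p))
        poi *= lam / (k + 1)      # pmf recurrence: P(k+1) = P(k)·λ/(k+1)
    return dist
```

The iterative Poisson recurrence avoids overflowing factorials, and truncating the sum at `kmax` is safe once both tails are negligible.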
An Efficient Estimation of Distribution Algorithm for Job Shop Scheduling Problem
He, Xiao-Juan; Zeng, Jian-Chao; Xue, Song-Dong; Wang, Li-Fang
An estimation of distribution algorithm for the job shop scheduling problem was proposed, with a probability model based on permutation information of neighboring operations. The probability model was built using frequency information of pair-wise neighboring operations. Then the structure of the optimal individual was marked, and the operations of the optimal individual were partitioned into independent sub-blocks. To avoid repeated search in the same area and to improve search speed, each sub-block was adjusted as a whole. In addition, stochastic adjustment of the operations within each sub-block was introduced to enhance the local search ability. The experimental results show that the proposed algorithm is more robust and efficient.
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Directory of Open Access Journals (Sweden)
Robert Aidoo
2012-06-01
Full Text Available The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency analyses were performed using the field data. There was a long chain of more than three channels through which yams moved from the producer to the final consumer. Yam marketing was found to be a profitable venture for all the key players in the yam marketing chain. A net marketing margin of about GH¢15.52 (US$9.13) was obtained when the farmer himself sold 100 tubers of yams in the market rather than at the farm gate. The net marketing margin obtained by wholesalers was estimated at GH¢27.39 per 100 tubers of yam sold, equivalent to about 61% of the gross margin obtained. The net marketing margin for retailers was estimated at GH¢15.37, representing 61% of the gross margin obtained. A net marketing margin of GH¢33.91 was obtained for every 100 tubers of yam transported across Ghana’s borders by cross-border traders. Generally, the study found that the net marketing margin was highest for cross-border yam traders, followed by wholesalers. Yam marketing activities among retailers, wholesalers and cross-border traders were found to be highly efficient, with efficiency ratios in excess of 100%. However, yam marketing among producer-sellers was found to be inefficient, with an efficiency ratio of about 86%. The study recommended policies and strategies to be adopted by central and local government authorities to address key constraints such as poor road network, limited financial resources, poor storage facilities and high cost of transportation that serve as
Estimation of Ship-plume Ozone Production Efficiency: ITCT 2K2 Case Study
Kim, H.; Kim, Y.; Song, C.
2013-12-01
The Ozone Production Efficiency (OPE) of a ship plume was evaluated in this study, based on ship-plume photochemical/dynamic model simulations and the ship-plume composition data measured during the ITCT 2K2 (Intercontinental Transport and Chemical Transformation 2002) aircraft campaign. The averaged instantaneous OPEs (OPEi) estimated via the ship-plume photochemical/dynamic modeling for the ITCT 2K2 ship plume ranged between 4.61 and 18.92, showing that the values vary with the extent of chemical evolution (or chemical stage) of the ship plume and the stability classes of the marine boundary layer (MBL). Together with OPEi, the equivalent OPEs (OPEe) for the entire ITCT 2K2 ship plume were also estimated. The OPEe values varied between 9.73 (for the stable MBL) and 12.73 (for the moderately stable MBL), which agreed well with the OPEe of 12.85 estimated based on the ITCT 2K2 ship-plume observations. It was also found that both the model-simulated and observation-based OPEe inside the ship plume were only 0.29-0.38 times the OPEe calculated/measured outside the ITCT 2K2 ship plume. The lower OPEs inside the ship plume were due to the high levels of NOx. Possible implications of this ship-plume OPE study for global chemistry-transport modeling are also discussed.
Estimation of a ship-plume ozone production efficiency: ITCT 2K2 case study
Kim, Hyun Soo; Kim, Yong Hoon; Song, Chul Han
2015-04-01
The Ozone Production Efficiency (OPE) of a ship plume was first evaluated in this study, based on ship-plume photochemical/dynamic model simulation and the ship-plume composition data measured during the ITCT 2K2 (Intercontinental Transport and Chemical Transformation 2002) aircraft campaign. The averaged instantaneous OPEs (OPEi) estimated via the ship-plume photochemical/dynamic modeling for the ITCT 2K2 ship plume ranged between 4.61 and 18.92, showing that the values vary with the extent of chemical evolution (or chemical stage) of the ship plume and the stability classes of the marine boundary layer (MBL). Together with OPEi, the equivalent OPEs (OPEe) for the entire ITCT 2K2 ship plume were also estimated. The OPEe values varied between 9.73 (for the stable MBL) and 12.73 (for the moderately stable MBL), which agreed well with the OPEe of 12.85 estimated based on the ITCT 2K2 ship-plume observations. It was also found that both the model-simulated and observation-based OPEe inside the ship plume were only 0.29-0.38 times the OPEe calculated/measured outside the ITCT 2K2 ship plume. Such low OPEs inside the ship plume were due to the high levels of NO and non-linear ship-plume photochemistry. Possible implications of this ship-plume OPE study for global chemistry-transport modeling are also discussed.
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-01
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
Marigodov, V. K.
2011-01-01
The possibility of using linguistic diagnosis to estimate the efficiency and noise immunity of a radio communication system is shown. Membership functions for one of the system parameters are built on the basis of direct expert questioning.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted ways. To reduce the computational cost, data streams are often studied in the form of a condensed representation, e.g., a Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
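The core idea of maintaining kernel density values at resampling points and interpolating between them can be sketched as below. This is an illustrative simplification with a fixed grid and an exponential forgetting factor; KDE-Track's adaptive resampling and update rules are not reproduced here, and the class and parameter names are invented for the sketch.

```python
import math

class OnlineKDE:
    """Toy online density estimator: Gaussian-kernel density values are
    maintained at fixed resampling points, updated per arriving sample
    with an exponential moving average, and linearly interpolated."""

    def __init__(self, grid, bandwidth=0.5, decay=0.01):
        self.grid = list(grid)                 # ascending resampling points
        self.h = bandwidth
        self.decay = decay                     # forgetting factor for drift
        self.density = [0.0] * len(self.grid)

    def update(self, x):
        norm = self.h * math.sqrt(2 * math.pi)
        for i, g in enumerate(self.grid):
            k = math.exp(-0.5 * ((g - x) / self.h) ** 2) / norm
            self.density[i] = (1 - self.decay) * self.density[i] + self.decay * k

    def pdf(self, x):
        # Linear interpolation between the two nearest resampling points.
        if x <= self.grid[0]:
            return self.density[0]
        if x >= self.grid[-1]:
            return self.density[-1]
        for i in range(1, len(self.grid)):
            if x <= self.grid[i]:
                g0, g1 = self.grid[i - 1], self.grid[i]
                w = (x - g0) / (g1 - g0)
                return (1 - w) * self.density[i - 1] + w * self.density[i]
```

The forgetting factor makes the model track a drifting stream; a smaller `decay` gives a smoother but slower-adapting estimate.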
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimate of the parameters of a complex 2-D sinusoid, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type if the time and frequency variables are switched. The method consists of first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula for obtaining the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
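The coarse-to-fine structure (FFT maximizer followed by Newton refinement of the cost function) can be illustrated in 1-D. This sketch uses numerical derivatives of the periodogram instead of the paper's barycentric formula, and handles one complex sinusoid only.

```python
import numpy as np

def estimate_freq(x):
    """Frequency (cycles/sample) of a single complex sinusoid: coarse FFT
    peak, then a few Newton iterations on the negated periodogram."""
    m = len(x)
    f = np.argmax(np.abs(np.fft.fft(x))) / m      # coarse DFT estimate

    def cost(f):
        # Negated periodogram: minimized at the ML frequency estimate.
        return -np.abs(np.sum(x * np.exp(-2j * np.pi * f * np.arange(m)))) ** 2

    eps = 1e-6
    for _ in range(5):                             # Newton refinement
        d1 = (cost(f + eps) - cost(f - eps)) / (2 * eps)
        d2 = (cost(f + eps) - 2 * cost(f) + cost(f - eps)) / eps ** 2
        if d2 <= 0:                                # left the convex region
            break
        f -= d1 / d2
    return f % 1.0
```

Because the FFT already lands within the main lobe of the periodogram peak, a handful of Newton steps suffices, which is why the refinement cost is independent of the data size.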
Mohammed Abo-Zahhad; Sabah M. Ahmed; Ahmed Zakaria
2012-01-01
This paper presents an efficient electrocardiogram (ECG) signals compression technique based on QRS detection, estimation, and 2D DWT coefficients thresholding. Firstly, the original ECG signal is preprocessed by detecting QRS complex, then the difference between the preprocessed ECG signal and the estimated QRS-complex waveform is estimated. 2D approaches utilize the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. The error signal is cut a...
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
Indian Academy of Sciences (India)
Nicolle V Sydney; Emygdio La Monteiro-Filho
2011-03-01
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving manual wear of teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for reading of growth layers. Growth layers were evidenced in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth, there was a significant decrease of growth layer group thickness, hindering the counting of layers. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear and decalcification technique. Analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the use of the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of the calf, young and adult classes, which is important for general population studies.
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Flammia, Steven T; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has only a few non-zero eigenvalues, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We exhibit two complementary ways of making this intuition precise. On the one hand, we show that the sample complexity decreases with the rank of the density operator. In other words, fewer copies of the state need to be prepared in order to estimate a low-rank density matrix. On the other hand---and maybe more surprisingly---we prove that unknown low-rank states may be reconstructed using an incomplete set of measurement settings. The method does not require any a priori assumptions about the unknown state, uses only simple Pauli measurements, and can be efficiently and unconditionally certified. Our results extend earlier work on compressed tomography, building on ideas from compressed sensing and matrix completion. Instrumental to the improved analysis are new error bounds for compressed tomography, based on the ...
An Efficient Algorithm for Contact Angle Estimation in Molecular Dynamics Simulations
Directory of Open Access Journals (Sweden)
Sumith YD
2015-01-01
Full Text Available It is important to find the contact angle of a liquid to understand its wetting properties, capillarity and surface interaction energy with a surface. The estimation of contact angle from Non-Equilibrium Molecular Dynamics (NEMD), where we need to track the changes in contact angle over a period of time, is challenging compared to the estimation from a single image in an experimental measurement. Often such molecular simulations involve a finite number of molecules above some metallic or non-metallic substrate and coupled to a thermostat. The identification of the profile of the droplet formed during this time is difficult and computationally expensive to process as an image. In this paper a new algorithm is explained which can efficiently calculate the time-dependent contact angle from an NEMD simulation just by processing the molecular coordinates. The algorithm implements many simple yet accurate mathematical methods, especially to remove the vapor molecules and noise data, and thereby calculates the contact angle with greater accuracy. To further demonstrate the capability of the algorithm, a simulation study is reported which compares the influence of different thermostats on the contact angle in a Molecular Dynamics (MD) simulation of water over a platinum surface.
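Once the vapor molecules are removed and the droplet's 2-D boundary profile is extracted, a common geometric step is to fit a circle to the boundary and intersect it with the substrate plane. The sketch below uses an algebraic (Kasa) least-squares circle fit and the spherical-cap relation; it illustrates the geometry only and is not the paper's exact algorithm.

```python
import numpy as np

def contact_angle(points, surface_z=0.0):
    """Contact angle (degrees) from droplet boundary points (x, z):
    fit a circle x^2 + z^2 + D x + E z + F = 0 in least squares, then
    use the cap relation cos(theta) = -(centre height above surface)/r."""
    x, z = points[:, 0], points[:, 1]
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cz = -D / 2.0, -E / 2.0                  # fitted circle centre
    r = np.sqrt(cx**2 + cz**2 - F)               # fitted circle radius
    cos_theta = np.clip(-(cz - surface_z) / r, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

A centre above the surface gives an angle greater than 90° (poorly wetting), a centre below the surface less than 90° (wetting), matching the usual convention.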
Mökkönen, Harri; Jónsson, Hannes
2016-01-01
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates i...
Analytical estimates of efficiency of attractor neural networks with inborn connections
Directory of Open Access Journals (Sweden)
Solovyeva Ksenia
2016-01-01
Full Text Available The analysis is restricted to the features conferred on neural networks by their inborn (not learned) connections. We study attractor neural networks in which, for almost all operation time, the activity resides in close vicinity of a relatively small number of attractor states. The number of the latter, M, is proportional to the number of neurons in the neural network, N, while the total number of states in it is 2^N. The unified procedure of growth/fabrication of neural networks with sets of all attractor states with dimensionality d=0 and d=1, based on model molecular markers, is studied in detail. The specificity of the networks (d=0 or d=1) depends on the topology (i.e., the set of distances between elements) which can be provided to the set of molecular markers by their physical nature. The neural network parameter estimates, and the trade-offs between them in attractor neural networks, are calculated analytically. The proposed mechanisms reveal simple and efficient ways of implementing, in artificial as well as natural neural networks, multiplexity, i.e. the use of the activity of single neurons to represent multiple values of the variables operated on by the neural systems. It is discussed how neuronal multiplexity provides efficient and reliable ways of performing functional operations in neural systems.
Gallego, Francisco Javier; Stibig, Hans Jürgen
2013-06-01
Several projects dealing with land cover area estimation in large regions consider samples of sites to be analysed with high or very high resolution satellite images. This paper analyses the impact of stratification on the efficiency of sampling schemes of large-support units or clusters with a size between 5 km × 5 km and 30 km × 30 km. Cluster sampling schemes are compared with samples of unclustered points, both without and with stratification. The correlograms of land cover classes provide a useful tool to assess the sampling value of clusters in terms of variance; this sampling value is expressed as “equivalent number of points” of a cluster. We show that the “equivalent number of points” is generally higher for stratified cluster sampling than for non-stratified cluster sampling, whose values remain however moderate. When land cover data are acquired by photo-interpretation of tiles extracted from larger images, such as Landsat TM, a sampling plan based on a larger number of smaller clusters might be more efficient.
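The "equivalent number of points" of a cluster can be related to the mean pairwise correlation between its points by a standard design-effect argument. The formula below is that generic argument, offered as an assumption for orientation; the paper itself derives the sampling value of clusters from land-cover correlograms rather than from a single mean correlation.

```python
def equivalent_points(m, mean_correlation):
    """Equivalent number of independent points of an m-point cluster,
    given the mean pairwise correlation rho between its points:
    n_eq = m / (1 + (m - 1) * rho). With rho = 0 the cluster is worth m
    independent points; with rho = 1 it is worth a single point."""
    return m / (1 + (m - 1) * mean_correlation)
```

Stratification lowers the within-cluster correlation of the residual variation, which is why it raises the equivalent number of points of the same cluster.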
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
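The smoothing-and-thresholding update for the multi-label sub-problem can be sketched as follows. This is an illustrative simplification in Python (the paper provides Matlab code): each label's indicator function is blurred, combined with a data-fidelity score, and each pixel takes the best label; the box smoothing and the `reg` weight are assumptions of the sketch.

```python
import numpy as np

def box_smooth(u):
    """4-neighbour averaging with replicate padding (a crude smoother)."""
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + u) / 5.0

def multilabel_segment(scores, iterations=10, reg=0.5):
    """Alternate a global smoothing of each label's indicator function with
    a winner-take-all threshold. scores[k] is the data fidelity of label k
    at each pixel; reg weighs spatial regularity."""
    labels = np.argmax(scores, axis=0)
    k = scores.shape[0]
    for _ in range(iterations):
        total = np.stack([scores[i] + reg * box_smooth((labels == i).astype(float))
                          for i in range(k)])
        labels = np.argmax(total, axis=0)
    return labels
```

Because each sweep is a global update (smooth, then threshold everywhere at once), it avoids the fine-detail traps that plague purely local curve-evolution solutions.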
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
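The statistical core of the note, estimating the uncertainty of any image statistic from multiple bootstrap realisations of the event data, can be sketched in a few lines. This toy version resamples a plain event list on the CPU; the real pipeline resamples list-mode data on the GPU and feeds each realisation through reconstruction.

```python
import random

def bootstrap_variance(events, statistic, n_boot=200, seed=0):
    """Variance of an arbitrary statistic of the event data, estimated by
    resampling the events with replacement n_boot times and taking the
    sample variance of the recomputed statistic."""
    rng = random.Random(seed)
    n = len(events)
    values = []
    for _ in range(n_boot):
        resampled = [events[rng.randrange(n)] for _ in range(n)]
        values.append(statistic(resampled))
    mean = sum(values) / n_boot
    return sum((v - mean) ** 2 for v in values) / (n_boot - 1)
```

The same loop works for any statistic, including a full SUVR computation, which is why fast generation of realisations is the bottleneck the note addresses.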
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better for predicting the spatial distribution of an invasive species. For the latter, our approach does as well as the classical models for predicting the severity of a root disease, as the probit link is shown to be appropriate. Although this article is written for binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs, such as Poisson and Gamma SGLMMs, are provided.
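The robit link mentioned above replaces the normal CDF of the probit link with a Student-t CDF. A short sketch (independent of the spatial model) shows the two properties the abstract relies on: heavier tails for low degrees of freedom, and convergence to probit as the degrees of freedom grow.

```python
import numpy as np
from scipy import stats

# Robit link: the inverse link is the CDF of a Student-t distribution.
def robit_inv_link(eta, df):
    """Map linear predictor eta to a success probability via the t_df CDF."""
    return stats.t.cdf(eta, df=df)

eta = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])

# With few degrees of freedom the tails are heavy, so an extreme linear
# predictor is not forced to a probability near 0/1 as under probit;
# this is the source of robustness against outlying observations.
p_robit = robit_inv_link(eta, df=3)
p_probit = stats.norm.cdf(eta)
print(p_robit[0], p_probit[0])  # robit leaves far more mass in the tail

# As df grows, robit converges to probit.
print(np.max(np.abs(robit_inv_link(eta, df=1e6) - p_probit)))
```

The link-function degrees of freedom are one of the parameters the paper estimates by empirical Bayes rather than fixing a priori.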
Directory of Open Access Journals (Sweden)
Carlborg Örjan
2007-11-01
Full Text Available Abstract Background Identity by descent (IBD) matrix estimation is a central component in mapping of Quantitative Trait Loci (QTL) using variance component models. A large number of algorithms have been developed for estimation of IBD between individuals in populations at discrete locations in the genome, for use in genome scans to detect QTL affecting various traits of interest in experimental animal, human and agricultural pedigrees. Here, we propose a new approach to estimate IBD as continuous functions rather than as discrete values. Results Estimation of IBD functions improved the computational efficiency and memory usage in genome scanning for QTL. We have explored two approaches to obtain continuous marker-bracket IBD functions. By re-implementing an existing and fast deterministic IBD-estimation method, we show that this approach results in IBD functions that produce exactly the same IBD as the original algorithm, but with a greater than 2-fold improvement in computational efficiency and a considerably lower memory requirement for storing the resulting genome-wide IBD. By developing a general IBD function approximation algorithm, we show that it is possible to estimate marker-bracket IBD functions from IBD matrices estimated at marker locations by any existing IBD estimation algorithm. The general algorithm provides approximations that lead to QTL variance component estimates that, even in worst-case scenarios, are very similar to the true values. The approach of storing IBD as polynomial IBD functions was also shown to reduce the amount of memory required in genome scans for QTL. Conclusion In addition to direct improvements in computational and memory efficiency, estimation of IBD functions is a fundamental step needed to develop and implement new efficient optimization algorithms for high-precision localization of QTL. Here, we discuss and test two approaches for estimating IBD functions based on existing IBD estimation algorithms. Our
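The core idea of turning discrete-location IBD estimates into continuous IBD functions can be sketched generically with piecewise polynomials; the marker positions and IBD values below are hypothetical, and the paper's own approximation algorithm is not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical marker positions (cM) and IBD coefficients for one
# individual pair, as an existing discrete-location algorithm might emit.
marker_pos = np.array([0.0, 12.5, 30.0, 47.5, 60.0])
ibd_at_markers = np.array([0.50, 0.55, 0.72, 0.60, 0.51])

# Fit a piecewise-polynomial (cubic spline) IBD function; only the
# polynomial coefficients per marker bracket need to be stored, which is
# where the memory saving relative to dense IBD matrices comes from.
ibd_fun = CubicSpline(marker_pos, ibd_at_markers)

# IBD can now be evaluated at any position inside the scanned region,
# e.g. on a fine grid for high-precision QTL localization.
grid = np.linspace(0.0, 60.0, 121)
ibd_dense = np.clip(ibd_fun(grid), 0.0, 1.0)  # keep values in [0, 1]
print(ibd_dense[:5])
```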
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model directly with a ROM in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
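Implicit sampling with a linear map can be illustrated on a one-dimensional toy posterior: find the MAP point, map standard-normal reference samples through a linear transformation scaled by the Hessian at the MAP, and reweight. The cost function below is a stand-in for a real data misfit, not the TOUGH2 model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy 1-D posterior pi(x) proportional to exp(-F(x)); F stands in for the
# data misfit of a real inversion.
def F(x):
    return 0.5 * (x - 1.0) ** 2 + 0.1 * x ** 4

# Step 1: find the MAP point (minimum of F).
mu = minimize_scalar(F).x

# Step 2: linear map x = mu + sigma * xi, with sigma from the Hessian at MAP.
h = 1e-4
hess = (F(mu + h) - 2 * F(mu) + F(mu - h)) / h ** 2
sigma = 1.0 / np.sqrt(hess)

# Step 3: draw reference samples and reweight. Samples concentrate in the
# high-probability region, so few evaluations of F are wasted.
rng = np.random.default_rng(1)
xi = rng.standard_normal(50_000)
x = mu + sigma * xi
logw = -F(x) + 0.5 * xi ** 2          # self-normalised importance weights
w = np.exp(logw - logw.max())
w /= w.sum()
is_mean = np.sum(w * x)

# Brute-force quadrature reference for comparison.
g = np.linspace(-5.0, 6.0, 4001)
p = np.exp(-F(g))
ref_mean = (g * p).sum() / p.sum()
print(is_mean, ref_mean)
```

Because the proposal is matched to the posterior near the MAP, the effective sample size stays high; this is the property that lets IS get away with few forward runs.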
Maruta, Kazuki; Iwakuni, Tatsuhiko; Ohta, Atsushi; Arai, Takuto; Shirato, Yushi; Kurosaki, Satoshi; Iizuka, Masataka
2016-01-01
Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against the channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of the rigorous first eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door for practical millimeter wave communication with enhanced system capacity.
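First-eigenmode transmission for a single user reduces to the singular value decomposition of the channel matrix: precode with the dominant right singular vector and combine with the dominant left singular vector. The sketch below uses a random i.i.d. channel purely for illustration (the paper's setting is LoS-dominant with many more antennas).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical narrowband MIMO channel: 64 BS antennas, 2 UE antennas.
n_tx, n_rx = 64, 2
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# First-eigenmode transmission: precode with the top right singular
# vector, combine with the top left singular vector of the channel.
U, s, Vh = np.linalg.svd(H)
precoder = Vh[0].conj()          # transmit beamforming vector (unit norm)
combiner = U[:, 0].conj()        # receive combining vector (unit norm)

# The effective scalar channel gain equals the largest singular value.
effective_gain = np.abs(combiner @ H @ precoder)
print(effective_gain, s[0])
```

The paper's simplified scheme approximates this dominant mode from the extracted LoS component instead of computing a full SVD from instantaneous CSI.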
International Nuclear Information System (INIS)
In this report, an original probabilistic model aimed at assessing the efficiency of a particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through-wall cracks in the residual-stress-dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to modelling the uncertainties connected to the non-destructive examination technique (e.g., measurement errors, non-detection probability). First- and second-order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. The model also accounts for scatter in residual and operational stresses due to random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs. load data, the non-destructive examination records of crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL)
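The FORM calculation mentioned above can be illustrated on the simplest case, a linear limit state with independent normal resistance and load, where the reliability index is available in closed form and FORM is exact. The numbers are illustrative, not the report's data.

```python
import numpy as np
from scipy import stats

# Linear limit state g = R - S with independent normal resistance R and
# load S (illustrative values; for a linear-normal case FORM is exact).
mu_R, sd_R = 300.0, 30.0    # e.g. burst strength
mu_S, sd_S = 200.0, 20.0    # e.g. applied load

# FORM: reliability index beta = distance from the origin to the limit
# state surface in standard-normal space; Pf = Phi(-beta).
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_form = stats.norm.cdf(-beta)

# Monte Carlo cross-check.
rng = np.random.default_rng(3)
n = 2_000_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(g < 0)
print(beta, pf_form, pf_mc)
```

For nonlinear limit states (as with stress-corrosion crack growth), beta is found by constrained optimization over the limit-state surface, and SORM adds a curvature correction.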
Directory of Open Access Journals (Sweden)
Kazuki Maruta
2016-07-01
Full Text Available Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against the channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of the rigorous first eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door for practical millimeter wave communication with enhanced system capacity.
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Directory of Open Access Journals (Sweden)
Marcelo E. Cassiolato
2002-06-01
Full Text Available Water-soluble plant organic compounds have been proposed to be efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts to neutralize soil acidity. Plant samples were dried at 65ºC for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity and the volume of NaOH solution required to neutralize H+ ions of the plant extracts were correlated with the effect of plant extracts on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts to neutralize soil acidity because it is easily adapted to routine analysis and uses simple instruments and materials.
International Nuclear Information System (INIS)
A high-yielding crop like maize is very important for countries like Pakistan, where it is the third cereal crop after wheat and rice. Maize accounts for 4.8 percent of the total cropped area and 4.82 percent of the value of agricultural production. It is grown all over the country, but the major areas are Sahiwal, Okara and Faisalabad. Chiniot is one of the distinct agroecological domains of central Punjab for maize cultivation, which is why this district was selected for the study and the technical efficiency of hybrid maize farmers was estimated there. Primary data on 120 farmers, 40 from each of the three tehsils of Chiniot, were collected in 2011. The causes of some farmers obtaining lower yields than others while using the same input bundle were estimated. The managerial factors causing the inefficiency of production were also measured. The average technical efficiency was estimated to be 91 percent, while it was found to be 94.8, 92.7 and 90.8 percent for large, medium and small farmers, respectively. A stochastic frontier production model was used to measure technical efficiency. The statistical software Frontier 4.1 was used to analyse the data, because efficiency estimates are produced as direct output from the package. It was concluded that efficiency can be enhanced by addressing the inefficiency arising from environmental variables, farmers' personal characteristics and farming conditions. (author)
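A stochastic frontier production model of the kind used above can be sketched on synthetic data. The sketch below fits the classic normal/half-normal frontier by maximum likelihood and recovers farm-level technical efficiencies via the Jondrow et al. (JLMS) estimator; it is a generic illustration, not a reproduction of Frontier 4.1 or the study's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

# Synthetic data: log output = b0 + b1*log input + v - u,
# v ~ N(0, 0.2^2) noise, u ~ |N(0, 0.3^2)| one-sided inefficiency.
n = 500
x = rng.uniform(0.0, 2.0, n)
v = rng.normal(0.0, 0.2, n)
u = np.abs(rng.normal(0.0, 0.3, n))
y = 1.0 + 0.8 * x + v - u

def neg_loglik(theta):
    b0, b1, log_su, log_sv = theta
    su, sv = np.exp(log_su), np.exp(log_sv)
    sigma, lam = np.hypot(su, sv), su / sv
    e = y - b0 - b1 * x
    # Aigner-Lovell-Schmidt normal/half-normal log-likelihood
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(e / sigma)
          + norm.logcdf(-e * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[1.0, 0.5, np.log(0.2), np.log(0.2)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0, b1 = res.x[0], res.x[1]
su, sv = np.exp(res.x[2]), np.exp(res.x[3])

# JLMS point estimate of u_i, then technical efficiency TE_i = exp(-E[u|e]).
e = y - b0 - b1 * x
s2 = su ** 2 + sv ** 2
mu_star = -e * su ** 2 / s2
s_star = su * sv / np.sqrt(s2)
z = mu_star / s_star
Eu = mu_star + s_star * norm.pdf(z) / norm.cdf(z)
te = np.exp(-Eu)
print(b1, te.mean())
```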
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
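The Poisson MLE fit advocated above can be reproduced with standard least-squares machinery by minimizing the Poisson deviance, whose per-bin contributions are nonnegative and can be passed as squared residuals to SciPy's trust-region least-squares solver (a close relative of Levenberg-Marquardt). The decay model and counts below are simulated, not the paper's data.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Simulated fluorescence-decay histogram: model m(t) = A*exp(-t/tau) + B.
t = np.linspace(0.0, 10.0, 101)
A_true, tau_true, B_true = 100.0, 2.0, 5.0
y = rng.poisson(A_true * np.exp(-t / tau_true) + B_true).astype(float)

def deviance_residuals(p):
    A, tau, B = p
    m = A * np.exp(-t / tau) + B
    # Poisson deviance contribution per bin (nonnegative by convexity);
    # minimising the sum of their squares is equivalent to the Poisson MLE.
    safe_y = np.where(y > 0, y, 1.0)        # avoid log(0); term is 0 when y=0
    d = m - y + y * np.log(safe_y / m)
    return np.sqrt(2.0 * np.clip(d, 0.0, None))

fit = least_squares(deviance_residuals, x0=[80.0, 1.5, 3.0],
                    bounds=([1.0, 0.1, 0.0], [1e4, 20.0, 100.0]))
A_hat, tau_hat, B_hat = fit.x
print(A_hat, tau_hat, B_hat)
```

Unlike unweighted least squares on the counts, this fit is unbiased at low count levels, which is exactly the regime the note targets.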
Efficient estimation of the robustness region of biological models with oscillatory behavior.
Directory of Open Access Journals (Sweden)
Mochamad Apri
Full Text Available Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. In particular, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well-understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find good agreement with the existing knowledge about this model. When we apply the new method to the high-dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was previously realized.
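The simplest version of probing a robustness region, a one-dimensional scan for persistence of oscillations, can be sketched on the Rosenzweig-MacArthur model mentioned above. The parameter values are illustrative; the paper's own (more general and more efficient) algorithm is not reproduced here.

```python
import numpy as np

# Rosenzweig-MacArthur model with Holling type-II response:
#   x' = x(1 - x/K) - x*y/(1+x),   y' = 0.5*x*y/(1+x) - 0.2*y
# For these (illustrative) rates the coexistence equilibrium has x* = 2/3
# and a Hopf bifurcation occurs at K = 2*x* + 1, i.e. K about 2.33, so
# oscillations persist only for larger K.
def rhs(s, K):
    x, y = s
    fr = x / (1.0 + x)
    return np.array([x * (1.0 - x / K) - fr * y, 0.5 * fr * y - 0.2 * y])

def late_amplitude(K, t_end=500.0, dt=0.05):
    """RK4-integrate from near equilibrium; return the predator amplitude
    over the final 100 time units (near 0 means oscillations die out)."""
    xs = 2.0 / 3.0
    s = np.array([xs + 0.05, (1.0 - xs / K) * (1.0 + xs)])
    n = int(t_end / dt)
    ys = []
    for i in range(n):
        k1 = rhs(s, K); k2 = rhs(s + 0.5 * dt * k1, K)
        k3 = rhs(s + 0.5 * dt * k2, K); k4 = rhs(s + dt * k3, K)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i * dt >= t_end - 100.0:
            ys.append(s[1])
    ys = np.array(ys)
    return ys.max() - ys.min()

# Coarse 1-D scan of the oscillation boundary in K.
for K in [1.5, 2.0, 2.5, 3.0, 4.0]:
    print(K, round(late_amplitude(K), 3))
```

A full robustness-region method generalizes this scan to many directions in a high-dimensional parameter space, which is where efficiency becomes critical.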
Efficient Metamodel Estimates of Nitrate Flux to Groundwater and Tile Drains (Invited)
Nolan, B. T.; Malone, R. W.
2013-12-01
We developed efficient metamodels to extend predictions by a complex, physically based model throughout the Corn Belt, USA. Metamodels are simplified representations of more complex models and exploit relations between model outputs and inputs. They retain some of the flexibility and process capability of complex models but typically have far fewer parameters. The reduced data requirements enable application at large spatial scales through geographic information systems (GIS). We developed two metamodels to upscale predictions of nitrate concentration and flux by the USDA's Root Zone Water Quality Model (RZWQM2), which was previously applied to corn-soybean sites in Nebraska, Iowa, and Maryland to estimate unsaturated zone N mass balances. RZWQM2 is a physically based agricultural systems model that simulates N cycling processes, the transport and fate of agricultural chemicals, and crop growth. The model provides a detailed accounting of N losses, additions, and transformations in the unsaturated zone and simulates water and N fluxes to artificial drains and groundwater. Although thorough accounting by RZWQM2 of key processes can yield more accurate predictions, upscaling is difficult because of the large number of parameters (> 200) that are unknown and difficult to estimate. The metamodels consist of artificial neural networks (ANNs), which are inherently flexible and free from linearity and distributional assumptions. The ANNs were trained on RZWQM2 predictions of nitrate in leachate and tile drainage below the root zone of crops. Therefore the metamodels represent an integrated approach to vulnerability assessment: whereas nitrate leaching poses a risk to groundwater, nitrate in drainage is routed directly to streams. The final metamodels consisted of 9 predictor variables and effectively related RZWQM2 outputs to the inputs (R2=0.986 and 0.911 for nitrate concentration and flux, respectively). Organic matter was the most sensitive variable in the nitrate
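The metamodel idea above, fit a cheap statistical surrogate to the outputs of an expensive simulator and then apply it at scale, can be sketched generically. For portability this sketch uses ridge regression on random Fourier features as a flexible stand-in for the paper's artificial neural networks, and an analytic function as a stand-in for RZWQM2.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in "complex model": an expensive simulator reduced to a cheap
# analytic function of two inputs, purely for illustration.
def complex_model(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Training design: run the complex model at a limited number of points.
X_train = rng.uniform(-1, 1, (400, 2))
y_train = complex_model(X_train)

# Metamodel: ridge regression on random Fourier features (a simple,
# flexible surrogate standing in for the paper's neural networks).
n_feat = 200
W = rng.normal(0.0, 3.0, (2, n_feat))
b = rng.uniform(0.0, 2 * np.pi, n_feat)
phi = lambda X: np.cos(X @ W + b)

Z = phi(X_train)
coef = np.linalg.solve(Z.T @ Z + 1e-6 * np.eye(n_feat), Z.T @ y_train)

# The metamodel now predicts at large spatial scale for the cost of a
# dot product per location.
X_test = rng.uniform(-1, 1, (1000, 2))
y_test = complex_model(X_test)
y_pred = phi(X_test) @ coef
r2 = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(round(r2, 3))
```

As in the paper, the quality of the surrogate is summarized by how much of the complex model's output variance it reproduces (here an R-squared on held-out points).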
Estimating Forward Pricing Function: How Efficient is Indian Stock Index Futures Market?
Prasad Bhattacharaya; Harminder Singh
2006-01-01
This paper uses Indian stock futures data to explore unbiased expectations and efficient market hypothesis. Having experienced voluminous transactions within a short time span after its establishment, the Indian stock futures market provides an unparalleled case for exploring these issues involving expectation and efficiency. Besides analyzing market efficiency between cash and futures prices using cointegration and error correction frameworks, the efficiency hypothesis is also investigated a...
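The cointegration and error-correction framework mentioned above can be sketched on a simulated cash/futures pair: estimate the cointegrating coefficient by OLS (Engle-Granger step 1), then check that deviations from the long-run relation are corrected. All series here are simulated, not Indian market data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated cointegrated cash/futures pair: futures follow a random walk,
# the cash index ties to futures up to a stationary AR(1) spread.
T = 2000
f = np.cumsum(rng.normal(0.0, 1.0, T))
spread = np.zeros(T)
for t in range(1, T):
    spread[t] = 0.6 * spread[t - 1] + rng.normal(0.0, 0.5)
s = 0.95 * f + spread

# Step 1 (Engle-Granger): estimate the cointegrating coefficient by OLS.
beta = np.polyfit(f, s, 1)[0]
e = s - beta * f

# Step 2: error-correction check; regress de_t on e_{t-1}. A clearly
# negative coefficient means deviations from the long-run relation are
# corrected, consistent with long-run cash/futures market efficiency.
de = np.diff(e)
gamma = np.polyfit(e[:-1], de, 1)[0]
print(round(beta, 3), round(gamma, 3))
```

In practice the residual-stationarity step uses a formal unit-root test with Engle-Granger critical values; the regression above conveys the mechanics.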
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
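The adaptive Lasso can be sketched in the simpler cross-sectional setting: weight each coefficient's penalty by the inverse of an initial consistent estimate, so that truly zero coefficients are penalized heavily (the source of the oracle property the abstract refers to). This is a generic illustration, not the paper's VAR estimator.

```python
import numpy as np

rng = np.random.default_rng(8)

# Sparse linear model: only 3 of 10 coefficients are nonzero.
n, p = 200, 10
beta_true = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.normal(0.0, 0.5, n)

# Adaptive weights from an initial OLS fit: small initial estimates
# receive large penalties.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / np.abs(b_ols)

# Coordinate descent for 0.5*||y - Xb||^2 + lam * sum_j w_j * |b_j|.
def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

lam = 5.0
b = np.zeros(p)
z = (X ** 2).sum(axis=0)
for _ in range(200):
    for j in range(p):
        r = y - X @ b + X[:, j] * b[j]      # partial residual
        b[j] = soft(X[:, j] @ r, lam * w[j]) / z[j]
print(np.round(b, 2))
```

In the paper's setting the same penalty is applied equation by equation to VAR lag coefficients, with the group variant penalizing whole lag blocks jointly.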
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
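Approximating the posterior distribution of a student's ability without heavy machinery can be illustrated with a grid (deterministic quadrature) approximation under a Rasch model; item difficulties and the response pattern below are hypothetical, not taken from the paper's log files.

```python
import numpy as np

# Rasch (1PL) model: P(correct) = sigmoid(theta - b_item).
item_difficulty = np.array([-1.0, 0.0, 1.0, 2.0])   # hypothetical items
responses = np.array([1, 1, 0, 0])                  # logged outcomes

# Approximate the posterior of ability theta on a grid, with a N(0,1)
# prior, instead of sampling.
grid = np.linspace(-4.0, 4.0, 801)
p = 1.0 / (1.0 + np.exp(-(grid[:, None] - item_difficulty)))
loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
logpost = -0.5 * grid ** 2 + loglik
post = np.exp(logpost - logpost.max())
post /= post.sum()

# EAP ability estimate and posterior standard deviation.
eap = np.sum(grid * post)
sd = np.sqrt(np.sum((grid - eap) ** 2 * post))
print(round(eap, 2), round(sd, 2))
```

The grid evaluation is cheap enough to rerun after every logged response, which is the kind of efficiency a Web-based environment needs.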
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Efficient estimation of time-mean states of ocean models using 4D-Var and implicit time-stepping
Terwisscha van Scheltinga, A.D.; Dijkstra, H.A.
2007-01-01
We propose an efficient method for estimating a time-mean state of an ocean model subject to given observations using implicit time-stepping. The new method uses (i) an implicit implementation of the 4D-Var method to fit the model trajectory to the observations, and (ii) a preprocessor which applies
Directory of Open Access Journals (Sweden)
Roliana Ibrahim
2012-09-01
Full Text Available Development effort estimation is an undeniable part of project management and considerably influences the success of a project. Inaccurate and unreliable estimation of effort can easily lead to project failure. Accurate estimation of effort in software projects is therefore a vital management activity that must be done carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of the estimates is not satisfactory, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and there are few research works that investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate the situation of an organization in terms of effort estimation. The proposed framework includes various indicators which cover the critical issues in the field of software development effort estimation. Since the capabilities and shortcomings of organizations with respect to effort estimation are not the same, the proposed indicators can lead to a systematic approach in which the strengths and weaknesses of organizations in the field of effort estimation are discovered.
Horton, G.E.; Dubreuil, T.L.; Letcher, B.H.
2007-01-01
Our goal was to understand movement and its interaction with survival for populations of stream salmonids at long-term study sites in the northeastern United States by employing passive integrated transponder (PIT) tags and associated technology. Although our PIT tag antenna arrays spanned the stream channel (at most flows) and were continuously operated, we are aware that aspects of fish behavior, environmental characteristics, and electronic limitations influenced our ability to detect 100% of the emigration from our stream site. Therefore, we required antenna efficiency estimates to adjust observed emigration rates. We obtained such estimates by testing a full-scale physical model of our PIT tag antenna array in a laboratory setting. From the physical model, we developed a statistical model that we used to predict efficiency in the field. The factors most important for predicting efficiency were external radio frequency signal and tag type. For most sampling intervals, there was concordance between the predicted and observed efficiencies, which allowed us to estimate the true emigration rate for our field populations of tagged salmonids. One caveat is that the model's utility may depend on its ability to characterize external radio frequency signals accurately. Another important consideration is the trade-off between the volume of data necessary to model efficiency accurately and the difficulty of storing and manipulating large amounts of data.
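A statistical model predicting detection efficiency from trial conditions, of the kind the authors built from their laboratory antenna tests, can be sketched with logistic regression on synthetic trials. The predictors and effect sizes below are hypothetical (the paper identifies external RF signal and tag type as the key factors).

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic stand-in for the laboratory trials: detection probability
# falls with external RF noise and differs between two tag types.
n = 2000
rf_noise = rng.uniform(0.0, 2.0, n)
tag_type = rng.integers(0, 2, n)            # 0/1 coded tag type
logit_p = 2.0 - 1.5 * rf_noise + 1.0 * tag_type
detected = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

# Logistic regression by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), rf_noise, tag_type])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    Wd = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (X * Wd[:, None]), X.T @ (detected - p))

# Predicted antenna efficiency for given field conditions; observed
# emigration counts are then corrected as observed / efficiency.
eff = 1.0 / (1.0 + np.exp(-(np.array([1.0, 0.5, 1.0]) @ beta)))
print(np.round(beta, 2), round(eff, 3))
```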
Cerasoli, S.; Silva, J. M.; Carvalhais, N.; Correia, A.; Costa e Silva, F.; Pereira, J. S.
2013-12-01
The Light Use Efficiency (LUE) concept is usually applied to retrieve Gross Primary Productivity (GPP) estimates in models integrating spectral indices, namely the Normalized Difference Vegetation Index (NDVI) and the Photochemical Reflectance Index (PRI), considered proxies of biophysical properties of vegetation. The integration of spectral measurements into LUE models can increase the robustness of GPP estimates by optimizing particular parameters of the model. NDVI and PRI are frequently obtained by broad-band sensors on remote platforms at low spatial resolution (e.g. MODIS). In highly heterogeneous ecosystems such spectral information may not be representative of the dynamic response of the ecosystem to climate variables. In Mediterranean oak woodlands, different plant functional types (PFTs), namely the tree canopy, shrubs and the herbaceous layer, contribute to the overall Gross Primary Productivity (GPP). In situ spectral measurements can provide useful information on each PFT and its temporal variability. The objectives of this study were: i) to analyze the temporal variability of NDVI, PRI and other spectral indices for the three PFTs, their response to climate variables and their relationship with biophysical properties of vegetation; ii) to optimize a LUE model integrating selected spectral indices in which the contribution of each PFT to the overall GPP is estimated individually; iii) to compare the performance of disaggregated GPP estimates and lumped GPP estimates, evaluated against eddy covariance measurements. Ground measurements of vegetation reflectance were performed in a cork oak woodland located in Coruche, Portugal (39°8'N, 8°19'W), where carbon and water fluxes are continuously measured by eddy covariance. Between April 2011 and June 2013, reflectance measurements of the herbaceous layer, shrubs and tree canopy were acquired with a FieldSpec3 spectroradiometer (ASD Inc.), which provided data in the range of 350-2500 nm. Measurements were repeated approximately on
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do so, the score function must be estimated from samples of the observation signals (the mixture of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, indicate better performance and shorter processing time.
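The separation scheme described here (mutual-information minimization via the natural gradient) can be sketched as follows. Note the hedges: the score function below is the fixed tanh surrogate commonly used for super-Gaussian sources, not the paper's Gaussian-mixture kernel-density estimate, and two Laplace-distributed signals merely stand in for speech and music:

```python
import numpy as np

def natural_gradient_ica(X, lr=0.05, n_iter=1000):
    # Mutual-information-minimizing separation via the natural gradient:
    #   W <- W + lr * (I - E[psi(y) y^T]) W
    # psi is the score function; tanh is a fixed surrogate for the paper's
    # Gaussian-mixture kernel-density score estimate.
    n, T = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        psi = np.tanh(Y)
        W += lr * (np.eye(n) - (psi @ Y.T) / T) @ W
    return W

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 4000))          # two super-Gaussian surrogate sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix
W = natural_gradient_ica(A @ S)
P = W @ A                                # should approximate a scaled permutation
```

If separation succeeds, each row of `P = W @ A` is dominated by a single entry, i.e. the unmixed outputs recover the sources up to scale and permutation.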
Institute of Scientific and Technical Information of China (English)
FAN JianQing; ZHOU Yong; CAI JianWen; CHEN Min
2009-01-01
Multivariate failure time data arise frequently in survival analysis. A commonly used technique is the working independence estimator for marginal hazard models. Two natural questions are how to improve the efficiency of the working independence estimator and how to identify the situations under which such an estimator has high statistical efficiency. In this paper, three weighted estimators are proposed based on three different optimal criteria in terms of the asymptotic covariance of weighted estimators. Simplified closed-form solutions are found, which always outperform the working independence estimator. We also prove that the working independence estimator has high statistical efficiency when the asymptotic covariance of the derivatives of the partial log-likelihood functions is nearly exchangeable or diagonal. Simulations are conducted to compare the performance of the weighted estimators and the working independence estimator. A data set from the Busselton population health surveys is analyzed using the proposed estimators.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Updated estimation of energy efficiencies of U.S. petroleum refineries.
Energy Technology Data Exchange (ETDEWEB)
Palou-Rivera, I.; Wang, M. Q. (Energy Systems)
2010-12-08
Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires the energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of the energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of the energy use and emissions associated with petroleum refineries to the various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares the energy use and emissions of various transportation fuels, including gasoline and diesel. Energy use in petroleum refineries is a key component of the well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum-product-specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
Using Data Envelopment Analysis approach to estimate the health production efficiencies in China
Institute of Scientific and Technical Information of China (English)
ZHANG Ning; HU Angang; ZHENG Jinghai
2007-01-01
By using the Data Envelopment Analysis approach, we treat the health production system in each province as a Decision Making Unit (DMU), identify its inputs and outputs, evaluate its technical efficiency in 1982, 1990 and 2000 respectively, and further analyze the relationship between efficiency scores and social-environmental variables. This paper has several interesting findings. Firstly, the provinces on the frontier differ from year to year, but the provinces far from the frontier remain unchanged; the average efficiency of health production made significant progress from 1982 to 2000. Secondly, all provinces in China can be divided into six categories in terms of health production outcome and efficiency, and each category has a specific approach to improving health production efficiency. Thirdly, significant differences in health production efficiency were found among the eastern, middle and western regions of China, and between the eastern and middle regions. Finally, there is a significant positive relationship between population density and health production efficiency, but a negative (not very significant) relationship between the proportion of public health expenditure in total expense and efficiency; this may result from an inappropriate tendency of public expenditure. The relationship between the ability to pay for health care services and efficiency in urban areas is opposite to that in rural areas. One possible reason is the very different income and public-service treatment of rural and urban residents. Therefore, it is necessary to adjust health policies and service provision to target different population groups specifically.
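The DEA machinery this abstract relies on reduces to one small linear program per DMU. A minimal input-oriented CCR sketch (toy single-input, single-output data stand in for the provincial health data) might look like:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    # Input-oriented CCR efficiency for each DMU o:
    #   min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0
    # Decision vector: [theta, lam_1, ..., lam_n].
    m, n = X.shape   # m inputs, n DMUs
    s = Y.shape[0]   # s outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]
        A_in = np.hstack([-X[:, [o]], X])            # X lam - theta * x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: one input, one output, three "provinces"
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[2.0, 2.0, 3.0]])
scores = dea_ccr_input(X, Y)
```

DMUs on the frontier score 1; the second DMU uses twice the input per unit output of the best performers, so its input-oriented score is 0.5.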
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
Li, Ping
2008-01-01
Compressed Counting (CC) was recently proposed for approximating the $\alpha$th frequency moments of data streams, for $0 < \alpha \le 2$. Previous studies used the standard algorithm based on symmetric stable random projections to approximate the $\alpha$th frequency moments and the entropy. Based on maximally-skewed stable random projections, Compressed Counting (CC) dramatically improves on symmetric stable random projections, especially when $\alpha \approx 1$. This study applies CC to estimate the Rényi entropy, the Tsallis entropy, and the Shannon entropy. Our experiments on some Web crawl data demonstrate significant improvements over previous studies. When estimating the frequency moments, the Rényi entropy, and the Tsallis entropy, the improvements of CC, in terms of accuracy, appear to approach "infinity" as $\alpha \to 1$. When estimating Shannon entropy using Rényi entropy or Tsallis entropy, the improvements of CC, in terms of accuracy, are roughly 20- to 50-fold. When estimating the Shannon entropy fro...
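The limit this abstract exploits is that both the Rényi and the Tsallis entropy converge to the Shannon entropy as α→1, so an accurate estimate of either at α near 1 yields the Shannon entropy. A toy numerical check of that limit on a small distribution (this does not reproduce CC's sketch-based moment estimator itself):

```python
import numpy as np

def renyi_entropy(p, alpha):
    # H_alpha = log(sum_i p_i^alpha) / (1 - alpha)
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis_entropy(p, alpha):
    # S_alpha = (1 - sum_i p_i^alpha) / (alpha - 1)
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def shannon_entropy(p):
    return -np.sum(p * np.log(p))

p = np.array([0.5, 0.25, 0.125, 0.125])   # toy frequency distribution
h_shannon = shannon_entropy(p)
h_renyi = renyi_entropy(p, 1.001)         # alpha close to 1
h_tsallis = tsallis_entropy(p, 1.001)
```

At α = 1.001 both proxies already agree with the Shannon entropy to roughly four decimal places on this distribution, which is why estimator accuracy near α = 1 is the figure of merit in the abstract.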
The use of 32P and 15N to estimate fertilizer efficiency in oil palm
International Nuclear Information System (INIS)
Improving the efficiency of use of fertilizers has attracted a great deal of interest on oil-palm estates because of increasing input costs. It is assumed that higher efficiency of use of fertilizers for estate crops, including oil palm, would result in significant savings and less environmental pollution. One way to enhance the efficiency of use of fertilizers by oil palm is to apply them where the most active roots are located. Previous work has indicated the possibility of determining the most active roots of tea and cinchona by using 32P. In this experiment, 32P was again used to determine the locations of the most active roots of oil palm trees
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configuration are given.
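The core of the UML procedure is maximum-likelihood fitting of a parametric psychometric function to trial-by-trial responses. A simplified sketch (grid-search ML over threshold and slope with fixed guess and lapse rates, in place of the toolbox's adaptive stimulus placement; all parameter values are illustrative):

```python
import numpy as np

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    # Logistic psychometric function with guess rate gamma and lapse rate lam.
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-beta * (x - alpha)))

def fit_ml(x, r):
    # Grid-search maximum likelihood over threshold (alpha) and slope (beta).
    best, best_ll = (0.0, 1.0), -np.inf
    for a in np.linspace(-5, 5, 201):
        for b in np.linspace(0.2, 5, 101):
            p = np.clip(psychometric(x, a, b), 1e-9, 1 - 1e-9)
            ll = np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (a, b), ll
    return best

# Simulate 400 two-alternative trials from a known psychometric function.
rng = np.random.default_rng(1)
x = rng.uniform(-4, 4, 400)
r = (rng.uniform(size=400) < psychometric(x, alpha=1.0, beta=2.0)).astype(float)
alpha_hat, beta_hat = fit_ml(x, r)
```

The adaptive UML procedure differs in that it chooses each stimulus level to maximize the expected information about these same parameters, rather than sampling uniformly.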
Ogawa, Akira; Iwanami, Tetzuya; Shono, Hideki
1997-03-01
In order to estimate the cut-size Xc and the mechanically balanced particles in the axial flow cyclone with the slit-separation method, the tangential velocity distributions were calculated by the finite difference method. The calculated total collection efficiencies were slightly higher than the experimental results, owing to re-entrainment of the collected particles by turbulence. No effect of the slit in promoting the collection efficiency was observed.
Tetrastyryl-BODIPY-based dendritic light harvester and estimation of energy transfer efficiency.
Kostereli, Ziya; Ozdemir, Tugba; Buyukcakir, Onur; Akkaya, Engin U
2012-07-20
Versatile BODIPY dyes can be transformed into bright near-IR-emitting fluorophores by quadruple styryl substitutions. When clickable functionalities on the styryl moieties are inserted, an efficient synthesis of a light harvester is possible. In addition, clear spectral evidence is presented showing that, in dendritic light harvesters, calculations commonly based on quantum yield or emission lifetime changes of the donor are bound to yield large overestimations of energy transfer efficiency.
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Robert Aidoo; Fred Nimoh; John-Eudes Andivi Bakang; Kwasi Ohene-Yankyera; Simon Cudjoe Fialor; James Osei Mensah; Robert Clement Abaidoo
2012-01-01
The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency a...
Hutcheson, J P; Johnson, D E; Gerken, C L; Morgan, J B; Tatum, J D
1997-10-01
Six sets of four genetically identical Brangus steers (n = 24; mean BW 409 kg) were used to determine the effect of different anabolic implants on visceral organ mass, chemical body composition, estimated tissue deposition, and energetic efficiency. Steers within a clone set were randomly assigned to one of the following implant treatments: C, no implant; E, estrogenic; A, androgenic; or AE, androgenic + estrogenic. Steers were slaughtered 112 d after implanting; visceral organs were weighed and final body composition was determined by mechanical grinding and chemical analysis of the empty body. Mass of the empty gastrointestinal tract (GIT) was reduced approximately 9% (P .10) the efficiency of ME utilization. In general, estrogenic implants decreased GIT, androgenic implants increased liver, and all implants increased hide mass. Steers implanted with an AE combination had additive effects on protein deposition compared with either implant alone. The NEg requirements for body gain are estimated to be reduced 19% by estrogenic or combination implants. PMID:9331863
Directory of Open Access Journals (Sweden)
Dina Miftahutdinova
2015-02-01
Full Text Available Purpose: to estimate the efficiency of the author's training program during the preparatory period for members of the Ukrainian women's national rowing team preparing for the Olympic Games in London. Materials and Methods: 10 sportswomen of the highest qualification, members of the Ukrainian national rowing team, participated in the research. General and special physical preparedness was assessed with standard tests and a Concept-2 rowing ergometer. Results: by the end of the preparatory period, a significant improvement in the general and special physical fitness of the surveyed athletes was observed, and their deviation from model performance dropped to 5–7%. Conclusions: the results testify to the high efficiency of the author's training program for the sportswomen of the Ukrainian rowing team, who became Olympic champions in London.
Chaves, A S; Nascimento, M L; Tullio, R R; Rosa, A N; Alencar, M M; Lanna, D P
2015-10-01
The objective of this study was to examine the relationship of efficiency indices with performance, heart rate, oxygen consumption, blood parameters, and estimated heat production (EHP) in Nellore steers. Eighteen steers were individually lot-fed diets of 2.7 Mcal ME/kg DM for 84 d. Estimated heat production was determined using oxygen pulse (OP) methodology, in which heart rate (HR) was monitored for 4 consecutive days. Oxygen pulse was obtained by simultaneously measuring HR and oxygen consumption during a 10- to 15-min period. Efficiency traits studied were feed efficiency (G:F) and residual feed intake (RFI) obtained by regression of DMI in relation to ADG and midtest metabolic BW; alternatively, RFI was also obtained from the equations reported by the NRC (1996) for estimating individual requirements and DMI. The slope of the regression equation and its significance were used to evaluate the effect of the efficiency indices (RFI or G:F) on the traits studied. A mixed model was used considering RFI or G:F and pen type as fixed effects and initial age as a covariate. For the HR and EHP variables, day was included as a random effect. There was no relationship between the efficiency indices and back fat depth measured by ultrasound or daily HR and EHP (P > 0.05). Because G:F is obtained in relation to BW, the slope of G:F was positive and significant. Oxygen consumption per beat was not related to G:F; however, it was lower for RFI-efficient steers, and consequently, oxygen volume (mL·min^-1·kg^-1) and OP (μL O2·beat^-1·kg^-1) were also lower. G:F-efficient steers showed lower hematocrit and hemoglobin concentrations (P < 0.05). Differences in EHP between efficient and inefficient animals were not directly detected. Nevertheless, differences in oxygen consumption and OP were detected, indicating that the OP methodology may be useful to predict growth efficiency. PMID
Directory of Open Access Journals (Sweden)
D. O. Fuller
2012-11-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
MACHARIA, Harrison; Goos, Peter
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate, which makes it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical ...
Directory of Open Access Journals (Sweden)
Bulkina N.V.
2011-03-01
Full Text Available The article presents the results of an estimation of the efficiency of general eradication therapy and local therapy in patients with inflammatory periodontal diseases associated with chronic gastritis. The authors note a positive effect of using the "Asepta" gum balm as pathogenetic therapy, which allows normalization of the level of oral hygiene and stable remission of periodontal disease to be achieved in patients with pathology of the gastrointestinal tract.
Barr, J. G.; Engel, V.; Fuentes, J. D.; Fuller, D. O.; Kwon, H.
2012-11-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information
Aranha dos Santos, Valentin; Schmetterer, Leopold; Gröschl, Martin; Garhofer, Gerhard; Werkmeister, René M.
2016-03-01
Dry eye syndrome is a highly prevalent disease of the ocular surface characterized by an instability of the tear film. Traditional methods used for the evaluation of tear film stability are invasive or show limited repeatability. Here we propose a new noninvasive approach to measure tear film thickness using an efficient delay estimator and ultrahigh resolution spectral domain OCT. Silicon wafer phantoms with layers of known thickness and group index were used to validate the estimator-based thickness measurement. A theoretical analysis of the fundamental limit of the precision of the estimator is presented and the analytical expression of the Cramér-Rao lower bound (CRLB), which is the minimum variance that may be achieved by any unbiased estimator, is derived. The performance of the estimator against noise was investigated using simulations. We found that the proposed estimator reaches the CRLB associated with the OCT amplitude signal. The technique was applied in vivo in healthy subjects and dry eye patients. Series of tear film thickness maps were generated, allowing for the visualization of tear film dynamics. Our results show that the central tear film thickness can be precisely measured in vivo, with a coefficient of variation of about 0.65%, and that repeatable tear film dynamics can be observed. The presented method has the potential of being an alternative to breakup time (BUT) measurements and could be used in a clinical setting to study patients with dry eye disease and monitor their treatments.
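The Cramér-Rao lower bound invoked above has the standard form below; the second line is the textbook specialization to estimating the delay of a known sampled signal in additive white Gaussian noise, given here as general context rather than as the paper's exact derivation:

```latex
% General CRLB for an unbiased estimator \hat{\theta} of a scalar parameter:
\operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}
\ln p(\mathbf{y};\theta)\right].

% Delay \theta of a known sampled signal s(t-\theta) in white Gaussian noise
% of variance \sigma^2; the energy of the signal derivative sets the
% attainable precision:
\operatorname{var}(\hat{\theta}) \;\ge\;
\frac{\sigma^{2}}{\sum_{n}\left(s'(t_{n}-\theta)\right)^{2}}.
```

An estimator that attains this bound, as the abstract reports for the proposed delay estimator, is efficient in the statistical sense.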
International Nuclear Information System (INIS)
Collection of hot electrons generated by the efficient absorption of light in metallic nanostructures, in contact with semiconductor substrates can provide a basis for the construction of solar energy-conversion devices. Herein, we evaluate theoretically the energy-conversion efficiency of systems that rely on internal photoemission processes at metal-semiconductor Schottky-barrier diodes. In this theory, the current-voltage characteristics are given by the internal photoemission yield as well as by the thermionic dark current over a varied-energy barrier height. The Fowler model, in all cases, predicts solar energy-conversion efficiencies of <1% for such systems. However, relaxation of the assumptions regarding constraints on the escape cone and momentum conservation at the interface yields solar energy-conversion efficiencies as high as 1%–10%, under some assumed (albeit optimistic) operating conditions. Under these conditions, the energy-conversion efficiency is mainly limited by the thermionic dark current, the distribution of hot electron energies, and hot-electron momentum considerations
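The Fowler model referenced above predicts an internal photoemission yield that rises quadratically with photon energy above the Schottky barrier. A minimal sketch (the prefactor `c` is a device-dependent Fowler coefficient and is an assumed placeholder; energies are in eV):

```python
def fowler_yield(photon_ev, barrier_ev, c=1.0):
    # Fowler internal photoemission yield:
    #   Y = c * (hv - phi_B)^2 / hv  for photon energies hv above the
    # Schottky barrier height phi_B, and zero below it.
    if photon_ev <= barrier_ev:
        return 0.0
    return c * (photon_ev - barrier_ev) ** 2 / photon_ev
```

The quadratic onset is what limits the Fowler-model efficiencies to under 1%: most of the solar spectrum sits close enough to typical barrier heights that the yield stays small.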
I.P. van Staveren (Irene)
2009-01-01
The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela;
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within an unbiased counting frame on paraffin sections stained immunohistochemically for protein gene product 9.5; 4) electron microscopic estimation of the mean number of axon profiles contained in one nerve fiber profile; 5) estimation of the degree of tissue shrinkage of specimens in paraffin; and 6) calculation ...
Institute of Scientific and Technical Information of China (English)
QU Annie; XUE Lan
2009-01-01
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase the efficiency of estimation. However, for multivariate survival data this is extremely challenging, since the martingale residual is involved and often intractable. Fan et al. have made a significant contribution by giving a closed-form formula for the optimal weights of the estimating functions such that the asymptotic variance of the estimator is minimized. Since minimizing the variance matrix is not an easy task, several strategies are proposed, such as minimizing the total variance. The most feasible one is to use the diagonal matrix entries as the weighting scheme. We congratulate them on this important work. In the following we discuss the implementation of their method and relate our work to theirs.
Binary logistic regression to estimate household income efficiency (South Darfur rural areas, Sudan)
Directory of Open Access Journals (Sweden)
Sofian A. A. Saad
2016-03-01
Full Text Available The main objective of this study is to identify the main factors that affect the efficiency of household income in the Darfur region. The statistical technique of binary logistic regression was used to test whether five binary explanatory variables have a significant effect on the response variable (income efficiency); a sample of 136 household heads was gathered from the relevant population. The outcomes of the study showed that the level of household expenditure has a significant effect on income efficiency, and that household size also has a significant effect on the response variable. The remaining explanatory variables (household head's education level, the size of the household head's own agricultural holding, and the number of students at school) showed no significant effects.
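The binary logistic regression used in the study can be sketched end-to-end with a Newton-Raphson (iteratively reweighted least squares) fit. The two predictors below stand in for the factors the study found significant (household expenditure and household size); the data are synthetic, not the Darfur sample:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    # Newton-Raphson / IRLS for binary logistic regression.
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))     # fitted probabilities
        W = p * (1.0 - p)                        # IRLS weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        beta += np.linalg.solve(X1.T * W @ X1, X1.T @ (y - p))
    return beta

# Synthetic data: "expenditure" and "household size" drive income efficiency.
rng = np.random.default_rng(3)
expend = rng.normal(0, 1, 1000)
hsize = rng.normal(0, 1, 1000)
logit = 0.5 + 1.5 * expend - 0.8 * hsize
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
beta = fit_logistic(np.column_stack([expend, hsize]), y)
```

With a sample of 136, as in the study, the coefficient standard errors would be several times larger than in this 1,000-observation sketch, which is consistent with only the strongest predictors reaching significance.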
Energy Technology Data Exchange (ETDEWEB)
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.
2010-04-14
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 billion and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% to 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of U.S. national energy strategy and policies, assessing the effectiveness and energy-saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compare achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy
Case study in higher school: problems of application and efficiency estimation
Directory of Open Access Journals (Sweden)
Ekimova V.I.
2015-03-01
Full Text Available Case study holds a leading position in the training of specialists at higher schools in the majority of foreign countries and is regarded as the most efficient way of teaching students how to solve typical professional tasks. The article reviews the general principles of and approaches to the organization of case study sessions for students. The most interesting and informative resources, as well as the most promising formats of case studies, are presented in the article. The review compiles findings concerning the educational efficiency and developmental potential of this method. The article outlines perspectives for extending the areas of case study application in the educational process of higher schools.
Direct and efficient stereological estimation of total cell quantities using electron microscopy
DEFF Research Database (Denmark)
Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2006-01-01
and local stereological probes through arbitrarily fixed points for estimation of total quantities inside cells are presented. The quantities comprise (total) number, length, surface area, volume or 3D spatial distribution for organelles as well as total amount of gold particles, various compounds...
Jiang, George J.; Sluis, Pieter J. van der
1999-01-01
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using th
Directory of Open Access Journals (Sweden)
Toly Chen
2014-08-01
Full Text Available Cycle time management plays an important role in improving the performance of a wafer fabrication factory. It starts from the estimation of the cycle time of each job in the wafer fabrication factory. Although this topic has been widely investigated, several issues still need to be addressed, such as how to classify jobs suitable for the same estimation mechanism into the same group. In most existing methods, jobs are classified according to their attributes; however, the differences between the attributes of two jobs may not be reflected in their cycle times. The bi-objective nature of the classification and regression tree (CART) makes it especially suitable for tackling this problem. However, in CART, the cycle times of the jobs of a branch are estimated with the same value, which is far from accurate. For these reasons, this study proposes the joint use of principal component analysis (PCA), CART, and a back propagation network (BPN), in which PCA is applied to construct a series of linear combinations of the original variables to form new variables that are as unrelated to each other as possible. According to the new variables, jobs are classified using CART before estimating their cycle times with BPNs. A real case was used to evaluate the effectiveness of the proposed methodology. The experimental results supported the superiority of the proposed methodology over some existing methods. In addition, the managerial implications of the proposed methodology are discussed with an example.
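The proposed PCA-CART-BPN pipeline can be sketched in miniature: PCA to form uncorrelated variables, a single variance-minimizing split standing in for the full CART tree, and a per-leaf linear model standing in for the BPN. Everything below is synthetic and illustrative, not the paper's wafer-fab data:

```python
import numpy as np

def first_pc_scores(X):
    # Project onto the first principal component (PCA step).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

def best_split(z, y, margin=10):
    # CART-style split: threshold on z minimizing summed within-leaf variance of y.
    order = np.argsort(z)
    zs, ys = z[order], y[order]
    best_t, best_sse = zs[len(zs) // 2], np.inf
    for i in range(margin, len(zs) - margin):
        sse = ys[:i].var() * i + ys[i:].var() * (len(ys) - i)
        if sse < best_sse:
            best_t, best_sse = zs[i - 1], sse
    return best_t

def fit_line(z, y):
    # Per-leaf regressor standing in for the BPN.
    A = np.column_stack([np.ones_like(z), z])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(7)
f = rng.normal(size=500)                        # latent "job attribute" factor
X = np.column_stack([f + 0.1 * rng.normal(size=500),
                     0.8 * f + 0.1 * rng.normal(size=500),
                     rng.normal(size=(500, 3))])  # 5 observed attributes
y = np.where(f > 0, 3 + 2 * f, 1 - f) + 0.1 * rng.normal(size=500)  # "cycle time"

z = first_pc_scores(X)
t = best_split(z, y)
left = z <= t
c_l, c_r = fit_line(z[left], y[left]), fit_line(z[~left], y[~left])
pred = np.where(left, c_l[0] + c_l[1] * z, c_r[0] + c_r[1] * z)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because the regime of the synthetic cycle time changes at f = 0, the classify-then-regress pipeline fits each branch well where a single global model would not.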
Directory of Open Access Journals (Sweden)
Northcutt Sally L
2010-04-01
Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake (AFI), residual feed intake (RFI), and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
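A genomic relationship matrix of the kind used above is commonly computed with VanRaden's first method; a minimal sketch on synthetic genotypes (the 0/1/2 coding, allele-frequency centering, and scaling shown here are the standard formulation and may differ in detail from the authors' exact construction):

```python
import numpy as np

def genomic_relationship_matrix(M):
    """VanRaden-style G = Z Z' / (2 * sum p(1-p)), where M holds genotype
    counts 0/1/2 (animals x SNPs) and Z centers each SNP by its mean 2p."""
    p = M.mean(axis=0) / 2.0                          # observed allele frequencies
    Z = M - 2.0 * p                                   # center by expected genotype
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # scale to numerator-matrix units

rng = np.random.default_rng(1)
M = rng.binomial(2, 0.5, size=(10, 2500))   # 10 animals, 2,500 SNPs (synthetic, HWE)
G = genomic_relationship_matrix(M)
print(G.shape)   # (10, 10); diagonal averages near 1 for unrelated animals
```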
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images were constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms was investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
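The three-point parabola peak fit discussed above can be sketched in a few lines; the Gaussian test peak and its width are illustrative assumptions. Note the small residual pull toward the integer pixel, which is exactly the systematic (pixel-locking) bias the abstract studies:

```python
import numpy as np

def parabola_subpixel_peak(c):
    """1D three-point parabola fit around the integer-pixel correlation
    maximum; returns the peak position with sub-pixel precision."""
    i = int(np.argmax(c))
    if i == 0 or i == len(c) - 1:
        return float(i)                     # peak at the border: no fit possible
    ym, y0, yp = c[i - 1], c[i], c[i + 1]
    # Vertex of the parabola through the three samples around the maximum.
    return i + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

# A Gaussian correlation peak whose true centre is at 5.3 pixels.
x = np.arange(11)
c = np.exp(-0.5 * ((x - 5.3) / 1.5) ** 2)
print(parabola_subpixel_peak(c))   # close to 5.3, pulled slightly toward pixel 5
```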
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G.; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
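As a one-dimensional toy illustration of a truncation-error-based refinement criterion: for second-order central differencing of a diffusion term, the leading truncation error is tau ≈ (h²/12)·u'''', so cells where this estimate is large (here, above 10% of its maximum, an arbitrary assumed threshold) are flagged for subdivision:

```python
import numpy as np

def truncation_error_estimate(u, h):
    """Leading-order truncation error of the second-order central scheme
    for u'': tau ~ (h^2/12) * u'''' (interior cells only)."""
    d4 = (u[:-4] - 4*u[1:-3] + 6*u[2:-2] - 4*u[3:-1] + u[4:]) / h**4
    return (h**2 / 12.0) * d4

h = 0.01
x = np.arange(0.0, 1.0, h)
u = np.tanh(20 * (x - 0.5))          # solution with a sharp interior layer
tau = np.abs(truncation_error_estimate(u, h))
refine = tau > 0.1 * tau.max()       # refinement criterion
print(refine.sum(), "of", len(tau), "cells flagged, clustered near the layer")
```

The flagged cells concentrate around x = 0.5, where the solution varies rapidly, which is the intended behaviour of such a criterion.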
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin
2013-01-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may improve the convergence of expectation values of equilibrium probabilities and expectation values of stationary quantities significantly. Unfortunately the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo...
Park, Timothy A.; Loomis, John B.
1992-01-01
This paper empirically tested the three conditions identified by McConnell for equivalence of the linear utility difference model and the valuation function approach to dichotomous choice contingent valuation. Using a contingent valuation survey for deer hunting in California, two of the three conditions were violated. Even though the models are not simple linear transforms of each other for this survey, estimates of mean willingness to pay and their associated 95% confidence intervals around...
Popkov V.M.; Fomkin R.N.; Blyumberg B.I.
2013-01-01
Research objective: To study the role of prognostic factors in estimating the risk of recurrent prostate cancer after treatment with high-intensity focused ultrasound (HIFU). Objects and Research Methods: The study included 102 patients with localized prostate cancer, morphologically verified by biopsy, treated in the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by ...
Tamošiūnas, M.; Jakovels, D.; Lihačovs, A.; Kilikevičius, A.; Baltušnikas, J.; Kadikis, R.; Šatkauskas, S.
2014-10-01
Electroporation and ultrasound-induced sonoporation have been shown to induce plasmid DNA transfection in mouse tibialis cranialis muscle. This offers new prospects for gene therapy and cancer treatment. However, numerous experimental data are still needed to provide a plausible explanation of the mechanisms governing DNA electro- or sono-transfection, as well as to update transfection protocols for increased transfection efficiency. In this study we aimed to apply non-invasive optical diagnostic methods for real-time evaluation of GFP transfection levels at reduced cost in experimental apparatus and animal consumption. Our experimental set-up allowed monitoring of GFP levels in live mouse tibialis cranialis muscle and provided the parameters for determining DNA transfection efficiency.
Estimation of Power Efficiency of Combined Heat Pumping Stations in Heat Power Supply Systems
I. I. Matsko
2010-01-01
The paper considers how the advantages of heat pumping technologies can be realized in heat power generation for heat supply needs by combining electrically driven heat pumping units with water-heating boilers in a combined heat pumping station. The possibility of saving non-renewable energy resources by using combined heat pumping stations instead of water-heating boiler houses is shown in the paper. The calculation methodology for the power efficiency of introducing combine...
Massive MIMO Systems With Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits
Bjornson, Emil; Hoydis, Jakob; Kountouris, Marios; Debbah, Merouane
2014-01-01
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little inter-user interference. Since these results rely on asymptotics, it is...
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2016-08-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
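For reference, the constant-property model's maximum efficiency has the familiar closed form eta = (dT/T_h)·(sqrt(1+ZT) − 1)/(sqrt(1+ZT) + T_c/T_h), with ZT evaluated at the average temperature. A small sketch (the 600 K / 300 K gradient and ZT = 1 are illustrative values, not taken from the paper):

```python
import math

def eta_constant_property(zt_avg, t_hot, t_cold):
    """Maximum thermoelectric conversion efficiency in the constant-property
    model, with ZT evaluated at the average temperature (t_hot + t_cold)/2."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt_avg)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Example: ZT = 1 at the mean of a 300 K / 600 K temperature difference.
print(round(eta_constant_property(1.0, 600.0, 300.0), 4))  # → 0.1082
```

The efficiency is always a fraction of the Carnot limit (here 0.5), growing toward it only as ZT becomes very large.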
Thornburg, Jonathan
2010-01-01
If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...
Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin R.; Henderson, Stephen M.
2016-09-01
Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m² quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing in greater detail the 3D structure, photogrammetry has potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.
International Nuclear Information System (INIS)
This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in "best practice" for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some "catching up" of poor performing plants over time.
Highlights:
• A non-public dataset of U.S. auto manufacturing plants is compiled.
• A stochastic frontier with a gamma distribution is applied to plant level data.
• Electricity and fuel use are modeled separately.
• Comparison to prior analysis reveals a shift in the frontier and "catching up".
• Results are used by ENERGY STAR to award energy efficiency plant certifications
Efficient focusing scheme for transverse velocity estimation using cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
of the flow. Here a weakly focused transmit field was used along with a simple delay-sum beamformer. A modified method for performing the focusing by employing a special calculation of the delays is introduced, so that a focused emission can be used. The velocity estimation was studied through extensive...... simulations with Field II. A 64-element, 5 MHz linear array was used. A parabolic velocity profile with a peak velocity of 0.5 m/s was considered for different angles between the flow and the ultrasound beam and for different transmit foci. At 60 degrees the relative standard deviation was 0.58 % for a transmit...
Absolute efficiency estimation of photon-number-resolving detectors using twin beams
Worsley, A P; Lundeen, J S; Mosley, P J; Smith, B J; Puentes, G; Thomas-Peter, N; Walmsley, I A; 10.1364/OE.17.004397
2009-01-01
A nonclassical light source is used to demonstrate experimentally the absolute efficiency calibration of a photon-number-resolving detector. The photon-pair detector calibration method developed by Klyshko for single-photon detectors is generalized to take advantage of the higher dynamic range and additional information provided by photon-number-resolving detectors. This enables the use of brighter twin-beam sources including amplified pulse pumped sources, which increases the relevant signal and provides measurement redundancy, making the calibration more robust.
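The Klyshko method underlying this calibration estimates each arm's efficiency as coincidences divided by the other arm's singles. A minimal Monte Carlo sketch under idealized assumptions (Poissonian pair production, independent losses, no background or dark counts, and coincidences drawn independently for simplicity):

```python
import numpy as np

rng = np.random.default_rng(7)
n_pairs = rng.poisson(50_000)           # photon pairs produced by the twin-beam source
eta_signal, eta_idler = 0.6, 0.45       # true (unknown) detector efficiencies

s = rng.binomial(n_pairs, eta_signal)               # signal-arm singles
i = rng.binomial(n_pairs, eta_idler)                # idler-arm singles
c = rng.binomial(n_pairs, eta_signal * eta_idler)   # coincidences (approximation:
                                                    # drawn independently of s and i)

# Klyshko: each arm's efficiency = coincidences / the OTHER arm's singles.
print("signal efficiency ~", c / i)   # estimates eta_signal
print("idler  efficiency ~", c / s)   # estimates eta_idler
```

Because pairs are born together, detecting an idler photon heralds a signal photon, so the conditional detection ratio directly yields the absolute efficiency without any calibrated reference.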
Kato, M.; Hachisu, I.
1999-01-01
We have calculated the mass accumulation efficiency during helium shell flashes to examine whether or not a carbon-oxygen white dwarf (C+O WD) grows up to the Chandrasekhar mass limit to ignite a Type Ia supernova explosion. It has been frequently argued that luminous super-soft X-ray sources and symbiotic stars are progenitors of SNe Ia. In such systems, a C+O WD accretes hydrogen-rich matter from a companion and burns hydrogen steadily on its surface. The WD develops a helium layer undernea...
Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong
2015-12-01
A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB.
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin; Noé, Frank
2013-04-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may improve the convergence of expectation values of equilibrium probabilities and expectation values of stationary quantities significantly. Unfortunately the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo sampling method for reversible transition matrices with fixed stationary distribution are given. The sampling approach is applied to a toy example as well as to simulations of the MR121-GSGS-W peptide, and is demonstrated to converge much more rapidly than a previous approach of Noé [J. Chem. Phys. 128, 244103 (2008), 10.1063/1.2916718].
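The constraint at the heart of this estimation problem is detailed balance with respect to a prescribed stationary distribution pi. As a minimal illustration (a Metropolis-type construction, not the authors' maximum likelihood or Monte Carlo estimators), one can build a reversible transition matrix with a given stationary distribution from any proposal matrix:

```python
import numpy as np

def reversible_from_stationary(K, pi):
    """Metropolis construction: accept proposal i->j with probability
    min(1, pi_j K_ji / (pi_i K_ij)), which enforces detailed balance
    pi_i T_ij = pi_j T_ji, so pi is stationary by construction."""
    n = len(pi)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and K[i, j] > 0:
                T[i, j] = K[i, j] * min(1.0, pi[j] * K[j, i] / (pi[i] * K[i, j]))
        T[i, i] = 1.0 - T[i].sum()      # rejected proposals stay in state i
    return T

pi = np.array([0.5, 0.3, 0.2])
K = np.full((3, 3), 1.0 / 3.0)          # uniform proposal matrix
T = reversible_from_stationary(K, pi)
print(np.allclose(pi @ T, pi))   # True: pi is the stationary distribution
```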
Protein partition coefficients can be estimated efficiently by hybrid shortcut calculations.
Kress, Christian; Sadowski, Gabriele; Brandenbusch, Christoph
2016-09-10
The extraction of therapeutic proteins such as monoclonal antibodies in aqueous two-phase systems (ATPS) is a suitable alternative to common cost-intensive chromatographic purification steps within downstream processing. The protein partitioning can be selectively changed using a displacement agent (an additional salt) to allow successful purification of the target protein. Within this work a new shortcut strategy for the calculation of protein partition coefficients in polymer-salt ATPS is presented. The required protein-solute (phase-forming component, displacement agent) interactions are covered by the cross virial coefficient B23, measured by composition-gradient multi-angle light scattering (CG-MALS). This shortcut calculation allows an efficient determination of the partition coefficients of the target protein immunoglobulin G (IgG) and the impurity human serum albumin (HSA) within PEG-citrate and PEG-phosphate ATPS, independently of the protein concentration. We demonstrate that the selection of a displacement agent allowing a selective purification of IgG from HSA is accessible via B23. Based on the determination of the protein-protein interactions via CG-MALS, covered by the second osmotic virial coefficient B22, a further optimization of ATPS preventing protein precipitation is enabled. The results show that our approach contributes to an efficient downstream processing development. PMID:27388598
Directory of Open Access Journals (Sweden)
V. Yadav
2012-10-01
Full Text Available Addressing a variety of questions within Earth science disciplines entails the inference of the spatio-temporal distribution of parameters of interest based on observations of related quantities. Such estimation problems often represent inverse problems that are formulated as linear optimization problems. Computational limitations arise when the number of observations and/or the size of the discretized state space become large, especially if the inverse problem is formulated in a probabilistic framework and therefore aims to assess the uncertainty associated with the estimates. This work proposes two approaches to lower the computational costs and memory requirements for large linear space-time inverse problems, taking the Bayesian approach for estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example. The first algorithm can be used to efficiently multiply two matrices, as long as one can be expressed as a Kronecker product of two smaller matrices, a condition that is typical when multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems. The second algorithm can be used to compute a posteriori uncertainties directly at aggregated spatio-temporal scales, which are the scales of most interest in many inverse problems. Both algorithms have significantly lower memory requirements and computational complexity relative to direct computation of the same quantities (O(n^{2.5}) vs. O(n^{3})). For an examined benchmark problem, the two algorithms yielded a three and six order of magnitude increase in computational efficiency, respectively, relative to direct computation of the same quantities. Sample computer code is provided for assessing the computational and memory efficiency of the proposed algorithms for matrices of different dimensions.
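The first algorithm rests on the standard Kronecker identity (A ⊗ B) vec(X) = vec(A X Bᵀ) (row-major vec), which avoids ever forming the large product matrix; a sketch:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A ⊗ B) @ x without forming the Kronecker product.
    A is (m, n), B is (p, q), x has length n*q.  Cost drops from
    O(mp * nq) flops and O(mp * nq) memory to O(nq(m + p)) and O(nq)."""
    n, q = A.shape[1], B.shape[1]
    X = x.reshape(n, q)                  # row-major "unvec"
    return (A @ X @ B.T).reshape(-1)     # identity: (A ⊗ B) vec(X) = vec(A X B^T)

rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 5)), rng.normal(size=(6, 7))
x = rng.normal(size=5 * 7)
print(np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x))  # True
```

In the inverse-problem setting, A and B would be the temporal and spatial factors of a separable covariance matrix.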
An efficient Bandwidth Demand Estimation for Delay Reduction in IEEE 802.16j MMR WiMAX Networks
Directory of Open Access Journals (Sweden)
Fath Elrahman Ismael
2010-01-01
Full Text Available IEEE 802.16j MMR WiMAX networks allow the number of hops between the user and the MMR-BS to be more than two hops. The standard bandwidth request procedure in WiMAX networks introduces much delay to the user data and acknowledgement of the TCP packet, which affects the performance and throughput of the network. In this paper, we propose a new scheduling scheme to reduce the bandwidth request delay in MMR networks. In this scheme, the MMR-BS allocates bandwidth to its direct subordinate RSs without bandwidth request, using a Grey prediction algorithm to estimate the required bandwidth of each of its subordinate RSs. Using this architecture, the access RS can allocate its subordinate MSs the required bandwidth without notification to the MMR-BS. Our scheduling architecture with efficient bandwidth demand estimation is able to reduce delay significantly.
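Grey prediction of this kind is typically the GM(1,1) model: accumulate the series, fit a first-order grey differential equation by least squares, and de-accumulate the forecast. A minimal sketch on a made-up demand series (the values and units are illustrative assumptions, not from the paper):

```python
import numpy as np

def gm11_next(x0):
    """One-step-ahead forecast with the GM(1,1) grey model, a standard
    small-sample predictor (here: recent bandwidth demands in Mbps)."""
    x1 = np.cumsum(x0)                            # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # fit x0[k] = -a z1[k] + b
    x1_hat = lambda t: (x0[0] - b / a) * np.exp(-a * t) + b / a
    k = len(x0)
    return x1_hat(k) - x1_hat(k - 1)              # de-accumulate the forecast

demand = np.array([10.0, 11.0, 12.1, 13.3, 14.6])    # ~10% growth per slot
print(round(gm11_next(demand), 2))                   # next-slot estimate, ~16
```

This is how the MMR-BS could pre-allocate bandwidth to a subordinate RS from its recent demand history, avoiding a per-slot request exchange.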
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows both uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed which is based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert
Institute of Scientific and Technical Information of China (English)
Akira OGAWA; Tetzuya IWANAMI; et al.
1997-01-01
In order to estimate the cut-size Xc and the mechanically balanced particles in the axial-flow cyclone with the slit-separation method, the tangential velocity distributions were calculated by the finite difference method. In comparison with the experimental results, the calculated total collection efficiencies were slightly higher, due to re-entrainment of the collected particles by turbulence. The effect of the slit in promoting the collection efficiency was not recognized.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
This paper describes an efficient, low-latency systolic array architecture for full searches in block-matching motion estimation. Conventional one-dimensional systolic array architecture is used to develop a novel ring-like systolic array architecture through operator rescheduling, considering the symmetry of the data flow. High-latency delay due to stuffing of the array pipeline in the conventional architecture was eliminated. The new architecture delivers a higher throughput rate, achieves higher processor utilization, and has low power consumption. In addition, the minimum memory bandwidth of the conventional architecture is preserved.
Directory of Open Access Journals (Sweden)
Latyshev N.V.
2012-03-01
Full Text Available Purpose of the work: to experimentally verify the efficiency of a method for developing the special endurance of athletes using control-trainer devices. 24 athletes aged 16-17 years took part in the experiment. Significant differences were found between the groups of athletes on indices in tests of special physical preparation (heat round hands and passage-way in feet), in a test of special endurance (on all test indices except the number of exercises executed in the first period), and during work on the control-trainer device (work on a trainer for 60 seconds and work on a trainer 3×120 seconds).
Directory of Open Access Journals (Sweden)
Korniyenko S.V.
2011-12-01
Full Text Available One of the priority directions in modern building is ensuring the energy efficiency of buildings and structures. This problem can be addressed by improving architectural, structural, and technical decisions. Of particular interest is estimating the influence of the temperature and moisture regime of enclosing structures on the thermal performance and energy efficiency of buildings. The analysis of the data available in the literature has shown an absence of effective methods for calculating the temperature and moisture regime in edge zones of enclosing structures, which complicates the solution of this problem. The purpose of the given work is an estimation of the influence of edge zones on the thermal performance and energy efficiency of buildings. A procedure for calculating the energy parameters of a building for the heating period, realized in a computer program, is developed. The given technique allows the calculation of energy consumption for heating, hot water supply, and electricity supply. Energy consumption for heating includes conduction heat losses through the building envelope taking into account edge zones, ventilation heat losses and air leakage (infiltration), internal household heat emissions, and heat gains from solar radiation. In an example it is shown that accounting for edge zones raises conduction heat losses through the building envelope by 37 %, the expenditure of thermal energy on building heating by 32 %, and the expenditure of thermal and electric energy by 13 %. Consequently, the temperature and moisture regime in edge zones of enclosing structures has an essential impact on building power consumption. Improvement of the structural decision leads to a decrease of transmission heat losses through the building envelope by 29 %, the expenditure of thermal energy on building heating by 25 %, and the expenditure of thermal and electric energy by 10 %. Thus, improvement of edge zones of enclosing structures has a high potential for energy efficiency.
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. Low-rank filters are demonstrated to reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.
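The cost argument above can be made concrete with a toy sketch. The following Python fragment is illustrative only: the dimensions and values are made up, and the plain rank-r factor update shown here is a generic simplification, not the SEKF/SFKF algorithms themselves. It shows how a Kalman analysis step can carry the covariance as an n×r factor L (with P ≈ L Lᵀ) instead of the full n×n matrix:

```python
# Hedged sketch: Kalman analysis step with a low-rank covariance factor.
# All dimensions and values below are illustrative assumptions.
import numpy as np

def low_rank_kf_update(x, L, H, R, y):
    """Analysis step with covariance factor L (n x r), observation y = H x + noise."""
    S = H @ L @ L.T @ H.T + R                 # innovation covariance (m x m)
    K = L @ L.T @ H.T @ np.linalg.inv(S)      # Kalman gain built from the factor
    x_a = x + K @ (y - H @ x)                 # updated state estimate
    L_a = (np.eye(len(x)) - K @ H) @ L        # update the n x r factor, not P itself
    return x_a, L_a

rng = np.random.default_rng(0)
n, r, m = 50, 5, 10                           # state dim, rank, obs dim (toy values)
x = np.zeros(n)
L = rng.standard_normal((n, r)) * 0.1
H = rng.standard_normal((m, n))
R = np.eye(m)
y = rng.standard_normal(m)
x_a, L_a = low_rank_kf_update(x, L, H, R, y)
print(x_a.shape, L_a.shape)                   # state stays n-dim, factor stays n x r
```

The point of the sketch is that the filter state carried between steps is only n·r numbers rather than n², which is where the reported order-of-magnitude savings come from.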
Efficient architecture for global elimination algorithm for H.264 motion estimation
Indian Academy of Sciences (India)
P Muralidhar; C B Ramarao
2016-01-01
This paper presents a fast block-matching motion estimation algorithm and its architecture. The proposed architecture is based on the Global Elimination (GE) algorithm, which uses pixel averaging to reduce the complexity of the motion search while keeping performance close to that of full search. GE uses a preprocessing stage that skips unnecessary Sum of Absolute Differences (SAD) calculations by comparing the minimum SAD with a sub-sampled SAD (SSAD). In the second stage, SAD is computed only at roughly matched candidate positions. The GE algorithm uses fixed sub-block sizes and shapes to compute SSAD values in the preprocessing stage. Its complexity is further reduced by adaptively changing the sub-block sizes depending on macroblock features. In this paper, an adaptive Global Elimination algorithm has been implemented that reduces the computational complexity of motion estimation and thus results in low power dissipation. The proposed architecture achieves 60% fewer computations than an existing full-search architecture and 50% higher throughput than an existing fixed Global Elimination architecture.
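The two-stage GE idea can be sketched in software as follows. This is a hedged illustration: the block size, search range, sub-sampling factor and number of retained candidates are hypothetical choices, not the paper's architecture; only the rank-by-SSAD-then-refine-by-SAD structure follows the description above.

```python
# Hedged sketch of two-stage Global Elimination block matching.
# Block size, search range and candidate count are illustrative assumptions.
import numpy as np

def ssad(block_a, block_b, sub=4):
    # Stage 1 surrogate: SAD computed on 4x4-averaged sub-blocks (SSAD)
    a = block_a.reshape(block_a.shape[0] // sub, sub, -1, sub).mean(axis=(1, 3))
    b = block_b.reshape(block_b.shape[0] // sub, sub, -1, sub).mean(axis=(1, 3))
    return np.abs(a - b).sum()

def ge_search(cur, ref, bx, by, bs=16, srange=8, keep=4):
    """Rank candidates by cheap SSAD, then compute exact SAD only on the top few."""
    block = cur[by:by + bs, bx:bx + bs]
    cands = []
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and y + bs <= ref.shape[0] and 0 <= x and x + bs <= ref.shape[1]:
                cands.append((ssad(block, ref[y:y + bs, x:x + bs]), dx, dy))
    cands.sort()
    # Stage 2: exact SAD only at the roughly matched positions
    best = min(cands[:keep],
               key=lambda c: np.abs(block - ref[by + c[2]:by + c[2] + bs,
                                                bx + c[1]:bx + c[1] + bs]).sum())
    return best[1], best[2]                    # motion vector (dx, dy)
```

With a pure translation between frames, the search recovers the shift while evaluating full SAD at only `keep` positions instead of the whole window.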
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Directory of Open Access Journals (Sweden)
Jeyarajan Thiyagalingam
2011-01-01
Full Text Available Images are ubiquitous in biomedical applications, from basic research to clinical practice. With the rapid increase in resolution and dimensionality of the images, and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time performance. We also show how architectural peculiarities of these devices can best be exploited to the benefit of algorithms, specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.
Institute of Scientific and Technical Information of China (English)
F.Y.Wu; Y.H.Zhou; F.Tong; R.Kastner
2013-01-01
Underwater acoustic channels are recognized as one of the most difficult propagation media, owing to considerable impairments such as multipath, ambient noise, and time-frequency selective fading. Exploiting the sparsity of underwater acoustic channels offers a potential way to improve the performance of underwater acoustic channel estimation. Compared with the classic l0- and l1-norm constraint LMS algorithms, the p-norm-like (lp) constraint LMS algorithm proposed in our previous investigation exhibits better sparsity-exploitation performance in the presence of channel variations, as tuning the parameter p enables adaptation to the degree of sparseness. However, the decimal exponential calculation associated with the p-norm-like constraint LMS algorithm poses considerable limitations in practical applications. In this paper, a simplified variant of the p-norm-like constraint LMS algorithm is proposed that employs the Newton iteration method to approximate the decimal exponential calculation. Numerical simulations and experimental results obtained in physical shallow-water channels demonstrate the effectiveness of the proposed method compared with traditional norm-constraint LMS algorithms.
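A minimal sketch of the idea follows. The step sizes, the zero-attraction form and the p = 0.5 special case are illustrative assumptions, not the paper's exact formulation; the point is that for p = 0.5 the decimal power in the sparsity term reduces to a square root, which a few Newton iterations approximate cheaply, in the spirit of the simplification described above.

```python
# Hedged sketch of a p-norm-like (lp) constraint LMS update with a
# Newton-iteration shortcut for the decimal power. Parameter values are
# illustrative assumptions.
import numpy as np

def newton_sqrt(a, iters=4):
    # Newton iteration for sqrt(a), avoiding a general decimal-power routine
    # (the motivation named above) when p - 1 = -0.5, i.e. p = 0.5.
    y = np.where(a > 1.0, a, 1.0)              # crude positive initial guess
    for _ in range(iters):
        y = 0.5 * (y + a / np.maximum(y, 1e-12))
    return y

def lp_lms_step(w, x, d, mu=0.01, gamma=5e-4, eps=1e-2, p=0.5):
    e = d - w @ x                              # a-priori estimation error
    # Sparsity-promoting term ~ p * sign(w) / (eps + |w|)^(1 - p); with p = 0.5
    # the decimal power is just a square root, computed by Newton iteration.
    attract = p * np.sign(w) / newton_sqrt(eps + np.abs(w))
    return w + mu * e * x - gamma * attract, e

# Toy sparse-channel identification run (synthetic, not the sea-trial data):
rng = np.random.default_rng(0)
w_true = np.zeros(32)
w_true[[3, 11, 27]] = [1.0, -0.7, 0.4]         # sparse channel taps
w = np.zeros(32)
for _ in range(4000):
    x = rng.standard_normal(32)
    w, _ = lp_lms_step(w, x, w_true @ x + 0.01 * rng.standard_normal())
```

After the run, `w` should track the sparse taps while the attraction term holds the inactive coefficients near zero.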
Kang, Seungha; Denman, Stuart E; Morrison, Mark; Yu, Zhongtang; McSweeney, Chris S
2009-05-01
An extraction method was developed to recover high-quality RNA from rumen digesta and mouse feces for phylogenetic analysis of metabolically active members of the gut microbial community. Four extraction methods were tested on different amounts of the same samples and compared for efficiency of recovery and purity of RNA. Trizol extraction after bead beating produced a higher quantity and quality of RNA than a similar method using phenol/chloroform. Dissociation solution produced a 1.5- to 2-fold increase in RNA recovery compared with phosphate-buffered saline during the dissociation of microorganisms from rumen digesta or fecal particles. The identity of metabolically active bacteria in the samples was analyzed by sequencing 87 amplicons produced using bacteria-specific 16S rDNA primers, with cDNA synthesized from the extracted RNA as the template. Amplicons representing the major phyla encountered in the rumen (Firmicutes, 43.7%; Proteobacteria, 28.7%; Bacteroidetes, 25.3%; Spirochea, 1.1%, and Synergistes, 1.1%) were recovered, showing that development of the RNA extraction method enables RNA-based analysis of metabolically active bacterial groups from the rumen and other environments. Interestingly, in rumen samples, about 30% of the sequenced random 16S rRNA amplicons were related to the Proteobacteria, providing the first evidence that this group may have greater importance in rumen metabolism than previously attributed by DNA-based analysis.
Directory of Open Access Journals (Sweden)
Popkov V.M.
2013-03-01
Full Text Available Research objective: to study the role of prognostic factors in estimating the risk of recurrent prostate cancer after treatment with high-intensity focused ultrasound (HIFU). Materials and methods: the study included 102 patients with biopsy-confirmed localized prostate cancer treated in the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by HIFU were performed. The overall group of patients (n=102) was randomly divided into two samples: patients with no recurrent tumor and patients with a recurrent tumor revealed by morphological examination of biopsy material from residual prostate tissue after HIFU. A computer program was used to study predictors of outcome in patients with prostate cancer. Results: the risk of recurrent prostate cancer grew with increasing PSA level and PSA density. An index of positive biopsy cores <0.2 was associated with recurrence in 17% of cases, whereas an index of 0.5 and higher was associated with recurrence in 59% of cases. A tendency toward a marked growth in the number of relapses was revealed with increasing Gleason score in the presence of perineural invasion. Recurrent prostate cancer was predominant in patients with lymphovascular invasion. In conclusion, the main predictors of recurrent prostate cancer development include PSA, PSA density, Gleason score, and lymphovascular invasion.
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and to assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in parameter estimation: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration that specifically addresses these challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates an efficient surrogate multi-objective algorithm, GOMORS, to identify numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework serves as a decision support tool for choosing a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identifying a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate it into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters responsible for electrode degradation are identified and estimated, based on battery data obtained from charge cycles. The Bayesian approach, with parameters estimated as probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations of solving a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, a conservative bound on capacity fade can be determined for the vehicle in service, representing a safety margin that reflects the uncertainty.
Qiu, Bingwen; Feng, Min; Tang, Zhenghong
2016-05-01
This study proposed a simple smoother based on the Continuous Wavelet Transform without any local adjustments (SCWT), and evaluated its performance together with other commonly applied techniques in phenological estimation. These noise-reduction methods included the Savitzky-Golay filter (SG), the Double Logistic function (DL), the Asymmetric Gaussian function (AG), the Whittaker Smoother (WS) and Harmonic Analysis of Time Series (HANTS). They were evaluated in terms of fidelity and smoothness, and in terms of their efficiency in deriving phenological parameters through the inflexion-point-based method, using the 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) over China in 2013. The following conclusions were drawn: (1) the SG method exhibited strong fidelity but weak smoothness and spatial continuity; (2) the HANTS method had very robust smoothness but weak fidelity; (3) the AG and DL methods performed weakly for vegetation with more than one growth cycle (i.e., multiple crops); (4) the WS and SCWT smoothers outperformed the others when fidelity and smoothness were considered jointly, and yielded consistent phenological patterns (correlation coefficients greater than 0.8, except for evergreen broadleaf forests (0.68)); (5) compared with the WS method, the SCWT smoother was capable of preserving real local minima and maxima with fewer inflexions; (6) large discrepancies were found in the phenological dates estimated with the SG and HANTS methods, particularly in evergreen forests and multiple-cropping regions (absolute mean deviation rates of 6.2-17.5 days and correlation coefficients less than 0.34 for estimated start dates).
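The SG method compared above can be sketched directly as sliding-window least-squares polynomial fits. The window length and polynomial order here are illustrative choices, not the study's settings, and the synthetic series merely mimics one seasonal vegetation-index cycle.

```python
# Hedged sketch of Savitzky-Golay smoothing via windowed polynomial fits.
# Window/order values are illustrative, not the study's settings.
import numpy as np

def savitzky_golay(y, window=7, order=2):
    half = window // 2
    ypad = np.pad(y, half, mode="edge")        # simple edge handling
    x = np.arange(-half, half + 1)
    out = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        # Fit a low-order polynomial to the window and keep its center value:
        # high fidelity to local extrema, limited smoothness (as noted above).
        c = np.polyfit(x, ypad[i:i + window], order)
        out[i] = np.polyval(c, 0)
    return out

t = np.linspace(0, 2 * np.pi, 46)              # one synthetic EVI2-like cycle
noisy = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
smooth = savitzky_golay(noisy)
```

The smoothed series should sit closer to the underlying signal than the noisy input while still following local peaks, which is exactly the fidelity/smoothness trade-off the study quantifies.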
International Nuclear Information System (INIS)
A method is proposed for estimating the potential efficiency achievable in an initially unbalanced multijunction solar cell through the mutual convergence of photogenerated currents: extracting current from a relatively narrow-band-gap cell and adding it to a relatively wide-gap cell. It is already known that the properties facilitating such convergence are inherent to objects such as bound excitons, quantum dots, donor-acceptor pairs, and others located in relatively wide-gap cells. In essence, the proposed method reduces to obtaining the required light current-voltage (I–V) characteristic that corresponds to the equality of all photogenerated short-circuit currents. Two methods for obtaining the required light I–V characteristic are used. The first is selection of the spectral composition of the radiation incident on the multijunction solar cell from an illuminator. The second is a double shift of the dark I–V characteristic: a current shift Jg (the common preset photogenerated current) and a voltage shift (−JgRs), where Rs is the series resistance. For the light and dark I–V characteristics, a general analytical expression is derived that accounts for the effect of so-called luminescence coupling in multijunction solar cells. The experimental I–V characteristics are compared with calculated ones for a three-junction InGaP/GaAs/Ge solar cell with Rs = 0.019 Ω cm2 and a maximum actual efficiency of 36.9%. Its maximum potential efficiency is estimated as 41.2%.
Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente
2003-11-01
Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected by shrinkage or expansion of the tissue. The optical fractionator technique is very efficient but requires relatively thick sections (e.g., ≥20 µm after coverslipping) and the unequivocal identification of labeled cells throughout the section thickness. We have adapted our protocol for Mac-1 immunohistochemical visualization of microglial cells in thick (70 µm) vibratome sections for stereological counting within the murine hippocampus, and we have compared the staining results with other selective microglial markers: the histochemical demonstration of nucleotide diphosphatase (NDPase) activity and tomato lectin histochemistry. The protocol gives sections of high quality with a final mean section thickness of >20 µm (h = 22.3 ± 0.64 µm), and with excellent rendition of Mac-1+ microglia through the entire height of the section. The NDPase staining gives an excellent visualization of microglia, although at this thickness the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV) = 0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics of the microglial cell population in the developing and in the injured, diseased and normal adult CNS.
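The fractionator logic can be illustrated with a short numeric sketch. The estimator form (total = count scaled by the inverses of the sampling fractions) is the standard optical-fractionator relation; all counts and sampling fractions below are made-up values, not the study's data.

```python
# Hedged numeric sketch of the optical-fractionator estimator:
# N = sum(Q) * (1/ssf) * (1/asf) * (1/hsf). All numbers are illustrative.
def fractionator_estimate(counted, ssf, asf, hsf):
    """counted: cells counted in disectors; ssf/asf/hsf: section, area and
    height sampling fractions (each the sampled share of the whole)."""
    return counted * (1 / ssf) * (1 / asf) * (1 / hsf)

# e.g. 123 cells counted, sampling 1 of every 6 sections, 1/25 of the area,
# and a 10 µm disector height within a 22.3 µm mean section thickness:
n_total = fractionator_estimate(123, 1 / 6, 1 / 25, 10 / 22.3)
print(n_total)
```

The estimate scales the raw count back up by each sampling fraction, which is why it is unaffected by tissue shrinkage as long as the fractions are measured on the processed sections.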
International Nuclear Information System (INIS)
We estimate the environmental efficiency, reduction potential and marginal abatement cost of carbon dioxide (CO2) emissions from coal-fired power plants in China using a novel plant-level dataset derived from the first and second waves of the National Economic Survey, implemented in 2004 and 2008, respectively. The results indicate that there are large opportunities for CO2 emissions reduction in China's coal-fired power plants. Had all power plants operated fully efficiently, China's CO2 emissions in 2004 and 2008 could have been reduced by 52% and 70%, respectively, accompanied by an expansion in electricity output; in other words, opportunities for a 'double dividend' exist. In 2004, the average marginal abatement cost of CO2 emissions for China's power plants was approximately 955 Yuan/ton, whereas in 2008 the cost increased to 1142 Yuan/ton. The empirical analyses show that government subsidies can reduce environmental inefficiency, but they significantly increase the shadow price of the power plants. Older and larger power plants have lower environmental efficiency and marginal CO2 abatement cost. The ratio of coal consumption negatively affects the environmental efficiencies of power plants. -- Highlights: •A novel plant-level dataset derived from the National Economic Survey in China is used. •There are large opportunities for CO2 emissions reduction in China's coal-fired power plants. •Subsidies can reduce environmental inefficiency but increase shadow price
El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham
2014-07-01
Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 millions) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements and using indigenous materials and skills. Gravel Bed Hydroponics (GBH) as a constructed wetland system for sewage treatment has been proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of species of ciliates (23 species) and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided excellent means for estimating the efficiency of the system for sewage purification. Results affirmed the ability of this system to produce high quality effluent with sufficient microbial reduction to enable the production of irrigation quality water.
International Nuclear Information System (INIS)
Mathematical methods are being increasingly employed in the efficiency calibration of gamma based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for use of modeling passive gamma measurements. This is a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI), in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
Full Text Available In practice, system identification is carried out by perturbing processes or plants under operation; that is why in many industrial applications a plant-friendly input signal is preferred for system identification. The goal of this study is to design the optimal input signal to be employed in the identification experiment, and to examine the relationship between the friendliness index of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. The objective function was formulated as the maximisation of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. Since under such experimental conditions only D-suboptimality can be claimed, the plant trajectories are quantified using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow the most adequate information content to be obtained from a plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
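The D-efficiency measure referred to above can be sketched numerically as D_eff = (det M(u) / det M(u*))^(1/p), comparing the Fisher information of a candidate plant-friendly input against a more exciting reference input. The 2-tap FIR model and both input signals below are illustrative assumptions, not the paper's plant or optimal design.

```python
# Hedged sketch of the D-efficiency measure for input-signal design.
# Model and signals are illustrative assumptions.
import numpy as np

def fisher_information(u):
    # Fisher matrix for a 2-tap FIR model y(k) = b0*u(k) + b1*u(k-1) + noise
    X = np.column_stack([u[1:], u[:-1]])
    return X.T @ X

def d_efficiency(M, M_ref):
    # D_eff = (det M / det M_ref)^(1/p), p = number of parameters
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_ref)) ** (1.0 / p)

rng = np.random.default_rng(0)
u_ref = np.sign(rng.standard_normal(200))          # PRBS-like, highly exciting
u_friendly = 0.7 * np.sin(0.3 * np.arange(200))    # smoother, plant-friendlier
eff = d_efficiency(fisher_information(u_friendly), fisher_information(u_ref))
print(eff)   # below 1: information given up in exchange for a friendlier input
```

Constraining this quantity from below, as the abstract describes, bounds how much estimation accuracy may be traded away for plant friendliness.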
Liu, Liangyun; Liu, Xinjie
2015-04-01
Passive measurement of solar-induced chlorophyll fluorescence (SIF) presents a new way to directly estimate photosynthetic activity. In this study, one diurnal multi-angular spectral experiment and three independent diurnal flux experiments were carried out on winter wheat and maize to assess the directional emission of SIF for estimating photosynthetic activity. First, the Bi-Directional Fluorescence Distribution Function (BFDF) of SIF was investigated. A BFDF shape similar to the red Bi-Directional Reflectance Distribution Function (BRDF) was observed for the directional SIF emissions at 688 nm. Second, the relationship between the directional emission of canopy SIF and BRDF reflectance was examined, revealing a strong linear correlation between SIF and reflectance at 688 nm, with R2 > 0.80 for all seven BRDF observations on winter wheat. A BFDF correction model for the canopy SIF at 688 nm was then formulated by dividing by the canopy reflectance, and about 65.3% of the directional variation was successfully removed. Finally, the BFDF-corrected SIF signals were linked to photosynthetic activities, including gross ecosystem productivity (GEP) and photosynthetic light-use efficiency (LUE), and the determination coefficients between photosynthetic activities and the BFDF-corrected SIF increased in most cases. For GEP, the determination coefficients were slightly improved from 0.563, 0.382, and 0.613 (for raw SIF signals) to 0.592, 0.473, and 0.640 for the three diurnal experiments. For LUE, the determination coefficients increased from 0.393 and 0.358 to 0.517 and 0.528 for two experiments, while decreasing slightly from 0.695 to 0.607 for one experiment. According to these preliminary results, canopy SIF cannot be regarded as isotropic, and the directional emission of SIF may be an important source of uncertainty in estimates of GEP and LUE.
Nobuhiko Fuwa; Christopher Edmonds; Pabitra Banik
2005-01-01
We focus on the impact of failing to control for differences in land types defined along toposequence on estimates of farm technical efficiency for small-scale rice farms in eastern India. In contrast with the existing literature, we find that those farms may be considerably more technically efficient than they appear from more aggregated analysis without such control. Farms planted with modern rice varieties are technically efficient. Furthermore, farms planted with traditional rice varietie...
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass-transfer and hydraulic relations correlated to laboratory data; therefore, they may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. The method can calculate efficiency as well as mass- and heat-transfer coefficients without using any empirical mass-transfer or hydraulic correlations, and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is to be distinguished from that of a column being designed.
Sinclair, Michael; Dufour, Pascal; Drew, Kristine; Myrskog, Stefan; Morgan, John Paul
2014-10-01
An electroluminescence test for a concentrated PV system is presented, with the objective of capturing high-resolution pseudo-efficiency maps that highlight optical defects in the concentrator system. Key parameters of the experimental setup and imaging system are presented. Image processing is discussed, including comparison of experimental to nominal results and the quantitative estimation of optical efficiency. Efficiency estimates are validated using measurements under a collimated solar simulator and ray-tracing software. Further validation is performed by comparing the electroluminescence technique to direct mapping of the optical efficiency. Initial results indicate a mean estimation error for Isc of -2.4% with a standard deviation of 6.9%, and a combined measurement and analysis time of less than 5 seconds per optic. An extension of this approach to in-line quality control is discussed.
Directory of Open Access Journals (Sweden)
Bazhenov Viktor Ivanovich
2015-09-01
Full Text Available The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it advisable to develop economic methods for comparing technical solutions in the construction field. The article describes a practical example of Life Cycle Cost (LCC) evaluation with respect to Present Value (PV) determination. This enables an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). For the economic analysis, the air-blower station of a wastewater treatment plant (WWTP) was selected as a significant energy consumer. The technical variants compared are blower types: (1) multistage without control, (2) multistage with VFD control, and (3) single-stage with double-vane control. The LCC estimation shows the last variant to be the most attractive and cost-effective for investment, with savings of 17.2% (against variant 1) and 21.0% (against variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) together with related annual expenditure (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which explains the adopted energy consumption figures, in kW·h: 2158 (variant 1), 1743-2201 (variant 2) and 1058-1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. The example given concerns energy-efficient solutions, but the same materials apply to cases with other resource savings, such as all types of fuel. In conclusion, the LCC indicator is recommended for use jointly with the method of discounted cash flows, which satisfies the investor's interest in sound technical and economic comparisons.
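The PV-based LCC comparison can be sketched in a few lines. Only the 25-year horizon and 5% real discount rate follow the text above; the capital and annual-cost figures are hypothetical, not the article's data.

```python
# Hedged sketch of a present-value Life Cycle Cost comparison.
# Horizon and discount rate follow the text; cost figures are illustrative.
def life_cycle_cost(capital, annual_cost, years=25, rate=0.05):
    # Discount each year's operating/energy cost back to present value
    pv_factor = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return capital + annual_cost * pv_factor

# e.g. a cheaper uncontrolled blower vs a dearer single-stage double-vane
# blower with lower energy use (hypothetical numbers):
lcc_fixed = life_cycle_cost(capital=100_000, annual_cost=40_000)
lcc_vane = life_cycle_cost(capital=150_000, annual_cost=30_000)
print(lcc_vane < lcc_fixed)   # lower energy use wins over the life cycle
```

The sketch shows why a higher capital cost can still yield the lowest LCC: over 25 years at 5%, each unit of annual saving is worth roughly 14 units of present value.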
Scartazza, Andrea; Vaccari, Francesco Primo; Bertolini, Teresa; Di Tommasi, Paul; Lauteri, Marco; Miglietta, Franco; Brugnoli, Enrico
2014-10-01
Water-use efficiency (WUE), thought to be a relevant trait for productivity and adaptation to water-limited environments, was estimated for three different ecosystems on the Mediterranean island of Pianosa: Mediterranean macchia (SMM), transition (STR) and abandoned agricultural (SAA) ecosystems, representing a successional series. Three independent approaches were used to study WUE: eddy covariance measurements, C isotope composition of ecosystem respired CO2, and C isotope discrimination (Δ) of leaf material (dry matter and soluble sugars). Seasonal variations in C-water relations and energy fluxes, compared in SMM and in SAA, were primarily dependent on the specific composition of each plant community. WUE of gross primary productivity was higher in SMM than in SAA at the beginning of the dry season. Both structural and fast-turnover leaf material were, on average, more enriched in (13)C in SMM than in SAA, indicating relatively higher stomatal control and WUE for the long-lived macchia species. This pattern corresponded to (13)C-enriched respired CO2 in SMM compared to the other ecosystems. Conversely, most of the annual herbaceous SAA species (therophytes) showed a drought-escaping strategy, with relatively high stomatal conductance and low WUE. An ecosystem-integrated Δ value was weighted for each ecosystem on the abundance of different life forms, classified according to Raunkiaer's system. Agreement was found between ecosystem WUE calculated using eddy covariance and that estimated using the integrated Δ approaches. Comparing the isotopic methods, Δ of leaf soluble sugars provided the most reliable proxy for short-term changes in photosynthetic discrimination and associated shifts in integrated canopy-level WUE along the successional series. PMID:25085444
Madenjian, Charles P.; Rediske, Richard R.; O'Keefe, James P.; David, Solomon R.
2014-01-01
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination. PMID:25226430
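On a tank-by-tank basis, γ reduces to a mass balance: the gain in a congener's whole-body burden divided by the congener mass consumed in food. A minimal sketch (function names and numbers are illustrative, not study data):

```python
def net_transfer_efficiency(c_start, w_start, c_end, w_end, c_prey, food_mass):
    """gamma = (congener burden at end - burden at start) / congener intake.
    Concentrations in ng/g, masses in g; burden = concentration x body mass."""
    burden_gain = c_end * w_end - c_start * w_start
    intake = c_prey * food_mass
    return burden_gain / intake

# One tank, one congener: body burden rises from 5000 ng to 14000 ng
# while 2000 g of prey at 9 ng/g (18000 ng) is consumed.
gamma = net_transfer_efficiency(c_start=10.0, w_start=500.0,
                                c_end=20.0, w_end=700.0,
                                c_prey=9.0, food_mass=2000.0)
```

Repeating this per tank gives the replicate γ values from which the mean and standard error described in the abstract can be computed.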
Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying
2016-01-01
The Laplace-Fourier domain full waveform inversion can simultaneously recover both the long-wavelength and intermediate-to-short-wavelength information of velocity models because of its unique characteristics of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion, in which the inversion result is excessively dependent on the initial model due to the lack of low-frequency information in seismic data. Nevertheless, the Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation time because the inversion must be implemented on different combinations of multiple damping constants and multiple frequencies, namely, the complex frequencies, which are much more numerous than the Fourier frequencies. However, if the entire target model is computed on every complex frequency for the Laplace-Fourier domain inversion (as in the conventional frequency domain inversion), excessively redundant computation will occur. In the Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly due to the application of exponential damping to the seismic record, especially with use of a larger damping constant. Thus, the depth of the area effectively inverted on a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in the Laplace-Fourier domain inversion based on the principle of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of the Laplace-Fourier domain waveform inversion can be improved. The proposed method was tested in numerical experiments. The experimental results show that
Directory of Open Access Journals (Sweden)
Y. Tramblay
2011-01-01
Full Text Available A good knowledge of rainfall is essential for hydrological operational purposes such as flood forecasting. The objective of this paper was to analyze, on a relatively large sample of flood events, how rainfall-runoff modeling using an event-based model can be sensitive to the use of spatial rainfall compared to mean areal rainfall over the watershed. This comparison was based not only on the model's efficiency in reproducing the flood events but also on the estimation of the initial conditions by the model, using different rainfall inputs. The initial conditions of soil moisture are indeed a key factor for flood modeling in the Mediterranean region. In order to provide a soil moisture index that could be related to the initial condition of the model, the soil moisture output of the Safran-Isba-Modcou (SIM) model developed by Météo-France was used. This study was done in the Gardon catchment (545 km²) in southern France, using uniform or spatial rainfall data derived from rain gauges and radar for 16 flood events. The event-based model considered combines the SCS runoff production model and the Lag and Route routing model. Results show that spatial rainfall increases the efficiency of the model. The advantage of using spatial rainfall is marked for some of the largest flood events. In addition, the relationship between the model's initial condition and the external predictor of soil moisture provided by the SIM model is better when using spatial rainfall, in particular when using spatial radar data, with R² values increasing from 0.61 to 0.72.
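The SCS runoff production model mentioned above has a standard curve-number form; a minimal sketch, assuming the usual initial abstraction of 20 % of potential retention:

```python
def scs_runoff(p_mm, curve_number):
    """SCS-CN direct runoff depth (mm) for an event rainfall P (mm)."""
    s = 25400.0 / curve_number - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

For example, with a curve number of 80 (S = 63.5 mm), a 50 mm event produces roughly 13.8 mm of direct runoff, while a 10 mm event produces none.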
International Nuclear Information System (INIS)
Two independent methods of estimating gross ecosystem production (GEP) were compared over a period of 2 years at monthly integrals for a mixed forest of conifers and deciduous hardwoods at Harvard Forest in central Massachusetts. Continuous eddy flux measurements of net ecosystem exchange (NEE) provided one estimate of GEP by taking day to night temperature differences into account to estimate autotrophic and heterotrophic respiration. GEP was also estimated with a quantum efficiency model based on measurements of maximum quantum efficiency (Qmax), seasonal variation in canopy phenology and chlorophyll content, incident PAR, and the constraints of freezing temperatures and vapour pressure deficits on stomatal conductance. Quantum efficiency model estimates of GEP and those derived from eddy flux measurements compared well at monthly integrals over two consecutive years (R² = 0.98). Remotely sensed data were acquired seasonally with an ultralight aircraft to provide a means of scaling the leaf area and leaf pigmentation changes that affected the light absorption of photosynthetically active radiation to larger areas. A linear correlation between chlorophyll concentrations in the upper canopy leaves of four hardwood species and their quantum efficiencies (R² = 0.99) suggested that seasonal changes in quantum efficiency for the entire canopy can be quantified with remotely sensed indices of chlorophyll. Analysis of video data collected from the ultralight aircraft indicated that the fraction of conifer cover varied from < 7% near the instrument tower to about 25% for a larger sized area. At 25% conifer cover, the quantum efficiency model predicted an increase in the estimate of annual GEP of < 5% because unfavourable environmental conditions limited conifer photosynthesis in much of the non-growing season when hardwoods lacked leaves.
Directory of Open Access Journals (Sweden)
Raymond K. DZIWORNU
2014-11-01
Full Text Available This paper applied the stochastic profit frontier model to estimate the economic efficiency of 199 small-scale commercial broiler producers in the Greater Accra Region of Ghana. Farm-level data were obtained from the producers through a multi-stage sampling technique. Results indicate that broiler producers are not fully economically efficient. The mean economic efficiency was 69 percent, implying that opportunities exist for broiler producers to increase their economic efficiency through better use of available resources. Age of producer, extension contact, market age of broilers and credit access were found to significantly influence economic efficiency in broiler production. Policy measures directed at these factors to enhance the economic efficiency of broiler producers are recommended.
Marshall, M.; Tu, K. P.
2015-12-01
Large-area crop yield models (LACMs) are commonly employed to address climate-driven changes in crop yield and inform policy makers concerned with climate change adaptation. Production efficiency models (PEMs), a class of LACMs that rely on the conservative response of carbon assimilation to incoming solar radiation absorbed by a crop contingent on environmental conditions, have increasingly been used over large areas with remote sensing spectral information to improve the spatial resolution of crop yield estimates and address important data gaps. Here, we present a new PEM that combines model principles from the remote sensing-based crop yield and evapotranspiration (ET) model literature. One of the major limitations of PEMs is that they are evaluated using data restricted in both space and time. To overcome this obstacle, we first validated the model using 2009-2014 eddy covariance flux tower Gross Primary Production data in a rice field in the Central Valley of California, a critical agro-ecosystem of the United States. This evaluation yielded a Willmott's D and mean absolute error of 0.81 and 5.24 g CO2/d, respectively, using CO2, leaf area, temperature, and moisture constraints from the MOD16 ET model, Priestley-Taylor ET model, and the Global Production Efficiency Model (GLOPEM). A Monte Carlo simulation revealed that the model was most sensitive to the Enhanced Vegetation Index (EVI) input, followed by Photosynthetically Active Radiation, vapor pressure deficit, and air temperature. The model will now be evaluated using 30 x 30 m (Landsat resolution) biomass transects developed in 2011 and 2012 from spectroradiometric and other non-destructive in situ metrics for several cotton, maize, and rice fields across the Central Valley. Finally, the model will be driven by Daymet and MODIS data over the entire State of California and compared with county-level crop yield statistics. It is anticipated that the new model will facilitate agro-climatic decision-making in
Institute of Scientific and Technical Information of China (English)
KUK Anthony
2009-01-01
The survival analysis literature has always lagged behind the categorical data literature in developing methods to analyze clustered or multivariate data. While estimators based on working correlation matrices, optimal weighting, composite likelihood and various variants have been proposed in the categorical data literature, the working independence estimator is still very much the prevalent estimator in multivariate survival data analysis.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
Yin, X.; Belay, D.; Putten, van der P.E.L.; Struik, P.C.
2014-01-01
Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (UCO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation
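Estimating this slope is an ordinary least-squares fit of net photosynthetic rate on absorbed irradiance over the low-light range; a minimal sketch (the example numbers are illustrative):

```python
def quantum_yield(absorbed_par, net_photosynthesis):
    """Slope of the ordinary least-squares regression of net photosynthetic
    rate on absorbed irradiance (low-irradiance range only)."""
    n = len(absorbed_par)
    mx = sum(absorbed_par) / n
    my = sum(net_photosynthesis) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(absorbed_par, net_photosynthesis))
    sxx = sum((x - mx) ** 2 for x in absorbed_par)
    return sxy / sxx

# Synthetic low-irradiance points with slope 0.06 and dark respiration -1.0
phi = quantum_yield([0.0, 50.0, 100.0, 150.0], [-1.0, 2.0, 5.0, 8.0])
```

The intercept (here -1.0) absorbs dark respiration, so only the slope is interpreted as the quantum yield; restricting the fit to genuinely limiting irradiances is one of the methodological choices the abstract alludes to.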
Bengel, F M; Permanetter, B; Ungerer, M; Nekolla, S; Schwaiger, M
2000-03-01
The clearance kinetics of carbon-11 acetate, assessed by positron emission tomography (PET), can be combined with measurements of ventricular function for non-invasive estimation of myocardial oxygen consumption and efficiency. In the present study, this approach was applied to gain further insights into alterations in the failing heart by comparison with results obtained in normals. We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with 11C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for 11C-acetate derived from monoexponential fitting. In DCM patients, left ventricular ejection fraction was 19%+/-10% and end-diastolic volume was 92+/-28 ml/m2 (vs 64%+/-7% and 55+/-8 ml/m2 in normals, P<0.001). SWI (1674+/-761 vs 4736+/-895 mmHg x ml/m2; P<0.001) and the WMI as an estimate of efficiency (2.98+/-1.30 vs 6.20+/-2.25 x 10^6 mmHg x ml/m2; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
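The two indices defined above are direct arithmetic; a minimal sketch (input values are illustrative, not study data):

```python
def stroke_work_index(systolic_bp, stroke_volume, bsa):
    """SWI = systolic blood pressure x stroke volume / body surface area
    (mmHg x ml/m2)."""
    return systolic_bp * stroke_volume / bsa

def work_metabolic_index(swi, heart_rate, k_mono):
    """WMI = SWI x heart rate / k(mono), with k(mono) the washout constant
    from monoexponential fitting of the 11C-acetate clearance (1/min)."""
    return swi * heart_rate / k_mono

# Illustrative subject: BP 120 mmHg, SV 70 ml, BSA 1.8 m2, HR 70/min
swi = stroke_work_index(systolic_bp=120.0, stroke_volume=70.0, bsa=1.8)
wmi = work_metabolic_index(swi, heart_rate=70.0, k_mono=0.06)
```

Because k(mono) tracks oxidative metabolism, a larger washout constant for the same external work lowers the WMI, which is why the index behaves as an efficiency estimate.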
Directory of Open Access Journals (Sweden)
Larisa Vajenina
2013-10-01
Full Text Available The author views the method of analysis of th e hierarchies to assessing energy efficiency in the enterprises of the main transport of gas. The method allows to investigate the consumption of energy resources of equipment, consumption of resources in technological operations and the creation of favorable conditions, and to assess the state of accounting systems and the work organization to improve the efficiency of the energy supply use.
Hur, Jin; Lee, Tae-Hwan; Lee, Bo-Mi
2011-12-01
The spectroscopic characteristics and relative distribution of refractory dissolved organic matter (R-DOM) in sewage have been investigated using the influent and the effluent samples collected from 15 large-scale biological wastewater treatment plants (WWTPs). Correlation between the characteristics of the influent and the final removal efficiency was also examined. Enhancement of specific ultraviolet absorbance (SUVA) and a higher R-DOM distribution ratio were observed for the effluent DOM compared with the influent DOM. However, the use of conventional rather than advanced biological treatments did not appear to affect either the effluent DOM or the removal efficiency, and there was no statistically significant difference between the two. No consistent trend was observed in the changes in the synchronous fluorescence spectra of the DOM after biological treatment. Irrespective of the treatment option, the removal efficiency of DOM was greater when the influent DOM had a lower SUVA, reduced DOC-normalized humic substance-like fluorescence, and a lower R-DOM distribution. These results suggest that selected characteristics of the influent may provide an indication of DOM removal efficiency in WWTPs. For R-DOM removal efficiency, however, similar characteristics of the influent did not show a negative relationship, and even exhibited a slight positive correlation, suggesting that the presence of refractory organic carbon structures in the influent sewage may stimulate microbial activity and inhibit the production of R-DOM during biological treatment. PMID:22439572
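SUVA as used above is conventionally the decadic UV absorbance at 254 nm, normalized to a 1 m path length and divided by the DOC concentration; a minimal sketch (the sample values are illustrative):

```python
def suva254(uv_abs_254, doc_mg_per_l, path_cm=1.0):
    """Specific UV absorbance (L mg-1 m-1): absorbance per metre of optical
    path divided by the DOC concentration."""
    abs_per_m = uv_abs_254 / path_cm * 100.0  # convert from 1/cm to 1/m
    return abs_per_m / doc_mg_per_l

# Effluent sample: A254 = 0.12 in a 1 cm cuvette, DOC = 5 mg/L
suva = suva254(0.12, 5.0)
```

Higher SUVA indicates more aromatic, humic-like carbon, which is the spectroscopic basis for the influent-characteristic comparisons in the abstract.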
Directory of Open Access Journals (Sweden)
Dawang Naanpoes Charles
2011-11-01
Full Text Available This study examines the Net Farm Income (NFI), profitability index and technical efficiency of artisanal fishing in five natural lakes in Plateau State, central Nigeria, with a view to examining the level of exploitation of captured inland fisheries as a renewable resource in the country. Data were collected using questionnaires from 110 fishermen sampled from Polmakat, Shimankar, Deben, Janta and Pandam lakes using a multi-stage sampling technique, and analysed using descriptive statistics, farm budgeting techniques (net farm income) and a stochastic frontier production function model. The study reveals a net farm income of ₦48,734.57 and a profitability index of ₦7.67. A mean technical efficiency of 83% was obtained, indicating that the sampled fishermen were relatively efficient in allocating their limited resources. The result of the analysis indicates that 72% of the variation in the fishermen's output was a result of the presence of technical inefficiency effects in the fishery, showing a potential of about 17% for improvement in the technical efficiency level. Some observable variables relating to socioeconomic characteristics, such as extension contact, experience and educational status, significantly explain variation in technical efficiency. Transformation for effective and sustainable fisheries exploitation will need the education of fishermen, extension education and a redefinition of property rights.
Geel, C.; Versluis, W.; Snel, J.F.H.
1997-01-01
The relation between photosynthetic oxygen evolution and Photosystem II electron transport was investigated for the marine algae Phaeodactylum tricornutum, Dunaliella tertiolecta, Tetraselmis sp., Isochrysis sp. and Rhodomonas sp. The rate of Photosystem II electron transport was estimated fr
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
H. Macharia (Harrison); P.P. Goos (Peter)
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the
Gonthier, Gerard J.
2007-01-01
A graphical method that uses continuous water-level and barometric-pressure data was developed to estimate barometric efficiency. A plot of nearly continuous water level (on the y-axis), as a function of nearly continuous barometric pressure (on the x-axis), will plot as a line curved into a series of connected elliptical loops. Each loop represents a barometric-pressure fluctuation. The negative of the slope of the major axis of an elliptical loop will be the ratio of water-level change to barometric-pressure change, which is the sum of the barometric efficiency plus the error. The negative of the slope of the preferred orientation of many elliptical loops is an estimate of the barometric efficiency. The slope of the preferred orientation of many elliptical loops is approximately the median of the slopes of the major axes of the elliptical loops. If water-level change that is not caused by barometric-pressure change does not correlate with barometric-pressure change, the probability that the error will be greater than zero will be the same as the probability that it will be less than zero. As a result, the negative of the median of the slopes for many loops will be close to the barometric efficiency. The graphical method provided a rapid assessment of whether a well was affected by barometric-pressure change and also provided a rapid estimate of barometric efficiency. The graphical method was used to assess which wells at Air Force Plant 6, Marietta, Georgia, had water levels affected by barometric-pressure changes during a 2003 constant-discharge aquifer test. The graphical method was also used to estimate barometric efficiency. Barometric-efficiency estimates from the graphical method were compared to those of four other methods: average of ratios, median of ratios, Clark, and slope. The two methods (the graphical and median-of-ratios methods) that used the median values of water-level change divided by barometric-pressure change appeared to be most resistant to
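Among the estimators compared above, the median-of-ratios approach is straightforward to sketch in code: take the ratio of water-level change to barometric-pressure change over each interval, and negate the median (the synthetic data below assume a true barometric efficiency of 0.6):

```python
import statistics

def barometric_efficiency(water_levels, baro_pressures):
    """Median-of-ratios estimate: the negative of the median slope of
    water-level change against barometric-pressure change, one of the
    estimators the paper found resistant to non-barometric water-level
    changes."""
    ratios = []
    for i in range(1, len(water_levels)):
        dp = baro_pressures[i] - baro_pressures[i - 1]
        if abs(dp) > 1e-9:  # skip intervals with no pressure change
            dw = water_levels[i] - water_levels[i - 1]
            ratios.append(dw / dp)
    return -statistics.median(ratios)

# Synthetic well: water level falls 0.6 units per unit rise in pressure
bp = [100.0, 100.5, 100.2, 100.8]
wl = [10.0, 9.7, 9.88, 9.52]
be = barometric_efficiency(wl, bp)
```

Using the median rather than the mean of the interval ratios mirrors the paper's observation that median-based estimators resist errors whose sign is equally likely to be positive or negative.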
International Nuclear Information System (INIS)
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with 11C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for 11C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19%±10% and end-diastolic volume was 92±28 ml/m2 (vs 64%±7% and 55±8 ml/m2 in normals, P<0.001). SWI (1674±761 vs 4736±895 mmHg x ml/m2; P<0.001) and the WMI as an estimate of efficiency (2.98±1.30 vs 6.20±2.25 x 10^6 mmHg x ml/m2; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
Cardot, Hervé; Zitt, Pierre-André
2011-01-01
With the progress of measurement apparatus and the development of automatic sensors it is not unusual anymore to get thousands of samples of observations taking values in high dimension spaces such as functional spaces. In such large samples of high dimensional data, outlying curves may not be uncommon and even a few individuals may corrupt simple statistical indicators such as the mean trajectory. We focus here on the estimation of the geometric median which is a direct generalization of the real median and has nice robustness properties. The geometric median being defined as the minimizer of a simple convex functional that is differentiable everywhere when the distribution has no atoms, it is possible to estimate it with online gradient algorithms. Such algorithms are very fast and can deal with large samples. Furthermore they also can be simply updated when the data arrive sequentially. We state the almost sure consistency and the L2 rates of convergence of the stochastic gradient estimator as well as the ...
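The online gradient recursion described here can be sketched in a few lines; the step-size schedule and the averaging details below are assumptions for illustration, not the paper's exact algorithm:

```python
import math

def geometric_median_sgd(points, passes=500, c=1.0):
    """Online stochastic-gradient estimate of the geometric median with
    iterate averaging. Each update moves the estimate a step of size
    c/sqrt(n) toward the next observation (the gradient of ||x - m|| has
    unit norm), so a few outliers cannot drag the estimate far."""
    m = list(points[0])   # current iterate
    avg = list(m)         # running average of iterates (the final estimate)
    n = 0
    for _ in range(passes):
        for x in points:  # stream the data; here we simply repeat passes
            n += 1
            diff = [xi - mi for xi, mi in zip(x, m)]
            norm = math.sqrt(sum(d * d for d in diff))
            if norm > 1e-12:  # gradient undefined exactly at a data point
                step = c / math.sqrt(n)
                m = [mi + step * d / norm for mi, d in zip(m, diff)]
            avg = [(a * n + mi) / (n + 1) for a, mi in zip(avg, m)]
    return avg

# Four symmetric points: the geometric median is the origin
est = geometric_median_sgd([(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)])
```

Because each update costs only one pass over a single observation, the recursion can also absorb new curves as they arrive sequentially, which is the setting the abstract emphasizes.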
Nicolas Rispail; Diego Rubiales
2015-01-01
Fusarium wilts are widespread diseases affecting most agricultural crops. In absence of efficient alternatives, sowing resistant cultivars is the preferred approach to control this disease. However, actual resistance sources are often overcome by new pathogenic races, forcing breeders to continuously search for novel resistance sources. Selection of resistant accessions, mainly based on the evaluation of symptoms at timely intervals, is highly time-consuming. Thus, we tested the potential of ...
Kozai, Toyoki
2013-01-01
Extensive research has recently been conducted on plant factory with artificial light, which is one type of closed plant production system (CPPS) consisting of a thermally insulated and airtight structure, a multi-tier system with lighting devices, air conditioners and fans, a CO2 supply unit, a nutrient solution supply unit, and an environment control unit. One of the research outcomes is the concept of resource use efficiency (RUE) of CPPS. This paper reviews the characteristics of the CPPS compared with those of the greenhouse, mainly from the viewpoint of RUE, which is defined as the ratio of the amount of the resource fixed or held in plants to the amount of the resource supplied to the CPPS. It is shown that the use efficiencies of water, CO2 and light energy are considerably higher in the CPPS than in the greenhouse. On the other hand, there is much more room for improving the light and electric energy use efficiencies of CPPS. Challenging issues for CPPS and RUE are also discussed.
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst estimated the power consumption of partially closed and completely closed regenerable life support systems at 3.5 kW per individual and 10-12 kW per individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first law efficiency of a system is usually greater than 50 percent. From the literature, the second law efficiency is usually about 10 percent. The estimation of the second law efficiency of the system indicates the percentage of energy degraded as irreversibilities within the process. This estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the
Directory of Open Access Journals (Sweden)
О. М. Рева
1999-09-01
Full Text Available The possibility of applying cybernetic methods of information chains to the analysis of the efficiency of the structural organization of a dispatcher team is substantiated. To investigate it, we composed and solved a system of linear equations of 10th order (direct-current information chains are used). It is proved that the general scheme must be decomposed into a group for operative planning on the one hand and a group for management on the other. Requirements for the group members are worked out.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
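The two-level structure described above (outer parameter samples, inner patient simulations) and the ANOVA-based separation of parameter uncertainty from patient-level sampling noise can be sketched as follows; the toy model and function names are illustrative, not the paper's formulae:

```python
import random
import statistics

def psa_mean_and_variance(model, sample_params, n_outer, n_inner, rng):
    """Two-level Monte Carlo for probabilistic sensitivity analysis:
    the outer loop samples one parameter set, the inner loop simulates
    n_inner patients under it. The between-run variance of the inner
    means is inflated by within-run sampling noise, so an ANOVA-style
    correction subtracts mean(within-run variance)/n_inner."""
    outer_means, within_vars = [], []
    for _ in range(n_outer):
        params = sample_params(rng)
        patients = [model(params, rng) for _ in range(n_inner)]
        outer_means.append(statistics.fmean(patients))
        within_vars.append(statistics.variance(patients))
    overall_mean = statistics.fmean(outer_means)
    var_between = statistics.variance(outer_means)
    var_params = var_between - statistics.fmean(within_vars) / n_inner
    return overall_mean, max(var_params, 0.0)

rng = random.Random(42)
mean_cost, var_from_params = psa_mean_and_variance(
    model=lambda p, r: p + r.gauss(0.0, 1.0),  # toy patient-level model
    sample_params=lambda r: r.random(),        # parameter uncertainty ~ U(0,1)
    n_outer=200, n_inner=50, rng=rng)
```

In this toy setup the true parameter-driven variance is 1/12 ≈ 0.083, and the corrected estimate recovers it without needing the very large inner sample a naive estimator would require, which is the computational saving the abstract describes.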
Malm, William C.; Pitchford, Marc L.
Size distributions and resulting optical properties of sulfur aerosols were investigated at three national parks with a Davis Rotating-drum Universal-size-cut Monitoring (DRUM) impactor. Sulfur size distribution measurements for 88, 177, and 315 consecutive time periods were made at Grand Canyon National Park during January and February 1988, at Meadview, AZ during July, August, and September 1992, and at Shenandoah National Park during summer 1990, respectively. The DRUM impactor is designed to collect aerosols with an aerodynamic diameter between 0.07 and 15.0 μm in eight size ranges. Focused-beam particle-induced X-ray emission (PIXE) analysis of the aerosol deposits produces a time history of size-resolved elemental composition of varied temporal resolution. As part of the quality assurance protocol, an Interagency Monitoring of Protected Visual Environments (IMPROVE) channel A sampler collecting 0-2.5 μm diameter particles was operated simultaneously alongside the DRUM sampler. During these sampling periods, the average sulfur mass, interpreted as ammonium sulfate, was 0.49, 2.30, and 10.36 μg m-3 at Grand Canyon, Meadview, and Shenandoah, respectively. The five DRUM stages were "inverted" using the Twomey (1975) scheme to give 486 size distributions, each made up of 72 discrete pairs of dC/dlog(D) and diameter (D). From these distributions, mass mean diameters (Dg), geometric standard deviations (σg), and mass scattering efficiencies (em) were calculated. The geometric mass mean diameters in ascending order were 0.21 μm at Meadview, 0.32 μm at Grand Canyon, and 0.42 μm at Shenandoah; the corresponding σg were 2.1, 2.3, and 1.9. Mie theory mass scattering efficiencies calculated from dC/dlog(D) distributions for the three locations were 2.05, 2.59, and 3.81 m2 g-1, respectively. At Shenandoah, mass scattering efficiencies approached five, but only when the mass median diameters were approximately 0.4 μm and σg were about 1.5. σg near 1.5 were
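The mass-weighted lognormal parameters reported above (Dg, σg) can be recovered from binned dC/dlogD data with standard moment formulas. This sketch assumes bin-midpoint diameters and mass per bin; it is not the DRUM/Twomey inversion itself, only the summary-statistics step that follows it.

```python
import math

def lognormal_params(diams_um, masses):
    """Mass-weighted geometric mean diameter Dg and geometric standard
    deviation sigma_g from binned data (bin-midpoint diameters in um,
    mass in each bin)."""
    total = sum(masses)
    ln_dg = sum(m * math.log(d) for d, m in zip(diams_um, masses)) / total
    ln_var = sum(m * (math.log(d) - ln_dg) ** 2
                 for d, m in zip(diams_um, masses)) / total
    return math.exp(ln_dg), math.exp(math.sqrt(ln_var))
```

For a distribution symmetric in log-diameter the function returns the geometric midpoint as Dg, matching the hand calculation.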
Frolov, V M; Tereshkin, V A; Sotskaia, Ia A; Peresadin, N A; Kruglova, O V
2013-03-01
The efficiency of combined reamberin and cycloferon administration in patients with a severe form of acute tonsillitis was investigated. It was found that including cycloferon and reamberin in the treatment of patients with this pathology helps normalize the general state and well-being of the patients, eliminates both the general toxic syndrome and the local inflammatory manifestations in the pharynx, and normalizes the studied biochemical and immunological indices. Administration of cycloferon and reamberin reduces the levels of "average molecules" and malondialdehyde to normal, which indicates elimination of the endogenous "metabolic" intoxication syndrome, and also normalizes the indices of monocyte phagocytic activity, reflecting a normalizing effect of these preparations on the macrophage phagocyte system. PMID:24605622
Directory of Open Access Journals (Sweden)
Irina N. Sorochinskaya
2012-11-01
Full Text Available Cardiovascular diseases are a major cause of mortality in the majority of countries, including Russia. Metabolic syndrome is considered to be one of the main pathologic states leading to enhancement of atherogenesis, ischemic heart disease and cerebrovascular disease. Physical methods, including resort treatment, play a great role in metabolic syndrome prevention and treatment. Climate therapy depends on the resort climate and season and is a major component of resort treatment. Psychological testing showed that combined resort treatment using climate therapy at the Sochi health resort is more efficient in autumn for patients with stable effort angina and in summer for patients with metabolic syndrome. The findings have been confirmed by clinical-functional indicators.
DeVries, R. J.; Hann, D. A.; Schramm, H.L., Jr.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and on deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any effort to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
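The reported logistic model can be turned directly into a capture-probability calculator by passing the linear predictor through the inverse-logit function; the depth units below (assumed meters) are not stated in the abstract.

```python
import math

def capture_probability(temp_c, depth):
    """Capture probability from the reported logistic model:
    logit(p) = Y = -1.75 - 0.06*T + 0.10*D
    (depth units assumed to match the study's protocol)."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth
    return 1.0 / (1.0 + math.exp(-y))
```

Evaluating the model at cold/deep versus warm/shallow conditions reproduces the paper's sampling recommendation: capture probability is markedly higher in cold water and at depth.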
Estimating the efficiency of P/V systems under a changing climate - the case study of Greece.
Grillakis, Manolis; Panagea, Ioanna; Koutroulis, Aristeidis; Tsanis, Ioannis
2014-05-01
The effect of climate change on P/V output is studied for the region of Greece. Solar radiation and temperature data from 9 RCMs of the ENSEMBLES EU FP6 project are used to estimate the effect of these two parameters on future P/V system output over Greece. Examining the relative contributions of temperature and irradiance, a significant reduction due to the temperature increase is projected, which is however outweighed by the irradiance increase, resulting in an overall output increase for photovoltaic systems. Nonetheless, in some cases the temperature increase is too large to be compensated by the increased irradiance, resulting in a reduction of PV output of up to 3%. This is projected after the 2050s for the eastern parts of the Greek mainland, the Aegean islands and some areas in Crete. Results show that the PV output is projected to have an increasing trend in all regions of Greece until 2050, and a steeper increasing trend further until 2100. Moreover, high-resolution topographic information was combined with the PV output results, producing high-resolution maps of favorability for future PV system installation.
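The competing temperature and irradiance effects can be illustrated with the common first-order PV performance model P ∝ G·(1 + γΔT); the temperature coefficient γ below is a typical crystalline-silicon value assumed for illustration, not a figure from this study.

```python
def pv_output_change(irradiance_change, temp_change_k, gamma=-0.004):
    """Fractional change in PV output for a fractional irradiance change
    and a cell-temperature change (K), from P ~ G * (1 + gamma * dT).
    gamma = -0.004 per K is a typical crystalline-Si value (assumed)."""
    return (1.0 + irradiance_change) * (1.0 + gamma * temp_change_k) - 1.0
```

A 5% irradiance gain against 2 K of warming still yields a net output gain, while warming alone gives a loss, mirroring the regional trade-off the abstract describes.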
Chang, Yin-Jung; Shih, Ko-Han
2016-05-01
Internal photoemission (IPE) across an n-type Schottky junction due to standard AM1.5G solar illumination is quantified with practical considerations for Cu, Ag, and Al under direct and fully nondirect transitions, all in the context of the constant matrix element approximation. Under direct transitions, photoemitted electrons from d bands dominate the photocurrent and exhibit a strong dependence on the barrier energy ΦB but are less sensitive to the change in the metal thickness. Photocurrent is shown to be nearly completely contributed by s-state electrons in the fully nondirect approximation that offers nearly identical results as in the direct transition for metals having a free-electron-like band structure. Compared with noble metals, Al-based IPE has the highest quantum yield up to about 5.4% at ΦB = 0.5 eV and a maximum power conversion efficiency of approximately 0.31% due mainly to its relatively uniform and wide Pexc energy spectral width. Metals (e.g., Ag) with a larger interband absorption edge are shown to outperform those with shallower d-bands (e.g., Cu and Au).
Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.
2012-10-01
One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detection and identification of explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more exactly, the spectral norm) for this aim. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the averaged value of the THz signal during the time of analysis; this amounts to deleting the constant component from this part of the signal. Because of this, we can increase the contrast of the assessment. We compare the application of the Fourier-Gabor transform with an unbounded (for example, Gaussian) window, which slides along the signal, for finding the spectral line dynamics, with the application of the Fourier transform in a short time interval (FTST), in which the Fourier transform is applied to parts of the signal, for the same aim. These methods are close to each other; nevertheless, they differ in the series of frequencies that they use. It is important for practice that the optimal window shape depends on the chosen method for obtaining the spectral dynamics. The detection probability is enhanced if we can find a train of pulses with different frequencies that follow sequentially. We show that it is possible to obtain pure spectral line dynamics even under the condition of a distorted spectrum of the substance's response to the action of the THz pulse.
Rispail, Nicolas; Rubiales, Diego
2015-01-01
Fusarium wilts are widespread diseases affecting most agricultural crops. In the absence of efficient alternatives, sowing resistant cultivars is the preferred approach to control this disease. However, current resistance sources are often overcome by new pathogenic races, forcing breeders to continuously search for novel resistance sources. Selection of resistant accessions, mainly based on the evaluation of symptoms at timely intervals, is highly time-consuming. Thus, we tested the potential of an infra-red imaging system in plant breeding to speed up this process. For this, we monitored the changes in surface leaf temperature upon infection by F. oxysporum f. sp. pisi in several pea accessions with contrasting responses to Fusarium wilt under a controlled environment. Using a portable infra-red imaging system, we detected a significant temperature increase of at least 0.5 °C after 10 days post-inoculation in the susceptible accessions, while the resistant accession's temperature remained at the control level. The increase in leaf temperature at 10 days post-inoculation was positively correlated with the AUDPC calculated over a 30-day period. Thus, this approach allowed early discrimination between resistant and susceptible accessions. As such, applying an infra-red imaging system in breeding for Fusarium wilt resistance would considerably shorten the process of selecting novel resistance sources.
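The AUDPC used above to validate the thermal readings is a standard trapezoidal integral of symptom severity over the rating dates; a minimal sketch:

```python
def audpc(days, severity):
    """Area under the disease progress curve (trapezoidal rule):
    mean severity of consecutive ratings times the interval length."""
    return sum((severity[i] + severity[i + 1]) / 2.0 * (days[i + 1] - days[i])
               for i in range(len(days) - 1))
```

With ratings every 10 days over a 30-day period, a susceptible accession accumulates a much larger AUDPC than a resistant one whose severity stays near zero.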
Zou, C X; Lively, F O; Wylie, A R G; Yan, T
2016-04-01
Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization from grass silage diets. Cows were housed in cubicle accommodation for 17 days, then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaptation, followed by immediate transfer to an indirect, open-circuit respiration calorimeter for 3 days, with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P>0.05) between the genotypes for energy intakes, energy outputs or energy use efficiency, or for methane emission rates (methane emissions per unit of dry matter intake or energy intake), or for N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r^2 = 0.52; P<0.001) indicated values for net energy requirements for maintenance of 0.386, 0.392 and 0.375 MJ/kg^0.75 for LF+ST, LF and ST, respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r^2 = 0.59; P<0.001). There were positive linear relationships between N intake and N outputs in manure, and manure N accounted for 0.923 of the N intake. The present results provide approaches to predict maintenance energy requirement, methane emission and manure N output for suckler cows, and further information is required to evaluate their application in a wide range of suckler production systems. PMID:26593693
Directory of Open Access Journals (Sweden)
John W. Jones
2015-09-01
Full Text Available The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations with little or no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed-pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems to be no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral
Ionkin, I. L.; Ragutkin, A. V.; Luning, B.; Zaichenko, M. N.
2016-06-01
For enhancement of natural gas utilization efficiency in boilers, condensation utilizers of low-potential heat, constructed around a contact heat exchanger, can be applied. A schematic of the contact heat exchanger with a humidifier for preheating and humidifying the air supplied to the boiler for combustion is given. Additional low-potential heat in this scheme is utilized for heating the return water supplied from the heating system. Preheating and humidifying the combustion air make it possible to use the condensation utilizer to heat a heat-transfer agent to a temperature exceeding the dew-point temperature of the water vapour contained in the combustion products. The decision to mount the condensation heat utilizer on the boiler was taken based on a preliminary estimate of the additionally recovered heat. The operating efficiency of the condensation heat utilizer is determined by its structure and by the operating conditions of the boiler and the heating system. Software was developed for the thermal design of the condensation heat utilizer equipped with the humidifier. Computational investigations of its operation were carried out as a function of various operating parameters of the boiler and the heating system (temperature of the return water and flue gases, excess air, air temperature at the inlet and outlet of the condensation heat utilizer, heating and humidifying of air in the humidifier, and the portion of circulating water). The heat recuperation efficiency is estimated for various operating conditions of the boiler and the condensation heat utilizer. Recommendations on the most effective application of the condensation heat utilizer are developed.
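The dew-point temperature that governs where condensation heat recovery is possible can be estimated with the standard Magnus approximation; the coefficients below (a = 17.62, b = 243.12 °C) are the usual textbook values, not taken from the paper's design software.

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew-point temperature (degC) via the Magnus approximation with
    coefficients a = 17.62, b = 243.12 degC (valid roughly -45..60 degC)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)
```

At 100% relative humidity the dew point equals the air temperature, and it drops as the gas dries, which is why utilizer placement depends on the moisture content of the combustion products.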
Phesatcha, Burarat; Wanapat, Metha; Phesatcha, Kampanat; Ampapon, Thiwakorn; Kang, Sungchhang
2016-10-01
Four rumen-fistulated dairy steers, 3 years old with 180 ± 15 kg body weight (BW), were randomly assigned according to a 4 × 4 Latin square design to investigate the effect of Flemingia macrophylla hay meal (FMH) and cassava hay meal (CH) supplementation on rumen fermentation efficiency and estimated methane production. The treatments were as follows: T1 = no supplement, T2 = CH supplementation at 150 g/head/day, T3 = FMH supplementation at 150 g/head/day, and T4 = CH + FMH supplementation at 75 and 75 g/head/day. All steers were fed rice straw ad libitum and concentrate was offered at 0.5% of BW. Results revealed that supplementation of CH and/or FMH did not affect feed intake (P > 0.05), while digestibility of crude protein and neutral detergent fiber was increased, especially in steers receiving FMH and CH+FMH (P < 0.05), and estimated methane production was decreased by the dietary treatments. Protozoal and fungal populations were not affected by the dietary supplements, while the viable bacteria count increased in steers receiving FMH. Supplementation of FMH and/or FMH+CH increased microbial crude protein and the efficiency of microbial nitrogen supply. This study concluded that FMH (150 g/head/day) and/or CH+FMH (75 and 75 g/head/day) supplementation could be used as a rumen enhancer for increasing nutrient digestibility, rumen fermentation efficiency, and microbial protein synthesis while decreasing estimated methane production without adverse effects on voluntary feed intake of dairy steers fed rice straw.
Directory of Open Access Journals (Sweden)
Constantin E Uhlig
Full Text Available AIMS: To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts. METHODS: Invitations to participate in a survey were distributed via e-mail to employees of two university hospitals (E1 and E2) and to members of a medical association (E3), as a link placed in a special text on the municipal homepage regularly read by the administrative employees of two cities (H1 and H2), and on paper to workers at an automobile enterprise (P1) and to college (P2) and senior (P3) students. The main parameters analyzed included the numbers of invited and actual participants, and the time and cost to complete the survey. Statistical analysis was descriptive, except for the Kruskal-Wallis H-test, which was used to compare the three recruitment methods. Cost efficiencies were compared and extrapolated to different-sized cohorts. RESULTS: The ratios of completely answered questionnaires to distributed questionnaires were between 81.5% (E1) and 97.4% (P2). Between 6.4% (P1) and 57.0% (P2) of the invited participants completely answered the questionnaires. The costs per completely answered questionnaire were $0.57-$1.41 (E1-3), $1.70 and $0.80 for H1 and H2, respectively, and $3.36-$4.21 (P1-3). Based on our results, electronic surveys with 10, 20, 30, or 42 questions would be estimated to be most cost (and time) efficient if more than 101.6-225.9 (128.2-391.7), 139.8-229.2 (93.8-193.6), 165.8-230.6 (68.7-115.7), or 188.2-231.5 (44.4-72.7) participants were required, respectively. CONCLUSIONS: The study efficiency depended on the technical modalities of the survey methods and the engagement of the participants. Given our study design, our results suggest that in similar projects that will certainly have more than two to three hundred required participants, the most efficient way of conducting a questionnaire-based survey is likely via the Internet with a digital questionnaire
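Break-even cohort sizes of the kind reported in the RESULTS follow from a simple fixed-versus-variable cost comparison between survey modes; the cost figures in the example below are hypothetical, not the study's.

```python
import math

def breakeven_n(fixed_e, per_resp_e, fixed_p, per_resp_p):
    """Smallest number of completed questionnaires at which an electronic
    survey (higher fixed cost, lower per-response cost) becomes cheaper
    than a paper-based one; None if it never does."""
    if per_resp_e >= per_resp_p:
        return None
    n = (fixed_e - fixed_p) / (per_resp_p - per_resp_e)
    return max(1, math.ceil(n))
```

With, say, $300 of electronic setup cost against $3.50 per paper response, the electronic mode wins beyond roughly a hundred completed questionnaires, the same order of magnitude as the thresholds reported in the abstract.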
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have promulgated fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Estimation of efficiency project management
Directory of Open Access Journals (Sweden)
Novotorov Vladimir Yurevich
2011-03-01
Full Text Available In modern conditions, the effectiveness of enterprises depends to an ever greater degree on management methods and forms of doing business. Organizations should choose the management strategy most effective for them, taking into account the existing legislation, the concrete conditions of their activity, their financial, economic and investment potential, and their development strategy. Introduction of a common system for planning and implementing the organization's strategy will make it possible to ensure the even development and long-term social and economic growth of companies.
di Sarra, Alcide; Fuà, Daniele; Meloni, Daniela
2013-04-01
This study is based on measurements made at the ENEA Station for Climate Observations (35.52° N, 12.63° E, 50 m asl) on the island of Lampedusa, in the southern part of the central Mediterranean. A quasi-periodic oscillation of aerosol optical depth, column water vapour, shortwave (SW) and photosynthetically active radiation (PAR) is observed to occur during the morning of 7 September 2005. The quasi-periodic wave is present from about 6 to 10 UT, with solar zenith angles (SZA) varying between 77.5° and 37.2°. In this period the aerosol optical depth at 500 nm, τ, varies between 0.29 and 0.41; the column water vapour, cwv, varies between 2.4 and 2.8 cm. The oscillations of τ and cwv are in phase, while the modulation of the downward surface irradiances is in opposition of phase with respect to τ and cwv. The period of the oscillation is about 13 min. The oscillation is attributed to the propagation of a gravity wave which modulates the structure of the planetary boundary layer. The measured aerosol optical properties are typical of cases dominated by Saharan dust, with the Ångström exponent between 0.5 and 0.6. The back-trajectory analysis for that day shows that air masses pass over northern Libya (trajectories arriving below 2000 m) and Tunisia and northern Algeria (trajectories arriving above 2000 m), carrying Saharan dust particles to Lampedusa. The combined modulation of downward irradiance, water vapour column, and aerosol optical depth is used to estimate the aerosol effect on the irradiance. From the irradiance-optical depth relation, the aerosol surface direct forcing efficiency (FE) is derived, under the assumption that during the measurement interval the aerosol microphysical properties do not appreciably change. As a first step, all SW irradiances are reported to the same cwv content (2.6 cm) by using radiative transfer model calculations. Reference curves describing the downward SW and PAR irradiances are constructed by using measurements obtained
International Nuclear Information System (INIS)
China's annual crude steel production in 2010 was 638.7 Mt, accounting for nearly half of the world's annual crude steel production in the same year. Around 461 TWh of electricity and 14,872 PJ of fuel were consumed to produce this quantity of steel. We identified and analyzed 23 energy efficiency technologies and measures applicable to the processes in China's iron and steel industry. Using a bottom-up electricity CSC (Conservation Supply Curve) model, the cumulative cost-effective electricity savings potential for the Chinese iron and steel industry for 2010-2030 is estimated to be 251 TWh, and the total technical electricity saving potential is 416 TWh. The CO2 emissions reduction associated with the cost-effective electricity savings is 139 Mt CO2, and that associated with the technical electricity saving potential is 237 Mt CO2. The FCSC (Fuel CSC) model for the Chinese iron and steel industry shows a cumulative cost-effective fuel savings potential of 11,999 PJ, and the total technical fuel saving potential is 12,139 PJ. The CO2 emissions reductions associated with cost-effective and technical fuel savings are 1191 Mt CO2 and 1205 Mt CO2, respectively. In addition, a sensitivity analysis with respect to the discount rate used is conducted. - Highlights: ► Estimation of energy saving potential in the entire Chinese steel industry. ► Development of the bottom-up technology-rich Conservation Supply Curve models. ► Discussion of different approaches for developing Conservation Supply Curves. ► Primary energy saving over 20 years equal to 72% of primary energy of Latin America
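A conservation supply curve of the kind used in this analysis is built by ranking measures by their cost of conserved energy (CCE) and accumulating savings; measures whose CCE falls at or below the energy price count as cost-effective. A minimal sketch with invented measures:

```python
def conservation_supply_curve(measures, energy_price):
    """Rank measures by cost of conserved energy (CCE) and accumulate
    savings; returns the curve [(name, cce, cumulative_savings)] and the
    total savings from measures with CCE at or below the energy price."""
    curve, cumulative, cost_effective = [], 0.0, 0.0
    for name, cce, savings in sorted(measures, key=lambda m: m[1]):
        cumulative += savings
        if cce <= energy_price:
            cost_effective = cumulative
        curve.append((name, cce, cumulative))
    return curve, cost_effective
```

Because measures are sorted by CCE, the cost-effective potential is simply the cumulative savings at the point where the curve crosses the energy price, which is how the 251 TWh versus 416 TWh split in the abstract arises.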
Cabrera-Bosquet, Llorenç; Fournier, Christian; Brichet, Nicolas; Welcker, Claude; Suard, Benoît; Tardieu, François
2016-10-01
Light interception and radiation-use efficiency (RUE) are essential components of plant performance. Their genetic dissections require novel high-throughput phenotyping methods. We have developed a suite of methods to evaluate the spatial distribution of incident light, as experienced by hundreds of plants in a glasshouse, by simulating sunbeam trajectories through glasshouse structures every day of the year; the amount of light intercepted by maize (Zea mays) plants via a functional-structural model using three-dimensional (3D) reconstructions of each plant placed in a virtual scene reproducing the canopy in the glasshouse; and RUE, as the ratio of plant biomass to intercepted light. The spatial variation of direct and diffuse incident light in the glasshouse (up to 24%) was correctly predicted at the single-plant scale. Light interception largely varied between maize lines that differed in leaf angles (nearly stable between experiments) and area (highly variable between experiments). Estimated RUEs varied between maize lines, but were similar in two experiments with contrasting incident light. They closely correlated with measured gas exchanges. The methods proposed here identified reproducible traits that might be used in further field studies, thereby opening up the way for large-scale genetic analyses of the components of plant performance.
Magalov, Zaur; Shitzer, Avraham; Degani, David
2016-10-01
This study presents an efficient, fast and accurate method for estimating the two-dimensional temperature distributions around multiple cryo-surgical probes. The identical probes are inserted into the same depth and are operated simultaneously and uniformly. The first step in this method involves numerical derivation of the temporal performance data of a single probe, embedded in a semi-infinite, tissue-like medium. The results of this derivation are approximated by algebraic expressions that form the basis for computing the temperature distributions of multiple embedded probes by combining the data of a single probe. Comparison of isothermal contours derived by this method to those computed numerically for a variety of geometrical cases, up to 15 inserted probes and 2-10 min times of operation, yielded excellent results. Since this technique obviates the solution of the differential equations of multiple probes, the computational time required for a particular case is several orders of magnitude shorter than that needed for obtaining the full numerical solution. Blood perfusion and metabolic heat generation rates are demonstrated to inhibit the advancement of isothermal fronts. Application of this method will significantly shorten computational times without compromising the accuracy of the results. It may also facilitate expeditious consideration of the advantages of different modes of operation and the number of inserted probes at the early design stage. PMID:26963943
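The core idea, composing pre-computed single-probe temperature data into a multi-probe field, can be sketched by superposing per-probe temperature depressions relative to body temperature; the exponential single-probe profile and all constants below are illustrative assumptions, not the paper's fitted algebraic expressions.

```python
import math

def multi_probe_temp(point, probes, t_min, t_body=37.0, t_probe=-150.0):
    """Tissue temperature (degC) at `point` (mm) near several identical
    cryoprobes, by superposing single-probe temperature depressions.
    The decay length growing with time mimics the advancing freeze front."""
    length = 3.0 + 1.5 * math.sqrt(t_min)   # assumed decay length (mm)
    depression = 0.0
    for px, py in probes:
        r = math.hypot(point[0] - px, point[1] - py)
        depression += 60.0 * math.exp(-r / length)
    return max(t_body - depression, t_probe)  # never colder than the probe
```

Once the single-probe profile is fitted, evaluating a multi-probe configuration costs only a few arithmetic operations per point, which is the orders-of-magnitude speedup over solving the full bioheat equations that the paper reports.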
Dubrovskaya, Ekaterina; Turkovskaya, Olga
2010-05-01
Estimation of the efficiency of hydrocarbon mineralization in soil by measuring CO2 emission and variations in the isotope composition of carbon dioxide. E. Dubrovskaya1, O. Turkovskaya1, A. Tiunov2, N. Pozdnyakova1, A. Muratova1; 1 - Institute of Biochemistry and Physiology of Plants and Microorganisms, RAS, Saratov; 2 - A.N. Severtsov Institute of Ecology and Evolution, RAS, Moscow, Russian Federation. Hydrocarbon mineralization in soil undergoing phytoremediation was investigated in a laboratory experiment by estimating the variation in the 13C/12C ratio of the respired CO2. Hexadecane (HD) was used as a model hydrocarbon pollutant. The polluted soil was planted with winter rye (Secale cereale) inoculated with Azospirillum brasilense strain SR80, which combines the abilities to promote plant growth and to degrade oil hydrocarbons. Each vegetated treatment was accompanied by a corresponding nonvegetated one, and uncontaminated treatments were used as controls. Emission of carbon dioxide, its isotopic composition, and the residual concentration of HD in the soil were examined after two and four weeks. At the beginning of the experiment, the CO2-emission level was higher in the uncontaminated than in the contaminated soil. After two weeks, the quantity of emitted carbon dioxide had decreased about threefold and did not change significantly in any uncontaminated treatment. The presence of HD in the soil initially increased CO2 emission, but later the respiration was reduced. During the first two weeks, nonvegetated soil had the highest CO2-emission level. Subsequently, the maximum increase in respiration was recorded in the vegetated contaminated treatments. The isotope composition of plant material determines the isotope composition of soil. The soil used in our experiment had an isotopic signature typical of soils formed by C3 plants (δ13C = -22.4‰). Generally, there was no significant fractionation of the carbon isotopes of the substrates metabolized by the
Sannigrahi, Srikanta; Sen, Somnath; Paul, Saikat
2016-04-01
Net Primary Production (NPP) of the mangrove ecosystem and its capacity to sequester carbon from the atmosphere may be used to quantify regulatory ecosystem services. Three major groups of parameters were set up: BioClimatic Parameters (BCP) (Photosynthetically Active Radiation (PAR), Absorbed PAR (APAR), Fraction of PAR (FPAR), Photochemical Reflectance Index (PRI), Light Use Efficiency (LUE)); BioPhysical Parameters (BPP) (Normalized Difference Vegetation Index (NDVI), scaled NDVI, Enhanced Vegetation Index (EVI), scaled EVI, Optimised and Modified Soil Adjusted Vegetation Index (OSAVI, MSAVI), Leaf Area Index (LAI)); and Environmental Limiting Parameters (ELP) (Temperature Stress (TS), Land Surface Water Index (LSWI), Normalized Soil Water Index (NSWI), Water Stress Scalar (WS), Inversed WS (iWS), Land Surface Temperature (LST), scaled LST, Vapor Pressure Deficit (VPD), scaled VPD, and Soil Water Deficit Index (SWDI)). Several LUE models, namely the Carnegie Ames Stanford Approach (CASA), Eddy Covariance-LUE (EC-LUE), Global Production Efficiency Model (GloPEM), Vegetation Photosynthesis Model (VPM), the MOD NPP model, the Temperature and Greenness model (TG), the Greenness and Radiation model (GR) and MOD17, were adopted in this study to assess the spatiotemporal nature of carbon fluxes. Above- and Below-Ground Biomass (AGB and BGB) were calculated using field-based estimation of OSAVI and NDVI. Microclimatic zonation was set up to assess the impact of coastal climate on environmental limiting factors. MODerate Resolution Imaging Spectroradiometer (MODIS)-based yearly Gross Primary Production (GPP) and the NPP product MOD17 were also tested against the LUE-based results with standard model validation statistics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Bias, Coefficient of Variation (CV) and Coefficient of Determination (R2). The performance of CASA NPP was tested against ground-based NPP, with R2 = 0.89, RMSE = 3.28, P = 0.01. Among all the adopted models, EC
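The validation statistics listed above are standard and can be sketched directly (illustrative numbers, not the study's NPP data; R2 here is the 1 − SSres/SStot form):

```python
import math

def validation_stats(obs, pred):
    """RMSE, MAE, bias, CV and R2 used to compare modelled vs observed values."""
    n = len(obs)
    err = [p - o for o, p in zip(obs, pred)]
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mae = sum(abs(e) for e in err) / n
    bias = sum(err) / n
    o_bar = sum(obs) / n
    cv = rmse / o_bar                       # RMSE relative to the observed mean
    ss_res = sum(e * e for e in err)
    ss_tot = sum((o - o_bar) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot              # one common definition of R2
    return rmse, mae, bias, cv, r2

# Illustrative values only (not the study's field NPP data).
obs = [10.0, 12.0, 14.0, 16.0]
pred = [10.5, 11.5, 14.5, 15.5]
rmse, mae, bias, cv, r2 = validation_stats(obs, pred)
```

Note that this R2 is not the squared correlation coefficient; the two agree only for an ordinary least-squares fit with intercept, so the choice of definition matters when reporting model validation.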
Fu, Dongjie; Chen, Baozhang; Zhang, Lifu
2015-04-01
Light-use efficiency (LUE) is one of the critical parameters in studies of terrestrial ecosystem production. However, up-scaling LUE from the canopy to the landscape/regional scale remains a challenge. One potential solution is to use automated multi-angle tower-based remote sensing platforms, which observe canopy reflectance with high spatial, temporal, spectral and angular resolution. Although some published studies of LUE in boreal and temperate forests have used continuous multi-angle measurements of surface reflectance, few studies have investigated the vegetation physiological parameters of cropland using surface reflectance with high spatio-temporal resolution, high spectral resolution and multiple angles. To improve our understanding of the physiological status of cropland, the maize within the footprint of the Daman Superstation flux tower site of the Heihe Watershed Allied Telemetry Experiment Research (HiWATER) was studied. Based on the observed reflectance and flux data, a continuous time series of the Bidirectional Reflectance Distribution Function (BRDF) of two vegetation indices (the Photochemical Reflectance Index, PRI, and the Vegetation Index using the Universal Pattern Decomposition method, VIUPD) was established by integrating a semi-empirical kernel-driven BRDF model (RossThick-LiSparse), a footprint model (the Simple Analytical Footprint model on Eulerian coordinates for scalar Flux, SAFE-f) and a LUE model. In addition, based on sky-condition (direct/diffuse radiation) data, relationships between the vegetation indices (PRI and VIUPD) and sunlit/shaded LUE under the corresponding sky conditions were established. Taking a maize field as an example, measurements were obtained from June to August 2012. The relationships between PRI and LUE for sunlit and shaded leaves were: PRIsu = 0.06339 × log(LUEsu) + 0.04882 and PRIsh = 0.02675 × log(LUEsh) + 0.01619, where the subscripts su and sh represent sunlit and shaded leaves, respectively; p < 0.0001, R2
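Taking the reported regressions at face value, LUE can be recovered from an observed PRI by inverting the fitted relation. Whether `log` is natural or base-10 is not stated in the abstract, so the natural logarithm is assumed here:

```python
import math

# Coefficients (a, b) reported in the abstract for sunlit (su) and
# shaded (sh) leaves, in PRI = a*log(LUE) + b.
COEF = {"su": (0.06339, 0.04882), "sh": (0.02675, 0.01619)}

def lue_from_pri(pri, leaf="su"):
    """Invert PRI = a*log(LUE) + b to recover LUE (natural log assumed)."""
    a, b = COEF[leaf]
    return math.exp((pri - b) / a)

# Round-trip check: forward model, then inversion recovers the same LUE.
a, b = COEF["su"]
lue = 0.02
pri = a * math.log(lue) + b
lue_recovered = lue_from_pri(pri, "su")
```

If the authors in fact used log10, the same inversion applies with `10 ** ((pri - b) / a)`; only the base changes.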
Institute of Scientific and Technical Information of China (English)
农秀丽; 李玲玲
2012-01-01
对非齐次约束线性回归模型的狭义条件根方估计和广义条件根方估计进行讨论.利用相对效率定义比较两种根方估计的效率,证明在一定条件下,广义条件根方估计的效率不低于狭义条件根方估计,在根方参数的限制下比较了它们的下界之间的关系,从而可选择适当的根方参数,使广义条件根方估计就均方误差而言更具有良好的性质.%This paper discusses the generalized conditional root square estimation and the narrow-sense conditional root square estimation in the linear regression model with inhomogeneous equality restrictions. Using the definition of relative efficiency, it is shown that, under certain conditions, the generalized conditional root square estimator is no less efficient than the narrow-sense conditional root square estimator. The lower bounds of the two estimators are compared under a restriction on the root square parameter; thus, by choosing an appropriate root square parameter, the generalized conditional root square estimator can be given better properties in terms of mean squared error.
Energy Technology Data Exchange (ETDEWEB)
Gonzales, John
2015-04-02
Presentation by Senior Engineer John Gonzales on Evaluating Investments in Natural Gas Vehicles and Infrastructure for Your Fleet using the Vehicle Infrastructure Cash-flow Estimation (VICE) 2.0 model.
National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of the Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses oversampled echo components at a rate...
Asymptotic behavior of the total length of external branches for Beta-coalescents
Dhersin, Jean-Stephane
2012-01-01
We consider a ${\\Lambda}$-coalescent and we study the asymptotic behavior of the total length $L^{(n)}_{ext}$ of the external branches of the associated $n$-coalescent. For the Kingman coalescent, i.e. ${\\Lambda}={\\delta}_0$, the result is well known and is useful, together with the total length $L^{(n)}$, for Fu and Li's test of neutrality of mutations under the infinite-sites model assumption. For a large family of measures ${\\Lambda}$, including Beta$(2-{\\alpha},{\\alpha})$ with $0<\alpha<1$, M{\\"o}hle has proved asymptotics of $L^{(n)}_{ext}$. Here we consider the case when the measure ${\\Lambda}$ is Beta$(2-{\\alpha},{\\alpha})$, with $1<\alpha<2$. We prove that $n^{{\\alpha}-2}L^{(n)}_{ext}$ converges in $L^2$ to $\\alpha(\\alpha-1)\\Gamma(\\alpha)$. As a consequence, we get that $L^{(n)}_{ext}/L^{(n)}$ converges in probability to $2-\\alpha$. To prove the asymptotics of $L^{(n)}_{ext}$, we use a recursive construction of the $n$-coalescent by adding individuals one by one. Asymptotics of the distributi...
On asymptotic behavior of solutions to several classes of discrete dynamical systems
Institute of Scientific and Technical Information of China (English)
LIAO; Xiaoxin(廖晓昕)
2002-01-01
In this paper, a new complete and simplified proof for the Husainov-Nikiforova Theorem is given. Then this theorem is generalized to the case where the coefficients may have different signs as well as nonlinear systems. By these results, the robust stability and the bound for robustness for high-order interval discrete dynamical systems are studied, which can be applied to designing stable discrete control system as well as stabilizing a given unstable control system.
Houssou, Nazaire; Zeller, Manfred
2009-01-01
Accurate targeting is key to the success of any development policy. While a number of factors might explain low targeting efficiency, such as governance failure, political interference or lack of political will, this paper focuses on improving indicator-based models that identify poor households and smallholder farmers more accurately. Using stepwise regressions along with out-of-sample validation tests and receiver operating characteristic curves, this paper develops proxy means test models...
Institute of Scientific and Technical Information of China (English)
张英冕
2012-01-01
Proper evaluation of marketing costs is the basis for enhancing marketing cost efficiency. This article starts from analyzing the factors affecting marketing cost efficiency, researches assessment methods for marketing cost efficiency, and puts forward recommendations for marketing cost control, in the hope of providing valuable references for telecommunication enterprises' cost allocation and use.%正确地评价营销成本的使用状况是提升营销成本使用效率的基础，文章以此为出发点，从分析影响营销成本使用效率的因素入手，研究营销成本使用效率评估的方法，提出营销成本管控的建议，希望能对电信企业营销成本的配置和使用提供参考。
Directory of Open Access Journals (Sweden)
Gonzalo González-Rey
2013-05-01
Full Text Available En el trabajo se propone un procedimiento general para estimar la eficiencia de engranajes de tornillo sinfín cilíndrico considerando pérdidas de potencia por fricción entre los flancos conjugados. El referido procedimiento tiene sus bases en dos modelos matemáticos desarrollados con relaciones teóricas y empíricas presentes en el Reporte Técnico ISO 14521. Los modelos matemáticos elaborados son orientados a evaluar la eficiencia de engranajes de tornillo sinfín cilíndrico en función de la geometría del engranaje, de condiciones de la aplicación y características de fabricación del tornillo y la rueda dentada. El procedimiento fue validado por comparación con valores de eficiencia reportados para unidades de engranajes fabricadas por una compañía especializada en engranajes. Finalmente, haciendo uso del referido procedimiento son establecidas soluciones al problema de mejorar la eficiencia de estos engranajes mediante la recomendación racional de parámetros geométricos y de explotación.Palabras claves: eficiencia, engranaje de tornillo sinfín, diseño racional, modelo matemático, ISO/TR 14521._______________________________________________________________________________AbstractIn this study, a general procedure is proposed for the prediction of cylindrical worm gear efficiency taking into account friction losses between worm and wheel gear. The procedure is based on two mathematical models developed with empirical relations and theoretical formulas presented in ISO/TR 14521. The mathematical models are oriented to evaluate worm gear efficiency in terms of gear geometry, manufacturing and working parameters. The procedure was validated by comparison with efficiency values reported for worm gear units by a German gear manufacturer company. Finally, some important recommendations to increase worm gear efficiency by means of rational gear geometry and application parameters are presented.Key words
Titos, G.; Foyo-Moreno, I.; Lyamani, H.; Querol, X.; Alastuey, A.; Alados-Arboledas, L.
2012-02-01
We investigated aerosol optical properties, mass concentration and chemical composition over a 1-year period (March 2006 to February 2007) at an urban site in Southern Spain (Granada, 37.18°N, 3.58°W, 680 m above sea level). Light-scattering and absorption measurements were performed using an integrating nephelometer and a Multi-Angle Absorption Photometer (MAAP), respectively, with no aerosol size cut-off and without any conditioning of the sampled air. PM10 and PM1 (ambient air levels of atmospheric particulate matter finer than 10 and 1 microns) were collected with two high-volume samplers, and the chemical composition was investigated for all samples. Relative humidity (RH) within the nephelometer was below 50%, and the weighing of the filters was also performed at a RH of 50%. PM10 and PM1 mass concentrations showed mean values of 44 ± 19 μg/m3 and 15 ± 7 μg/m3, respectively. Mineral matter was the major constituent of the PM10-1 fraction (contributing more than 58%), whereas organic matter and elemental carbon (OM+EC) contributed the most to the PM1 fraction (around 43%). The absorption coefficient at 550 nm showed a mean value of 24 ± 9 Mm-1 and the scattering coefficient at 550 nm a mean value of 61 ± 25 Mm-1, typical of urban areas. Both the scattering and the absorption coefficients exhibited the highest values during winter and the lowest during summer, due to the increase in the anthropogenic contribution and the weaker development of the convective mixing layer during winter. A very low mean single scattering albedo of 0.71 ± 0.07 at 550 nm was calculated, suggesting that urban aerosols at this site contain a large fraction of absorbing material. Mass scattering and absorption efficiencies of PM10 particles exhibited higher values during winter and lower values during summer, showing a trend similar to PM1 and opposite to PM10-1. This seasonality is therefore influenced by variations in PM composition. In addition, the mass
Direct Density Derivative Estimation.
Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi
2016-06-01
Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
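The naive two-step approach that the letter argues against can be sketched with a Gaussian kernel density estimator differentiated analytically (illustrative only; the proposed direct estimator itself is not reproduced here):

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate at points x."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_derivative(x, data, h):
    """Naive density-derivative estimate: differentiate the KDE analytically.

    d/dx of (1/(n h)) * sum phi((x - X_i)/h) is (1/(n h^2)) * sum -z*phi(z).
    """
    z = (x[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (-z * k).sum(axis=1) / (len(data) * h**2)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 2000)
x = np.array([-1.0, 0.0, 1.0])
d = kde_derivative(x, data, h=0.3)
# For a standard normal the true derivative is -x*phi(x): positive at -1,
# roughly zero at 0, negative at +1.
```

The letter's point is that a bandwidth tuned for `kde` is generally not optimal for `kde_derivative`; the direct method sidesteps that mismatch by estimating the derivative as its own target.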
Directory of Open Access Journals (Sweden)
T.J. Akingbade
2014-09-01
Full Text Available This research work compares the one-stage sampling technique (Simple Random Sampling) and the two-stage sampling technique for estimating the population total of Nigeria, using the 2006 census result. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (Simple Random Sampling) is more efficient than the two-stage sampling technique and is hence recommended.
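The one-stage SRS estimator of a population total can be sketched as follows; the state totals below are made up, not the 2006 census figures:

```python
import random

random.seed(1)
# Hypothetical state populations for 36 "states" (NOT the 2006 census data).
states = [random.randint(1, 10) * 1_000_000 for _ in range(36)]
true_total = sum(states)

def srs_total(frame, n):
    """One-stage SRS estimate of the population total: N * sample mean."""
    sample = random.sample(frame, n)
    return len(frame) * sum(sample) / n

# Empirical check that the expansion estimator is unbiased: averaging many
# replications of a 20-state sample should land near the true total.
reps = [srs_total(states, 20) for _ in range(5000)]
avg = sum(reps) / len(reps)
```

Comparing designs, as the paper does, amounts to comparing the spread of such replications under each scheme at the same first-stage sample size.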
Chono, Sumio; Tanino, Tomoharu; Seki, Toshinobu; Morimoto, Kazuhiro
2008-10-01
The efficacy of pulmonary administration of liposomal ciprofloxacin (CPFX) in pneumonia was evaluated. In brief, the pharmacokinetics following pulmonary administration of liposomal CPFX (particle size, 1,000 nm; dose, 200 microg/kg) were examined in rats with lipopolysaccharide-induced pneumonia as an experimental pneumonia model. Furthermore, the antibacterial effects of liposomal CPFX against the pneumonic causative organisms were estimated by pharmacokinetic/pharmacodynamic (PK/PD) analysis. The time-courses of the concentration of CPFX in alveolar macrophages (AMs) and lung epithelial lining fluid (ELF) following pulmonary administration of liposomal CPFX to rats with pneumonia were markedly higher than that following the administration of free CPFX (200 microg/kg). The time course of the concentrations of CPFX in plasma following pulmonary administration of liposomal CPFX was markedly lower than that in AMs and ELF. These results indicate that pulmonary administration of liposomal CPFX was more effective in delivering CPFX to AMs and ELF compared with free CPFX, and it avoids distribution of CPFX to the blood. According to PK/PD analysis, the liposomal CPFX exhibited potent antibacterial effects against the causative organisms of pneumonia. This study indicates that pulmonary administration of CPFX could be an effective technique for the treatment of pneumonia.
International Nuclear Information System (INIS)
To evaluate the effectiveness of the iterative decomposition of water and fat with echo asymmetric and least-squares estimation (IDEAL) MRI to quantify tumour infiltration into the lumbar vertebrae in myeloma patients without visible focal lesions. The lumbar spine was examined with 3 T MRI in 24 patients with multiple myeloma and in 26 controls. The fat-signal fraction was calculated as the mean value from three vertebral bodies. A post hoc test was used to compare the fat-signal fraction in controls and patients with monoclonal gammopathy of undetermined significance (MGUS), asymptomatic myeloma or symptomatic myeloma. Differences were considered significant at P < 0.05. The fat-signal fraction and β2-microglobulin-to-albumin ratio were entered into the discriminant analysis. Fat-signal fractions were significantly lower in patients with symptomatic myelomas (43.9 ± 19.7%, P < 0.01) than in the other three groups. Discriminant analysis showed that 22 of the 24 patients (92%) were correctly classified into symptomatic or non-symptomatic myeloma groups. Fat quantification using the IDEAL sequence in MRI was significantly different when comparing patients with symptomatic myeloma and those with asymptomatic myeloma. The fat-signal fraction and β2-microglobulin-to-albumin ratio facilitated discrimination of symptomatic myeloma from non-symptomatic myeloma in patients without focal bone lesions. A new magnetic resonance technique (IDEAL) offers new insights in multiple myeloma. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Takasu, Miyuki; Tani, Chihiro; Sakoda, Yasuko; Ishikawa, Miho; Tanitame, Keizo; Date, Shuji; Akiyama, Yuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Graduate School of Biomedical Sciences, Hiroshimashi (Japan); Sakai, Akira [Hiroshima University, Department of Hematology and Oncology, Research Institute for Radiation Biology and Medicine, Hiroshimashi (Japan); Asaoku, Hideki [Hiroshima Red Cross Hospital and Atomic-bomb Survivors Hospital, Department of Hematology, Hiroshimashi (Japan); Kajima, Toshio [Kajima Clinic, Hiroshimaken (Japan)
2012-05-15
To evaluate the effectiveness of the iterative decomposition of water and fat with echo asymmetric and least-squares estimation (IDEAL) MRI to quantify tumour infiltration into the lumbar vertebrae in myeloma patients without visible focal lesions. The lumbar spine was examined with 3 T MRI in 24 patients with multiple myeloma and in 26 controls. The fat-signal fraction was calculated as the mean value from three vertebral bodies. A post hoc test was used to compare the fat-signal fraction in controls and patients with monoclonal gammopathy of undetermined significance (MGUS), asymptomatic myeloma or symptomatic myeloma. Differences were considered significant at P < 0.05. The fat-signal fraction and β2-microglobulin-to-albumin ratio were entered into the discriminant analysis. Fat-signal fractions were significantly lower in patients with symptomatic myelomas (43.9 ± 19.7%, P < 0.01) than in the other three groups. Discriminant analysis showed that 22 of the 24 patients (92%) were correctly classified into symptomatic or non-symptomatic myeloma groups. Fat quantification using the IDEAL sequence in MRI was significantly different when comparing patients with symptomatic myeloma and those with asymptomatic myeloma. The fat-signal fraction and β2-microglobulin-to-albumin ratio facilitated discrimination of symptomatic myeloma from non-symptomatic myeloma in patients without focal bone lesions. A new magnetic resonance technique (IDEAL) offers new insights in multiple myeloma. (orig.)
Rakhmatullina, E M; Sanam'ian, M F
2007-05-01
Cytogenetic analysis of M2 plants after irradiation of cotton by thermal neutrons was performed in 56 families. In 40 plants of 27 M2 families, different abnormalities of chromosome pairing were found. These abnormalities were caused by primary monosomy, chromosomal interchange, and desynapsis. The presence of chromosome aberrations in some cases decreased meiotic index and pollen fertility. Comparison of the results of cytogenetics analysis, performed in M1 and M2 after irradiation, showed a nearly two-fold decrease in the number of plants with chromosomal aberrations in M2, as well as narrowing of the spectrum of these aberrations. The latter result is explained by the fact that some mutations are impossible to detect in subsequent generations because of complete or partial sterility of aberrant M1 plants. It was established that the most efficient radiation doses for inducing chromosomal aberrations in the present study were 15 and 25 Gy, since they affected survival and fertility of altered plant to a lesser extent.
Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew
2009-01-01
Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation, or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of nonphotosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR_canopy) has been widely used in satellite-based production efficiency models to estimate GPP (as the product FAPAR_canopy × PAR × LUE_canopy, where LUE_canopy is light use efficiency at the canopy level). However, only the PAR absorbed by chlorophyll (the product FAPAR_chl × PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR_chl in estimating GPP (as the product FAPAR_chl × PAR × LUE_chl) are more likely to be consistent with plant photosynthesis processes.
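The canopy- and chlorophyll-level formulations differ only in which absorbed fraction and efficiency are multiplied together. A minimal sketch (all values illustrative, not from the study):

```python
def gpp(fapar, par, lue):
    """Production-efficiency-model form: GPP = FAPAR * PAR * LUE."""
    return fapar * par * lue

# Illustrative (made-up) values: the canopy absorbs 80% of incident PAR,
# but only 60% is absorbed by chlorophyll and actually drives photosynthesis.
par = 10.0                               # MJ m-2 d-1 of PAR
gpp_canopy = gpp(0.80, par, lue=1.5)     # canopy-level LUE, g C MJ-1
gpp_chl = gpp(0.60, par, lue=2.0)        # chlorophyll-level LUE, g C MJ-1
```

Both forms can yield the same GPP, but the smaller FAPAR_chl must be paired with a larger LUE_chl; the two efficiencies are therefore not interchangeable, which is the abstract's point about model consistency.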
Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee
2016-04-01
In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £ 35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset.
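The preposterior logic described above can be sketched for the simplest conjugate case, a normal prior on incremental net benefit with known sampling variance. All numbers are illustrative assumptions, not the trial's data, and the bivariate two-process structure is collapsed to a single process for brevity:

```python
import math
import random

random.seed(0)
# Toy value-of-information sketch (illustrative priors, not the paper's data).
mu0, sd0 = -500.0, 1500.0   # prior mean/SD of incremental net benefit (GBP)
sigma = 4000.0              # per-patient sampling SD (assumed known)
n, cost = 200, 20_000.0     # proposed sample size and its collection cost

# Conjugate update: posterior precision = prior precision + n/sigma^2.
v_post = 1.0 / (1.0 / sd0**2 + n / sigma**2)
# Preposterior: before sampling, the future posterior mean is distributed
# N(mu0, sd0^2 - v_post).
sd_pre = math.sqrt(sd0**2 - v_post)

# EVSI: expected gain from deciding with the posterior mean instead of now.
draws = [random.gauss(mu0, sd_pre) for _ in range(100_000)]
evsi = sum(max(m, 0.0) for m in draws) / len(draws) - max(mu0, 0.0)
engs = evsi - cost   # expected net gain of sampling (per decision unit)
```

In the paper's setting the same calculation is run for mixes of observations on the two costing processes, with the bivariate prior correlation determining how much a cheap process-B observation tells you about process A.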
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1 algorithm. An experimental chip has been designed that operates at 5 V...
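A software sketch of the underlying idea, estimating the delay of induced fluctuations with an LMS-adapted filter. The hardware details (switched-current circuits, multiplierless weights, the LMS1 variant) are not reproduced; the signal model below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
# Random "temperature fluctuation" signal and a delayed, noisy copy of it,
# standing in for upstream/downstream sensor readings in the flow channel.
delay = 7
x = rng.normal(size=5000)
d = np.concatenate([np.zeros(delay), x[:-delay]]) + 0.1 * rng.normal(size=5000)

# LMS-adapted FIR filter: after convergence the dominant tap index
# estimates the delay (and hence the flow, given the channel geometry).
taps, mu = 16, 0.01
w = np.zeros(taps)
for n in range(taps, len(x)):
    u = x[n - taps + 1:n + 1][::-1]   # most recent sample first: u[k] = x[n-k]
    e = d[n] - w @ u                   # prediction error
    w += mu * e * u                    # LMS weight update

estimated_delay = int(np.argmax(np.abs(w)))
```

With white input the converged weights approximate the cross-correlation between the two sensors, so the peak tap sits at the transit delay; this is the quantity the chip extracts in analogue hardware.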
Multi-directional program efficiency
DEFF Research Database (Denmark)
Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth
2016-01-01
approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop farms have the highest program efficiency, but the lowest managerial efficiency, and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency).
Institute of Scientific and Technical Information of China (English)
石鸟云; 周星
2012-01-01
This paper uses an SFA model to estimate the technical efficiency of 15 Chinese power enterprises, based on data from 2003 to 2009. The study reveals that the technical efficiency of the 15 Chinese power enterprises has been increasing every year, while the speed of increase has been slowing down year by year. The main constraining factors are technology level and management efficiency. The insignificant differences in technical efficiency among Chinese power enterprises weaken their motivation for innovation. The factors that affect the technical efficiency of Chinese power enterprises include internal transaction cost, human capital investment and specific capital investment. The factors affecting internal transaction cost consist of organization structure, work flow and incentive mechanism. The effect of human capital investment can be explained from the point of view of internal synergy effects and coordination cost.%利用SFA模型对我国15家电力企业2003-2009年的技术效率进行评价.研究发现:我国电力企业技术效率呈逐年递增的趋势,但增长速度逐年下降,技术水平和管理效率是制约技术效率提升的两大因素.电力企业之间技术效率差异较小,弱化了企业创新的动力.影响电力企业技术效率的因素主要包括内部交易成本、人力资本投资和专用性资本投资,其中影响我国电力企业内部交易成本的因素包括组织结构、业务流程和员工激励机制,人力资本投资对技术效率的影响可以从内部协同效应和协调成本的角度来解释.
The production factors efficiency estimation of the
Directory of Open Access Journals (Sweden)
Sergei A. Aivasian
2011-05-01
Full Text Available The paper describes the results of applying stochastic frontier methodology to analyse the influence of intellectual capital and other major production factors on quantitative indicators characterizing the competitiveness of a company.
Estimation of Line Efficiency by Aggregation
M.B.M. de Koster (René)
1987-01-01
Presents multi-stage flow lines with intermediate buffers approximated by two-stage lines using repeated aggregation. Characteristics of the aggregation method; problems associated with the analysis and design of production lines.
Virtual Sensors: Efficiently Estimating Missing Spectra
National Aeronautics and Space Administration — Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding...
DEFF Research Database (Denmark)
Lindström, Erik; Ionides, Edward; Frydendall, Jan;
2012-01-01
...Cramér-Rao efficient. The proposed estimator is easy to implement as it only relies on non-linear filtering. This makes the framework flexible, as it is easy to tune the implementation to achieve computational efficiency. This is done by using the approximation of the score function derived from the theory on Iterative...
Re-estimation and determinants of regional technical efficiency in China%我国区域技术效率的再估计及区位因素分析
Institute of Scientific and Technical Information of China (English)
岳意定; 刘贯春; 贺磊
2013-01-01
对CSSW、CSSG、BC及KSS四种经典随机前沿模型进行了比较分析,并基于此对我国区域1997-2010年间技术效率水平进行了再估计,进一步重点研究了区域技术效率的区位因素.研究发现:(1)格兰杰因果检验、Hausman-Wu检验及随机误差项正态分布检验结果共同显示,利用对数型柯布-道格拉斯生产函数对区域技术效率进行测算时,模型本身存在内生性,前人采用BC模型得到的结果可信度低下；考虑到在内生性及技术非效率项处理上的完备性,认为KSS模型测算结果相对更加可信.(2)无论是整体还是地区,平均技术效率均呈现涨跌互动的波动趋势,与前人得出的单调递增(减)结论差异显著；2008年之前,东部地区平均技术效率最高,中部次之,西部最低,2008年之后中部地区平均技术效率赶上东部,且两者之间差距呈现扩大趋势.(3)地理位置、财政科技投入、高科技产业规模、人口素质和外商直接投资是导致当前区域技术效率差异显著的关键因素.%Based on a comparison of the CSSW, CSSG, BC and KSS stochastic frontier estimators, this paper re-estimates regional technical efficiency in China over 1997-2010. In addition, we explain the differences in regional technical efficiency trends from the perspective of locational determinants. The main findings are: (1) Granger causality tests, Hausman-Wu tests and normality tests of the random error term jointly show that, when regional technical efficiency is measured with a log-form Cobb-Douglas production function, the model itself suffers from endogeneity, so the results of previous research based on the BC model are unreliable. Considering its superior treatment of technical inefficiency and endogeneity, the KSS estimator is relatively the most reliable. (2) Both global and regional average technical efficiencies fluctuate up and down, differing significantly from the monotonically increasing (or decreasing) conclusions of previous work. Prior to 2008, the technical efficiency of the eastern region was the highest, followed by the central and western regions, while the central
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
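A minimal sketch of the idea for a first-order reaction, where the rate constant can be estimated by linearising the model and applying least squares (synthetic, noise-free data; the chapter's own examples and collocation machinery are not reproduced):

```python
import math

# Synthetic concentration data for a first-order reaction A -> products,
# C(t) = C0 * exp(-k*t), with k = 0.3 chosen purely for illustration.
k_true, C0 = 0.3, 2.0
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
conc = [C0 * math.exp(-k_true * t) for t in times]

# Linearise: ln C = ln C0 - k*t, then ordinary least squares for the slope.
n = len(times)
y = [math.log(c) for c in conc]
t_bar = sum(times) / n
y_bar = sum(y) / n
slope = sum((t - t_bar) * (yi - y_bar) for t, yi in zip(times, y)) \
        / sum((t - t_bar) ** 2 for t in times)
k_hat = -slope
```

With noisy data or multi-step kinetics the linearisation is no longer adequate, which is where the nonlinear, maximum-likelihood and collocation-based approaches discussed in the chapter come in.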
Shrinkage estimators for covariance matrices.
Daniels, M J; Kass, R E
2001-12-01
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer parameters, such as compound symmetry or independence. However, these estimators will not be consistent unless the hypothesized structure is correct. If interest focuses on estimation of regression coefficients with correlated (or longitudinal) data, a sandwich estimator of the covariance matrix may be used to provide standard errors for the estimated coefficients that are robust in the sense that they remain consistent under misspecification of the covariance structure. With large matrices, however, the inefficiency of the sandwich estimator becomes worrisome. We consider here two general shrinkage approaches to estimating the covariance matrix and regression coefficients. The first involves shrinking the eigenvalues of the unstructured ML or REML estimator. The second involves shrinking an unstructured estimator toward a structured estimator. For both cases, the data determine the amount of shrinkage. These estimators are consistent and give consistent and asymptotically efficient estimates for regression coefficients. Simulations show the improved operating characteristics of the shrinkage estimators of the covariance matrix and the regression coefficients in finite samples. The final estimator chosen includes a combination of both shrinkage approaches, i.e., shrinking the eigenvalues and then shrinking toward structure. We illustrate our approach on a sleep EEG study that requires estimation of a 24 x 24 covariance matrix and for which inferences on mean parameters critically
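Both shrinkage ideas can be sketched in a few lines; the example below shrinks an unstructured sample covariance toward a scaled-identity target with an ad hoc weight (in the paper the data determine the amount of shrinkage), which raises the smallest eigenvalues and lowers the largest:

```python
import numpy as np

rng = np.random.default_rng(1)
# Small sample (n = 15) in 10 dimensions: the unstructured estimator is unstable.
p, n = 10, 15
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)              # unstructured ML-type estimator

# Shrink toward a structured target (scaled identity). The weight alpha is
# fixed here for illustration; data-driven choices are what the paper studies.
alpha = 0.5
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target

# Eigenvalue spread is reduced: smallest raised, largest lowered.
ev_raw = np.linalg.eigvalsh(S)
ev_shr = np.linalg.eigvalsh(S_shrunk)
print(ev_raw.min(), ev_shr.min(), ev_raw.max(), ev_shr.max())
```

Because the target shares the eigenvectors of S, each shrunk eigenvalue is a convex combination of the raw eigenvalue and the mean eigenvalue, which is exactly the stabilising effect described above.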
Parameter estimation in quantum optics
D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.
2000-01-01
We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration, as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and in the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Efficiency in Microfinance Cooperatives
Directory of Open Access Journals (Sweden)
HARTARSKA, Valentina
2012-12-01
Full Text Available In recognition of cooperatives’ contribution to the socio-economic well-being of their participants, the United Nations declared 2012 the International Year of Cooperatives. Microfinance cooperatives make up a large part of the microfinance industry. We study the efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ the classical efficiency analysis consisting of estimating a system of equations, and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency) as well as the dollar value of lending and deposits (sustainability). We find that microfinance cooperatives have increasing returns to scale, which means that the vast majority can lower costs if they become larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients, with a range from 40,000 to 180,000 borrowers.
Energy Technology Data Exchange (ETDEWEB)
Asociacion de Tecnicos y Profesionistas en Aplicacion Energetica, A.C. [Mexico (Mexico)
2002-06-01
In recent years much attention has been given to polluting gas emissions, especially those that contribute to the greenhouse effect (GHE), due to the negative effects their concentration causes in the atmosphere, particularly as a cause of the increase in the overall temperature of the planet, which has been termed global climate change. Many activities make it possible to lessen or avoid GHE gas emissions, and the main ones have been structured into the so-called Energy Efficiency and Renewable Energy (EE/RE) projects. In order to carry out a project within the framework of the MDL (the Spanish acronym for the Clean Development Mechanism), it is necessary to evaluate with quality, precision and transparency the amount of GHE gas emissions that are reduced or suppressed thanks to its application. For that reason, in our country we tried different methodologies aimed at estimating the CO{sub 2} emissions that are attenuated or eliminated by means of the application of EE/RE projects.
Distribution system state estimation
Wang, Haibin
With the development of automation in distribution systems, distribution SCADA and many other automated meters have been installed on distribution systems. Distribution Management Systems (DMS) have also been further developed and become more sophisticated, so it is possible and useful to apply state estimation techniques to distribution systems. However, distribution systems have many features that differ from transmission systems, so the state estimation technology used in transmission systems cannot be applied directly to distribution systems. This project's goal was to develop a state estimation algorithm suitable for distribution systems. Because of the limited number of real-time measurements in distribution systems, the state estimator cannot acquire enough real-time measurements for convergence, so pseudo-measurements are necessary for a distribution system state estimator. A load estimation procedure is proposed which can provide estimates of real-time customer load profiles, which can be treated as pseudo-measurements for the state estimator. The algorithm utilizes a newly installed AMR system to calculate more accurate load estimates. A branch-current-based three-phase state estimation algorithm is developed and tested. This method chooses the magnitude and phase angle of the branch current as the state variables, and thus makes the formulation of the Jacobian matrix less complicated. The algorithm decouples the three phases, which is computationally efficient. Additionally, the algorithm is less sensitive to the line parameters than node-voltage-based algorithms. The algorithm has been tested on three IEEE radial test feeders, for both accuracy and convergence speed. Due to economic constraints, the number of real-time measurements that can be installed on distribution systems is limited, so it is important to decide what kinds of measurement devices to install and where to install them. Some rules of meter placement based
Institute of Scientific and Technical Information of China (English)
蒋磊; 杨雨亭; 尚松浩
2013-01-01
To evaluate the irrigation efficiency of irrigation districts in arid regions where crop growth relies heavily on irrigation, a new evaluation indicator, the coefficient of irrigation water effective utilization (ηe), was proposed. The evapotranspiration of irrigated land during the crop growing season minus precipitation was taken as the effective use of irrigation water, and ηe was defined as the ratio of this effective use to the net water diversion of the district (gross diversion minus return and drainage flows). Estimating the evapotranspiration of irrigated land with a remote sensing model, which in recent decades has become able to capture spatial and temporal variations of evapotranspiration with acceptable precision, avoids the difficulty in traditional assessments of accurately estimating the amount of irrigation water that reaches the crop root zone. Taking the Hetao Irrigation District as the study area, the evapotranspiration of irrigated land during the crop growing season was computed with the SEBAL (surface energy balance algorithm for land) model and combined with observed precipitation and net diversion to analyze and evaluate ηe for 2000-2010, the period since water-saving retrofitting began. The results show that ηe has trended upward in recent years and that it increases as precipitation and net diversion decrease; reducing the water supply affected ηe more than the district's water-saving engineering works did. Moreover, the evapotranspiration of irrigated land remained fairly stable while net diversion declined, reflecting the good effect of the recent water-saving retrofitting.
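Assuming the definition above, ηe is simple arithmetic; the numbers below are invented for illustration and are not the Hetao measurements:

```python
# Coefficient of irrigation water effective utilization:
#   eta_e = (ET_irrigated - P) / (gross diversion - return flow)
# All quantities expressed in mm equivalent over the irrigated area (hypothetical).
et_growing_season = 620.0   # evapotranspiration of irrigated land, mm
precipitation = 160.0       # mm
gross_diversion = 900.0     # mm equivalent
return_flow = 120.0         # drainage / return water, mm equivalent

effective_use = et_growing_season - precipitation
net_diversion = gross_diversion - return_flow
eta_e = effective_use / net_diversion
print(f"eta_e = {eta_e:.3f}")
```

The formula makes explicit why ηe rises when precipitation or net diversion falls, as reported above.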
Energy Technology Data Exchange (ETDEWEB)
Perez-Comas, Jose A.; Skalski, John R. (University of Washington, School of Fisheries, Seattle, WA)
2000-07-01
With the advent of the installation of a PIT-tag interrogation system in the Cascades Island fish ladder at Bonneville Dam, this report provides guidance on the anticipated precision of salmonid estuarine and marine survival estimates for various levels of system-wide adult detection probability at Bonneville Dam. Precision was characterized by the standard error and the coefficient of variation of the survival estimates. The anticipated precision of salmonid estuarine and marine survival estimates was directly proportional to the number of PIT-tagged smolts released and to the system-wide adult detection efficiency at Bonneville Dam, as well as to the in-river juvenile survival above Lower Granite Dam. Moreover, for a given release size and system-wide adult detection efficiency, higher estuarine and marine survival also produced more precise survival estimates. With a system-wide detection probability of P{sub BA} = 1 at Bonneville Dam, the anticipated CVs for the estuarine and marine survival ranged between 41 and 88% with release sizes of 10,000 smolts. Only with 55,000 smolts released from sites close to Lower Granite Dam, and under high estuarine and marine survival, could CVs of 20% be attained with system detection efficiencies of less than perfect detection (i.e., P{sub BA} < 1).
Institute of Scientific and Technical Information of China (English)
苏芮; 陈亚宁; 张燕; 李卫红; 冷超
2011-01-01
Water footprint based on virtual water is a comprehensive index and gives a true reflection of the share of human consumption; it effectively measures human consumption of water resources. This paper investigated the virtual water consumption structure by using the water footprint, and estimated water efficiency through calculations of the hydrological water scarcity index, the social water scarcity index and the degree of intensive water use in cities and rural areas of Xinjiang from 1995 to 2007. The results indicated a 45.4% increase in the total water footprint, rising from 108.95×10^8 m3 in 1995 to 158.40×10^8 m3 in 2007. The average diversity index of virtual water consumption in cities exceeded that in rural areas by 0.69, suggesting a more reasonable virtual water consumption structure in cities. The increasing diversity index of consumption across different social groups indicated gradual diversification of citizens' consumables and improvement in diet structure. The degree of intensive water use increased from 7.58 in 1995 to 22.22 in 2007, indicating marked improvement in water use efficiency.
Energy Technology Data Exchange (ETDEWEB)
Perez-Comas, Jose A.; Skalski, John R. (University of Washington, School of Fisheries, Seattle, WA)
2000-07-01
With the advent of the installation of a PIT-tag interrogation system in the Cascades Island fish ladder at Bonneville Dam, this report provides guidance on the anticipated precision of in-river survival estimates for returning adult salmonids, between Bonneville and Lower Granite dams, for various levels of system-wide adult detection probability at Bonneville Dam. Precision was characterized by the standard error and the coefficient of variation of the survival estimates. The anticipated precision of in-river survival estimates for returning adult salmonids was directly proportional to the number of PIT-tagged smolts released and to the system-wide adult detection efficiency at Bonneville Dam, as well as to the in-river juvenile survival above Lower Granite Dam. Moreover, for a given release size and system-wide adult detection efficiency at Bonneville Dam, higher estuarine and marine survival rates also produced more precise survival estimates. With a system-wide detection probability of P{sub BA} = 1 at Bonneville Dam, the anticipated CVs for the in-river survival estimate ranged between 9.4 and 20% with release sizes of 10,000 smolts. Moreover, if the system-wide adult detection efficiency at Bonneville Dam is less than maximum (i.e., P{sub BA} < 1), precision of CV {le} 20% could still be attained. For example, for releases of 10,000 PIT-tagged fish, a CV of 20% in the estimates of in-river survival for returning adult salmon could be reached with system-wide detection probabilities of 0.2 {le} P{sub BA} {le} 0.6, depending on the tagging scenario.
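A much-simplified single-stage sketch (not the report's multi-dam release-recapture model) shows how the CV of a survival estimate scales with release size and detection probability, assuming adult detections are binomial with a known detection probability:

```python
import math

def cv_survival(release, survival, detect_p):
    # s_hat = x / (release * detect_p), with x ~ Binomial(release, survival * detect_p).
    # Returns the coefficient of variation of s_hat under this toy model.
    q = survival * detect_p
    var_shat = release * q * (1.0 - q) / (release * detect_p) ** 2
    return math.sqrt(var_shat) / survival

# Larger releases and higher detection efficiency both tighten the estimate.
for release, p in [(10000, 1.0), (55000, 1.0), (10000, 0.4)]:
    print(release, p, round(cv_survival(release, 0.03, p), 3))
```

The toy CVs are not the report's values (the real model chains juvenile survival, estuarine/marine survival and adult detection), but the monotonic dependence on release size and P{sub BA} matches the pattern described above.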
Directory of Open Access Journals (Sweden)
Francisco Cobos
2007-01-01
Full Text Available OSIRIS, the main optical (360-1000 nm) 1st-generation instrument for GTC, is being integrated. Except for some grisms and filters, all main optical components are finished and being characterized. Complementing laboratory data with semi-empirical estimations, the current OSIRIS efficiency is summarized.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...
Attitude Estimation or Quaternion Estimation?
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
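A small sketch of the quaternion side of this: converting a four-component quaternion to its rotation matrix in SO(3), with the unit-norm constraint enforced by normalisation (the [x, y, z, w] component ordering is a convention chosen here, not taken from the paper):

```python
import numpy as np

def quat_to_rotation(q):
    """Unit quaternion [x, y, z, w] -> 3x3 rotation matrix in SO(3)."""
    x, y, z, w = q / np.linalg.norm(q)   # enforce the unit-norm constraint
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

q = np.array([0.1, 0.2, 0.3, 0.9])       # any nonzero 4-vector works after normalisation
R = quat_to_rotation(q)
print(np.allclose(R @ R.T, np.eye(3)))   # orthogonal
print(np.isclose(np.linalg.det(R), 1.0)) # unit determinant
```

The redundancy the abstract discusses is visible here: q and -q map to the same R, and the norm constraint is what a quaternion estimator must either enforce or ignore.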
Isotonic inverse estimators for nonparametric deconvolution
van Es, B.; Jongbloed, G.; M. Van Zuijlen
1998-01-01
A new nonparametric estimation procedure is introduced for the distribution function in a class of deconvolution problems, where the convolution density has one discontinuity. The estimator is shown to be consistent and its cube root asymptotic distribution theory is established. Known results on the minimax risk for the estimation problem indicate the estimator to be efficient.
Institute of Scientific and Technical Information of China (English)
成林; 方文松
2015-01-01
Investigating the influence of climate change on the water use efficiency (WUE) of rain-fed winter wheat can offer a scientific reference for agricultural adaptation to climate change. Based on yield information and observed soil water data at representative stations, the historical trend of WUE is analyzed. Simulation models for meteorological yield and soil water variation are established, and four different climate change scenarios, output by the regional climate models PRECIS and REGCM 4.0, are combined to estimate the probable trend of WUE for rain-fed wheat in the future years of 2021-2050. It is validated that, in the baseline scenario years, yields simulated by combining the two regional climate models with the meteorological yield simulation model are close to actual values, so the method for estimating future wheat yield is proved feasible. Data analysis shows that the average yield for representative stations varies as a cubic curve during the last 30 years of 1981-2010, growing faster before the year 2000. Water consumption of wheat also increases with fluctuations. The average WUE values of rain-fed wheat for representative stations in Gansu, Shanxi and Henan are 13.19 kg·mm-1·hm-2, 12.86 kg·mm-1·hm-2 and 11.28 kg·mm-1·hm-2, respectively. The trend of WUE is similar to a quadratic curve, with the maximum value appearing in 2003. Estimation under the four climate change scenarios shows that in 2021-2050 the water consumption of winter wheat would increase dramatically, by 6.2% on average across all representative stations and scenarios. Future yields would decrease at some stations and increase at others, with an average variation rate of 1.4%. The value of WUE would decrease by 3.8% on average, and its variability would also decrease. The increase in water consumption would be the main cause of the future WUE decrease. From the inter
Channel estimation in TDD mode
Institute of Scientific and Technical Information of China (English)
ZHANG Yi; GU Jian; YANG Da-cheng
2006-01-01
An efficient solution is proposed in this article for channel estimation in time division duplex (TDD) mode wireless communication systems. In the proposed solution, the characteristics of fading channels in TDD mode systems are fully exploited to estimate the path delay of the fading channel. The corresponding amplitude is estimated using the minimum mean square error (MMSE) criterion. As a result, it is shown that the proposed novel solution is more accurate and efficient than the traditional solution, and the improvement is beneficial to the performance of Joint Detection.
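The MMSE amplitude step can be illustrated with a toy single-tap pilot model (a generic sketch, not the paper's TDD path-delay method): an MMSE weight shrinks the least-squares estimate and lowers its mean squared error:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy pilot model y = h*x + n with known pilot x; estimate the tap amplitude h.
trials, x = 20000, 1.0
var_h, var_n = 1.0, 0.5
h = rng.normal(0.0, np.sqrt(var_h), trials)
y = h * x + rng.normal(0.0, np.sqrt(var_n), trials)

h_ls = y / x                              # least squares: unbiased but noisy
w = var_h / (var_h + var_n / x**2)        # MMSE shrinkage weight (needs the statistics)
h_mmse = w * h_ls

mse_ls = np.mean((h_ls - h) ** 2)
mse_mmse = np.mean((h_mmse - h) ** 2)
print(mse_ls, mse_mmse)                   # MMSE is never worse on average
```

The price of the improvement is that the MMSE weight requires knowledge (or estimates) of the channel and noise statistics, which the TDD setting helps provide.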
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue in focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed mode. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
Two New Relative Efficiencies of the Parameter Estimate in the Growth Curve Model
Institute of Scientific and Technical Information of China (English)
段清堂; 归庆明
2001-01-01
This paper proposes two new definitions of the relative efficiency of parameter estimates in the growth curve model, gives lower bounds for the two new relative efficiencies, and discusses the relations between them.
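Relative efficiency of two estimators is the ratio of their variances; a toy simulation (mean vs. median under normality, not the growth-curve setting of the paper) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(3)
# Relative efficiency of estimator B vs estimator A = Var(A) / Var(B).
# Illustration: sample mean vs sample median for normal data.
reps, n = 5000, 101
samples = rng.standard_normal((reps, n))
var_mean = np.var(samples.mean(axis=1))
var_median = np.var(np.median(samples, axis=1))
rel_eff = var_mean / var_median       # < 1: the median is less efficient here
print(f"relative efficiency of median vs mean: {rel_eff:.2f}")  # typically near 2/pi
```

The paper's contribution is to define such ratios for growth-curve parameter estimates and to bound them from below; the simulation only shows what the number means.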
Estimation of Tobit Type Censored Demand Systems
DEFF Research Database (Denmark)
Barslund, Mikkel Christoffer
Recently a number of authors have suggested estimating censored demand systems as a system of multivariate Tobit equations, employing a Quasi Maximum Likelihood (QML) estimator based on bivariate Tobit models. In this paper I study the efficiency of this QML estimator relative to the asymptotically more efficient Simulated ML (SML) estimator in the context of a censored Almost Ideal demand system. Further, a simpler QML estimator based on the sum of univariate Tobit models is introduced. A Monte Carlo simulation comparing the three estimators is performed on three different sample sizes. The results support the use of simple estimators for more general censored systems of equations.
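A univariate Tobit building block of the kind the simpler QML estimator sums over can be sketched as a censored-at-zero likelihood; the model, sample size and grid search below are invented for illustration:

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(4)
# Hypothetical censored-at-zero model: y* = b*x + e,  y = max(y*, 0).
b_true, sigma_true, n = 1.5, 1.0, 2000
x = rng.uniform(0.0, 2.0, n)
y = np.maximum(b_true * x + rng.normal(0.0, sigma_true, n), 0.0)
cens = y == 0.0

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def negloglik(b, s):
    mu = b * x
    # Censored points contribute P(y* <= 0) = Phi(-mu/s); the rest the normal density.
    ll_c = sum(np.log(max(norm_cdf(-m / s), 1e-300)) for m in mu[cens])
    r = y[~cens] - mu[~cens]
    ll_o = np.sum(-0.5 * np.log(2 * pi * s * s) - r * r / (2 * s * s))
    return -(ll_c + ll_o)

# A coarse grid search keeps the sketch dependency-free (no optimiser needed).
b_hat, s_hat = min(((b, s) for b in np.linspace(1.0, 2.0, 21)
                           for s in np.linspace(0.5, 1.5, 21)),
                   key=lambda p: negloglik(*p))
print(b_hat, s_hat)
```

The QML estimators in the paper stack such marginal (or bivariate) likelihoods across equations instead of evaluating the full multivariate censored likelihood.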
Efficient Computation Of Confidence Intervals Of Parameters
Murphy, Patrick C.
1992-01-01
Study focuses on obtaining an efficient algorithm for estimating confidence intervals of ML estimates. Four algorithms were selected to solve the associated constrained optimization problem. Hybrid algorithms, combining search and gradient approaches, proved best.
Improving efficiency in stereology
DEFF Research Database (Denmark)
Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm;
2013-01-01
The aim of the study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides, compared with traditional methods, in a practical application: the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned, and a proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than the disector on tissue slides. We conclude that both the proportionator and the autodisector on virtual slides may improve the efficiency of cell counting in stereology.
Measuring Residential Energy Efficiency Improvements with DEA
Grösche, Peter
2008-01-01
This paper measures energy efficiency improvements of US single-family homes between 1997 and 2001 using a two-stage procedure. In the first stage, an indicator of energy efficiency is derived by means of Data Envelopment Analysis (DEA), and the analogy between the DEA estimator and traditional measures of energy efficiency is demonstrated. The second stage employs a bootstrapped truncated regression technique to decompose the variation in the obtained efficiency estimates into a climatic com...
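In the trivial one-input, one-output case with constant returns to scale, the DEA efficiency score reduces to each unit's output/input ratio divided by the best ratio in the sample; the numbers below are made up, and the paper's actual first stage uses full multi-dimensional DEA:

```python
import numpy as np

# Hypothetical single-family homes: one input (energy use) and one output
# (heated floor area). Under constant returns to scale, DEA efficiency in this
# degenerate case is each unit's ratio relative to the frontier unit.
energy_use = np.array([120.0, 95.0, 150.0, 80.0])   # input, e.g. MBtu/year
service = np.array([60.0, 55.0, 66.0, 52.0])        # output, e.g. heated m2

ratio = service / energy_use
efficiency = ratio / ratio.max()      # the frontier unit scores exactly 1.0
print(np.round(efficiency, 3))
```

With multiple inputs and outputs the score instead comes from a linear program per unit, and the paper's second stage then regresses these scores on explanatory (e.g. climatic) variables with a bootstrapped truncated regression.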
Bank Efficiency and Executive Compensation
Timothy King; Jonathan Williams
2013-01-01
We investigate whether handsomely rewarding bank executives realizes superior efficiency, by determining if executive remuneration contracts produce incentives that offset potential agency problems and lead to improvements in bank efficiency. We calculate executive Delta and Vega to proxy executives' risk-taking following changes in their compensation contracts, and estimate their relationship with alternative profit efficiency. Our study uses novel instruments to account for the potentially e...
Institute of Scientific and Technical Information of China (English)
刘承彬; 耿也; 舒奎; 高真香子
2012-01-01
RSA algorithms play an important role in public-key cryptography, and their computational efficiency is directly tied to the efficiency of the modular exponentiation implementation. Working from the RSA decryption algorithm simplified with the Chinese Remainder Theorem, this paper gives a general decryption formula for the case of multiple primes, which greatly reduces the amount of modular exponentiation and recovers the plaintext quickly and simply. A formula for estimating the efficiency gain is also given, so that the speed-up can be calculated by estimation, providing a basis for deciding how many primes are most appropriate for RSA.
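A toy sketch of multi-prime RSA decryption with CRT (tiny primes for illustration only; the paper's general formula and efficiency estimate are not reproduced here):

```python
# Multi-prime RSA with k = 3 primes. Decryption does one small modular
# exponentiation per prime, then a CRT recombination, instead of one big
# exponentiation mod n. Real keys use large primes; these are toys.
primes = [101, 103, 107]
n = 101 * 103 * 107
e = 7                                  # public exponent, coprime to phi
phi = 100 * 102 * 106
d = pow(e, -1, phi)                    # private exponent (Python 3.8+ modular inverse)

def crt_decrypt(c, primes, d):
    # Per-prime exponents are reduced mod (p - 1) by Fermat's little theorem.
    residues = [pow(c % p, d % (p - 1), p) for p in primes]
    m, modulus = 0, 1
    for p, r in zip(primes, residues):
        t = ((r - m) * pow(modulus, -1, p)) % p   # Garner-style recombination
        m += modulus * t
        modulus *= p
    return m

msg = 424242
c = pow(msg, e, n)
print(crt_decrypt(c, primes, d))       # recovers 424242
```

The speed-up the paper quantifies comes from the exponent and modulus in each `pow` being roughly 1/k the size of d and n, and exponentiation cost growing faster than linearly in operand size.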
DEFF Research Database (Denmark)
Andersen, Rikke Sand; Vedsted, Peter
2015-01-01
Drawing on institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that provision of medical care in today's primary care settings requires careful balancing of increasing demands of efficiency, greater complexity of biomedical knowledge and consideration for individual patient needs.
Katsuya Takii
2004-01-01
This paper examines a particular aspect of entrepreneurship, namely firms' ability to respond appropriately to unexpected changes in the environment (i.e., their adaptability). An increase in firms' adaptability improves allocative efficiency in a competitive economy, but can reduce it when opportunities are distorted. It is shown that adaptability can aggravate distortions in the presence of political risk. Because efficiency affects the total factor productivity (TFP) of an economy, the mod...
Environmental Efficiency Analysis of China's Vegetable Production
Institute of Scientific and Technical Information of China (English)
TAO ZHANG; BAO-DI XUE
2005-01-01
Objective To analyze and estimate the environmental efficiency of China's vegetable production. Methods The stochastic translog frontier model was used to estimate the technical efficiency of vegetable production. Based on the estimated frontier and technical inefficiency levels, we used the method developed by Reinhard et al. [1] to estimate the environmental efficiency. Pesticide and chemical fertilizer inputs were treated as environmentally detrimental inputs. Results From the estimated results, the mean environmental efficiency for pesticide input was 69.7%, indicating a great potential for reducing pesticide use in China's vegetable production. In addition, substitution and output elasticities for vegetable farms were estimated to provide farmers with helpful information on how to reallocate input resources and improve efficiency. Conclusion There exists a great potential for reducing pesticide use in China's vegetable production.
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially...
DEFF Research Database (Denmark)
Stoustrup, Jakob; Niemann, H.
2002-01-01
This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.
Estimating Functions and Semiparametric Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
1996-01-01
... of the attainability of the bounds for the concentration of regular asymptotic linear estimating sequences by estimators derived from estimating functions. The main class of models considered in the second part of the thesis (chapter 5) are constructed by assuming that the expectation of a number of given square... is developed in detail in the context of semiparametric models. There does not, to the knowledge of the author, exist any such systematic treatment of estimating functions for semiparametric models in the literature. The second part studies some classes of semiparametric models described below. The material contained in this part of the thesis constitutes an original contribution. There can be found the detailed characterization of the class of regular estimating functions, a calculation of efficient regular asymptotic linear estimating sequences (i.e. the classical optimality theory) and a discussion...
Energy Technology Data Exchange (ETDEWEB)
Schwickerath, Ulrich; Silva, Ricardo; Uria, Christian, E-mail: Ulrich.Schwickerath@cern.c, E-mail: Ricardo.Silva@cern.c [CERN IT, 1211 Geneve 23 (Switzerland)
2010-04-01
A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both things, the box usage and the efficiency of individual user jobs, need to be closely monitored so that the sources of the inefficiencies can be identified. At CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step for improvements CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.
HOQUE, Md. Azharul / SUZUKI, Keiichi / OIKAWA, Takuro
2007-01-01
A simulation study was performed for performance traits on 740 bulls and carcass traits on 1,774 progeny in Japanese Black cattle to compare the efficiency of direct and index selection. Performance traits included average daily gain (ADG), final body weight (BWF), metabolic body weight (MWT), feed intake (FI), feed conversion ratio (FCR) and residual feed intake (RFI). Progeny traits were carcass weight (CWT), rib eye area (REA), rib thickness (RBT), subcutaneous fat thickness (SFT), marblin...
Power Quality Indices Estimation Platform
Directory of Open Access Journals (Sweden)
Eliana I. Arango-Zuluaga
2013-11-01
Full Text Available An interactive platform for estimating power quality indices in single-phase electric power systems is presented. It meets the IEEE 1459-2010 standard recommendations. The platform was developed to support teaching and research activities in electric power quality. It estimates the power quality indices from voltage and current signals using three different algorithms based on the fast Fourier transform (FFT), the wavelet packet transform (WPT) and the least squares method. The results show that the implemented algorithms estimate the power quality indices efficiently and that the platform can be used according to the established objectives.
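As a rough illustration of the FFT route to such indices, the sketch below computes two of the simplest ones, RMS and total harmonic distortion (THD), from a sampled waveform. This is only a small subset of IEEE 1459-2010 (which also defines active, reactive and distortion powers), and the function names and defaults are illustrative, not from the platform described above.

```python
import numpy as np

def rms_and_thd(signal, fs, f0=60.0, n_harmonics=10):
    """Estimate RMS and total harmonic distortion (THD) of a periodic
    signal from its FFT. Assumes the record spans an integer number of
    fundamental periods so harmonics fall on FFT bins."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n          # single-sided, scaled
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def harmonic_amplitude(k):
        # Peak amplitude of the k-th harmonic (nearest FFT bin).
        idx = np.argmin(np.abs(freqs - k * f0))
        return 2.0 * np.abs(spectrum[idx])

    fundamental = harmonic_amplitude(1)
    harmonics = [harmonic_amplitude(k) for k in range(2, n_harmonics + 1)]
    thd = np.sqrt(sum(a * a for a in harmonics)) / fundamental
    rms = np.sqrt(np.mean(signal ** 2))
    return rms, thd
```

For a pure 60 Hz sine of amplitude 1 sampled coherently, this yields RMS near 0.707 and THD near zero; adding a 10% third harmonic pushes the THD to about 0.1.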
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computation schemes using combinatorial properties of generating functions. We demonstrate our approach with several case studies involving real world movie recommendation data. The results are comparable with state-of-the-art techniques while also providing probabilistic preference estimates outside the scope of traditional recommender systems.
Joint DOA and DOD Estimation in Bistatic MIMO Radar without Estimating the Number of Targets
Directory of Open Access Journals (Sweden)
Zaifang Xi
2014-01-01
established without prior knowledge of the signal environment. In this paper, an efficient method for joint DOA and DOD estimation in bistatic MIMO radar without estimating the number of targets is presented. The proposed method computes an estimate of the noise subspace using the power of R (POR) technique. Then the two-dimensional (2D) direction finding problem is decoupled into two successive one-dimensional (1D) angle estimation problems by employing the rank reduction (RARE) estimator.
Efficient ICT for efficient smart grids
Smit, Gerard J.M.
2012-01-01
In this extended abstract the need for efficient and reliable ICT is discussed. Efficiency of ICT not only deals with energy-efficient ICT hardware, but also deals with efficient algorithms, efficient design methods, efficient networking infrastructures, etc. Efficient and reliable ICT is a prerequisite for efficient Smart Grids. Unfortunately, efficiency and reliability have not always received the proper attention in the ICT domain in the past.
Efficiency of Hospitals in the Czech Republic: Conditional Efficiency Approach
Šťastná Lenka; Votápková, Jana
2014-01-01
The paper estimates cost efficiency of 81 general hospitals in the Czech Republic during 2006-2010. We employ the conditional order-m approach which is a nonparametric method for efficiency computation accounting for environmental variables. Effects of environmental variables are assessed using the non-parametric significance test and partial regression plots. We find not-for-profit ownership and a presence of a specialized center in a hospital to be detrimental to hospital performance in the...
Golbabaei-Asl, Mona; Knight, Doyle; Anderson, Kellie; Wilkinson, Stephen
2013-01-01
A novel method for determining the thermal efficiency of the SparkJet is proposed. A SparkJet is attached to the end of a pendulum. The motion of the pendulum subsequent to a single spark discharge is measured using a laser displacement sensor. The measured displacement vs time is compared with the predictions of a theoretical perfect gas model to estimate the fraction of the spark discharge energy which results in heating the gas (i.e., increasing the translational-rotational temperature). The results from multiple runs for different capacitances of c = 3, 5, 10, 20, and 40 micro-F show that the thermal efficiency decreases with higher capacitive discharges.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2016-01-01
This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.
The incredible shrinking covariance estimator
Theiler, James
2012-05-01
Covariance estimation is a key step in many target detection algorithms. To distinguish target from background requires that the background be well-characterized. This applies to targets ranging from the precisely known chemical signatures of gaseous plumes to the wholly unspecified signals that are sought by anomaly detectors. When the background is modelled by a (global or local) Gaussian or other elliptically contoured distribution (such as Laplacian or multivariate-t), a covariance matrix must be estimated. The standard sample covariance overfits the data, and when the training sample size is small, the target detection performance suffers. Shrinkage addresses the problem of overfitting that inevitably arises when a high-dimensional model is fit from a small dataset. In place of the (overfit) sample covariance matrix, a linear combination of that covariance with a fixed matrix is employed. The fixed matrix might be the identity, the diagonal elements of the sample covariance, or some other underfit estimator. The idea is that the combination of an overfit with an underfit estimator can lead to a well-fit estimator. The coefficient that does this combining, called the shrinkage parameter, is generally estimated by some kind of cross-validation approach, but direct cross-validation can be computationally expensive. This paper extends an approach suggested by Hoffbeck and Landgrebe, and presents efficient approximations of the leave-one-out cross-validation (LOOC) estimate of the shrinkage parameter used in estimating the covariance matrix from a limited sample of data.
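The shrinkage-and-cross-validation recipe described above can be sketched in a few lines. Note this computes the leave-one-out Gaussian score directly (the paper's contribution is precisely a fast approximation of that score); the grid of shrinkage values and the diagonal shrinkage target are arbitrary illustrative choices.

```python
import numpy as np

def shrinkage_covariance(X, alphas=np.linspace(0.0, 1.0, 21)):
    """Shrink the sample covariance toward its diagonal, choosing the
    shrinkage parameter alpha by a naive leave-one-out Gaussian
    log-likelihood score. X is an (n, d) array of samples."""
    n, _ = X.shape
    best_alpha, best_score = 0.0, -np.inf
    for alpha in alphas:
        score = 0.0
        for i in range(n):
            Xi = np.delete(X, i, axis=0)          # hold out sample i
            mu = Xi.mean(axis=0)
            S = np.cov(Xi, rowvar=False, bias=True)
            C = (1.0 - alpha) * S + alpha * np.diag(np.diag(S))
            r = X[i] - mu
            _, logdet = np.linalg.slogdet(C)
            score += -0.5 * (logdet + r @ np.linalg.solve(C, r))
        if score > best_score:
            best_alpha, best_score = alpha, score
    S = np.cov(X, rowvar=False, bias=True)
    return (1.0 - best_alpha) * S + best_alpha * np.diag(np.diag(S)), best_alpha
```

The O(n) covariance refits per grid point are exactly the cost the efficient LOOC approximations are designed to avoid.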
Institute of Scientific and Technical Information of China (English)
LIAO Yu-Iin; ZHENG Sheng-xian; RONG Xiang-min; LIU Qiang; FAN Mei-rong
2010-01-01
A pot experiment combined with 15N isotope techniques was conducted to evaluate the effects of varying rates of urea-N fertilizer application on the yields, quality, and nitrogen use efficiency (NUE) of pakchoi cabbage (Brassica chinensis L.) and asparagus lettuce (Lactuca sativa L.). 15N-labelled urea (5.35 15N atom%) was added to pots with 6.5 kg soil at 0.14, 0.18, 0.21, 0.25, and 0.29 g N/kg soil, and applied in two splits: 60 percent as basal dressing in the mixture and 40 percent as topdressing. The fresh yields of the two vegetable species increased with increasing input of urea-N, but there was a significant quadratic relationship between the dose of urea-N fertilizer application and the fresh yields. When the dosage of urea-N fertilizer reached a certain value, nitrate readily accumulated in the two kinds of plants due to the decrease in NR activity; furthermore, there was a negative linear correlation between nitrate content and NR activity. With increasing input of urea-N, ascorbic acid and soluble sugar initially increased and then declined, and crude fiber rapidly decreased. Total absorbed N (TAN), N derived from fertilizer (Ndff), and N derived from soil (Ndfs) increased, and the ratio of Ndff to TAN also increased, but the ratio of Ndfs to TAN as well as the NUE of urea-N fertilizer decreased with increasing input of urea-N. These results suggested that the increasing application of labeled N fertilizer led to the increase in unlabeled N (namely, Ndfs), presumably due to the "added nitrogen interaction" (ANI); that the decrease in NUE of urea-N fertilizer may be due to fertilization beyond the levels of plant requirements and to the ANI; and that the decrease in the two vegetable yields with increasing addition of urea-N was possibly because excess accumulation of nitrate reached a toxic level.
Technical efficiency of thermoelectric power plants
Energy Technology Data Exchange (ETDEWEB)
Barros, Carlos Pestana [Instituto de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Peypoch, Nicolas [GEREM, LAMPS, IAE, Universite de Perpignan Via Domitia, 52 avenue Paul Alduy, F-66860 Perpignan (France)
2008-11-15
This paper analyses the technical efficiency of Portuguese thermoelectric power generating plants with a two-stage procedure. In the first stage, the plants' relative technical efficiency is estimated with DEA (data envelopment analysis) to establish which plants perform most efficiently. These plants could serve as peers to help improve performance of the least efficient plants. The paper ranks these plants according to their relative efficiency for the period 1996-2004. In a second stage, the Simar and Wilson [Simar, L., Wilson, P.W., 2007. Estimation and inference in two-stage, semi-parametric models of production processes. Journal of Econometrics 136, 1-34] bootstrapped procedure is adopted to estimate the efficiency drivers. Economic implications arising from the study are considered. (author)
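The first stage of such a study reduces to one small linear program per plant. The sketch below is a textbook input-oriented, constant-returns-to-scale (CCR) DEA model, not necessarily the exact specification of the paper, and it omits the second-stage Simar-Wilson bootstrap entirely.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.
    X: (n, m) input matrix, Y: (n, s) output matrix.
    For DMU o: minimize theta subject to
      sum_j lambda_j * x_j <= theta * x_o,  sum_j lambda_j * y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # variables: theta, lambdas
        A_in = np.c_[-X[o][:, None], X.T]           # inputs within theta*x_o
        A_out = np.c_[np.zeros((s, 1)), -Y.T]       # outputs at least y_o
        res = linprog(c,
                      A_ub=np.r_[A_in, A_out],
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

A plant with score 1 lies on the frontier (a potential peer); a score of, say, 0.5 means the same output is attainable with half the inputs of some convex combination of peers.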
A Practical Method to Estimate Entrepreneurship's Reward
Georgiou, Miltiades N.
2005-01-01
In the present note, an effort is made to contribute to economic theory by introducing a practical method to estimate entrepreneurship's reward. As an example, a regression based on the estimated entrepreneurship's reward, using banking panel data, yields the same main result as the article "Governance Structures, Efficiency and Firm Profitability" by E. E. Lehmann, S. Warning and J. Weigand, MPI: firms with more efficient governance have higher profitability.
Motor-operated gearbox efficiency
Energy Technology Data Exchange (ETDEWEB)
DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.
1996-12-01
Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.
Demchuk, Pavlo
Today a standard procedure to analyze the impact of environmental factors on the productive efficiency of a decision making unit is a two-stage approach: first one estimates efficiency, and then one uses regression techniques to explain the variation of efficiency between different units. It is argued that this method may produce doubtful results which distort the truth the data represent. In order to introduce economic intuition and to mitigate the problem of omitted variables, we introduce a matching procedure to be used before the efficiency analysis. We believe that by having comparable decision making units we implicitly control for the environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative of a conditional efficiency frontier. We conduct our study using a Monte Carlo simulation with different model specifications, and despite the reduced sample, which may create some complications in the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with ones previously produced by Demchuk and Zelenyuk (2009), who compare efficiencies of Ukrainian regions, and find some differences between the two approaches. The second part deals with an empirical study of electricity generating power plants before and after the market reform in Texas. We compare private, public and municipal power generators using the method introduced in part one. We find that municipal power plants operate mostly inefficiently, while private and public ones are very close in their production patterns. The new method allows us to compare decision making units from different groups, which may have different objective schemes and productive incentives. Despite
Directory of Open Access Journals (Sweden)
A. de la Casa
2011-06-01
Full Text Available The radiation use efficiency (RUE) of a crop is the relationship between the dry matter produced and the photosynthetically active radiation intercepted (IPAR) during its cycle. The fraction of intercepted photosynthetically active radiation (fIPAR) can be determined with the traditional Beer's-law method, from the leaf area index (LAI), or using ground cover (f) as a surrogate measure of fIPAR. When potato LAI exceeds 3, the fIPAR value changes very little, making it very difficult to detect differences due to variations in crop conditions. The aim of this study was to determine the RUE of potato (Solanum tuberosum L. cv. Spunta), comparing the use of LAI and f values to obtain fIPAR. The trial was conducted in the green belt of Córdoba, Argentina, on a late-season crop grown between February and May 2008. Using the value of f produced results that overestimate RUE, as a consequence of a systematic underestimation of fIPAR, whereas taking fIPAR values corrected according to the previously established relationship with f, the RUE was similar to that obtained with the reference method, which gave a value of 2.90 g MJ-1 PAR.
Institute of Scientific and Technical Information of China (English)
王昕天
2014-01-01
Objective: To better support the distribution of health resources in China. Methods: Based on data on resident health, health resources and the marketization index in China from 2005 to 2010, conclusions were drawn by using stochastic frontier analysis and fixed-effect panel data methods to establish the trend of technical efficiency in different provinces and cities. Results and Conclusion: (1) The distribution of technical efficiency across provinces and cities is uneven and generally low; (2) there are obvious differences in the trend of technical efficiency among regions; (3) government input in the medical and health field should continue to be tilted toward the central and western areas.
Discharge estimation based on machine learning
Institute of Scientific and Technical Information of China (English)
Zhu JIANG; Hui-yan WANG; Wen-wu SONG
2013-01-01
To overcome the limitations of traditional stage-discharge models in describing the dynamic characteristics of a river, a machine learning method of non-parametric regression, the locally weighted regression method, was used to estimate discharge. With the purpose of improving the precision and efficiency of river discharge estimation, a novel machine learning method is proposed: the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-nearest neighbor method is used to assign new stage samples to the best-fit cluster. Finally, the daily discharge is estimated. In the estimation process, the interference of irrelevant information can be avoided, so that the precision and efficiency of daily discharge estimation are improved. Observed data from the Luding Hydrological Station were used for testing. The simulation results demonstrate that the precision of this method is high. This provides a new effective method for discharge estimation.
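The cluster-then-local-regression idea can be sketched as below: crude 1-D k-means on the training stages, assignment of the new stage to the nearest centroid, then a kernel-weighted average of the k nearest neighbours within that cluster. All function names and parameter values are illustrative; the paper's clustering tree and weighting scheme may differ in detail.

```python
import numpy as np

def cluster_weighted_discharge(stage_train, q_train, stage_new,
                               n_clusters=3, k=5, bandwidth=0.5):
    """Estimate discharge for a new stage reading from (stage, discharge)
    training pairs, restricting the local regression to the best-fit
    stage cluster so unrelated flow regimes do not interfere."""
    stage_train = np.asarray(stage_train, dtype=float)
    q_train = np.asarray(q_train, dtype=float)
    # --- step 1: cluster the training instances (1-D k-means) ---
    centroids = np.quantile(stage_train, np.linspace(0.1, 0.9, n_clusters))
    for _ in range(20):
        labels = np.argmin(np.abs(stage_train[:, None] - centroids[None, :]),
                           axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centroids[c] = stage_train[labels == c].mean()
    # --- step 2: assign the new stage sample to the best-fit cluster ---
    c = np.argmin(np.abs(stage_new - centroids))
    xs, ys = stage_train[labels == c], q_train[labels == c]
    # --- step 3: kernel-weighted estimate from k nearest neighbours ---
    nearest = np.argsort(np.abs(xs - stage_new))[:k]
    w = np.exp(-0.5 * ((xs[nearest] - stage_new) / bandwidth) ** 2)
    return float(np.sum(w * ys[nearest]) / np.sum(w))
```

On a smooth synthetic rating curve this recovers the discharge at an interior stage to within a small local-averaging bias.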
Rapid estimation of nonlinear DSGE models
Hall, Jamie
2012-01-01
This article describes a new approximation method for dynamic stochastic general equilibrium (DSGE) models. The method allows nonlinear models to be estimated efficiently and relatively quickly with the fully-adapted particle filter, without using high-performance parallel computation. The article demonstrates the method by estimating, on US data, a nonlinear New Keynesian model with time-varying volatility.
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve...
Isobars and the efficient market hypothesis
Ivanková, Kristýna
2010-01-01
Isobar surfaces, a method for describing the overall shape of multidimensional data, are estimated by nonparametric regression and used to evaluate the efficiency of selected markets based on returns of their stock market indices.
Surface Volume Estimates for Infiltration Parameter Estimation
Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...
Directory of Open Access Journals (Sweden)
Douglas Sampaio Henrique
2005-06-01
Full Text Available Data on 320 animals were obtained from eight comparative slaughter studies performed under tropical conditions and used to estimate the total efficiency of utilization of the metabolizable energy intake (MEI), which varied from 77 to 419 kcal kg-0.75d-1. The data also contained direct measures of the recovered energy (RE), which allowed calculating the heat production (HE) by difference. RE was regressed on MEI and deviations from linearity were evaluated using the F-test. The estimates of the fasting heat production, the intercept and the slope of the relationship between RE and MEI were 73 kcal kg-0.75d-1, 42 kcal kg-0.75d-1 and 0.37, respectively. Hence, the total efficiency was estimated by dividing the net energy for maintenance and growth by the metabolizable energy intake. The estimated total efficiency of ME utilization and analogous estimates based on the beef cattle NRC model were employed in an additional study to evaluate their predictive powers in terms of mean square deviations under both temperate and tropical conditions. The two approaches presented similar predictive powers, but the proposed one had a 22% lower mean squared deviation despite its more simplified structure.
Liu Estimator Based on An M Estimator
Directory of Open Access Journals (Sweden)
Hatice ŞAMKAR
2010-01-01
Full Text Available Objective: In multiple linear regression analysis, multicollinearity and outliers are two main problems. In the presence of multicollinearity, biased estimation methods like ridge regression, the Stein estimator, principal component regression and the Liu estimator are used. On the other hand, when outliers exist in the data, the use of robust estimators that reduce the effect of outliers is preferred. Material and Methods: In this study, to cope with the combined problem of multicollinearity and outliers, the Liu estimator based on an M estimator (Liu M estimator) is studied. In addition, the mean square error (MSE) criterion is used to compare the Liu M estimator with the Liu estimator based on the ordinary least squares (OLS) estimator. Results: OLS, Huber M, Liu and Liu M estimates, and the MSEs of these estimates, have been calculated for a data set taken from a study of determinants of physical fitness. The Liu M estimator gave the best performance on this data set: MSE(Liu M) = 0.0078 < MSE(M) = 0.0508 and MSE(Liu M) = 0.0078 < MSE(Liu) = 0.0085. Conclusion: When there are both outliers and multicollinearity in a dataset, using robust estimators reduces the effect of outliers but does not solve the multicollinearity problem. On the other hand, using biased methods solves the multicollinearity problem, but the effect of outliers on the estimates remains. When both multicollinearity and outliers occur in a dataset, it has been shown that combining the methods designed to deal with these problems is better than using them individually.
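The two building blocks can be sketched as follows: a Huber M-estimate via iteratively reweighted least squares, and a Liu-type shrinkage transform applied to an inner estimate. With the OLS estimate plugged in, the transform gives the classical Liu estimator; plugging in the M-estimate gives one common form of a Liu M estimator (whether this matches the paper's exact definition is an assumption).

```python
import numpy as np

def huber_m(X, y, c=1.345, n_iter=50):
    """Huber M-estimate of regression coefficients via iteratively
    reweighted least squares (IRLS), with a MAD-based scale."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale
        if s <= 0:
            break
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))      # Huber weights
        XtW = X.T * w                                      # row-wise weighting
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta

def liu(X, beta_inner, d=0.5):
    """Liu-type shrinkage: (X'X + I)^{-1} (X'X + d I) beta_inner,
    with biasing parameter 0 < d < 1 (d = 0.5 is illustrative)."""
    G = X.T @ X
    I = np.eye(G.shape[0])
    return np.linalg.solve(G + I, (G + d * I) @ beta_inner)
```

Note that `liu(X, beta, d=1.0)` returns `beta` unchanged, which is a convenient sanity check on the transform.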
DELANNE, Y; VANDANJON, PO
2006-01-01
On-board tire/road friction estimation is of current interest in two different frameworks: (1) optimization of the efficiency of driver assistance systems (antilock braking system, electronic stability program, adaptive cruise control, lane departure control, advanced automatic driving, etc.); (2) instantaneous warning of the driver about the available friction and the limits on his possible driving actions. This subject has been the objective of many research programs throughout the world. Four main methods have...
Institute of Scientific and Technical Information of China (English)
田刚; 李南
2011-01-01
Based on panel data for 29 provincial-level regions of China during the period 1991-2007, an empirical analysis of the disparity in, and the exogenous factors affecting, the technical efficiency of the logistics industry is conducted using a single-stage estimation of the stochastic frontier production function. The results show that the overall technical efficiency is low, and that disparities between regions are expanding; the proportion of the state-owned economy in fixed assets and government intervention impede the improvement of efficiency, but these negative impacts are gradually decreasing; human capital and the degree of openness have positive effects on efficiency, yet in the central and western regions an interaction between lower human capital and lower openness widens the efficiency gap between these two regions and the east region; with the implementation of the western development strategy, industrial structure has become a significantly positive factor in the west region; as far as the logistics development environment is concerned, a sunken pattern can be found in the central region; improving logistics is important for promoting coordinated regional development.
Efficiency estimation for permanent magnets of synchronous wind generators
Directory of Open Access Journals (Sweden)
Serebryakov A.
2014-02-01
Full Text Available The use of permanent magnets in wind generators opens wide possibilities for raising the efficiency of low- and medium-power wind energy installations (WEI). In addition, the mass of the generators decreases, reliability increases, and operating costs fall. However, the use of high-energy permanent magnets in generators of higher power gives rise to a number of problems, which can be successfully overcome if the magnets are correctly arranged with respect to their orientation, creating the magnetic field in the air gap of the electrical machine. The paper attempts to show that substantial advantages exist for low- and medium-power wind generators if the permanent magnets are magnetized tangentially with respect to the air gap.
Stochastic Frontier Estimation of Efficient Learning in Video Games
Hamlen, Karla R.
2012-01-01
Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…
Fast Katz and commuters: efficient estimation of social relatedness.
Energy Technology Data Exchange (ETDEWEB)
On, Byung-Won; Lakshmanan, Laks V. S.; Esfandiar, Pooya; Bonchi, Francesco; Grief, Chen; Gleich, David F.
2010-12-01
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
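To make the pairwise quantity concrete, the sketch below evaluates the Katz score between two nodes by its truncated power series, using only matrix-vector products. This conveys what is being estimated, not the paper's algorithm, which bounds the same quantity far more efficiently via the Lanczos process and Gauss quadrature.

```python
import numpy as np

def katz_pair(A, i, j, alpha=0.1, n_terms=100):
    """Katz score between nodes i and j: the (i, j) entry of
    sum_{k>=1} (alpha * A)^k, by truncated series. alpha must be
    smaller than 1 / spectral_radius(A) for the series to converge."""
    A = np.asarray(A, dtype=float)
    v = np.zeros(A.shape[0])
    v[j] = 1.0
    score = 0.0
    for _ in range(n_terms):
        v = alpha * (A @ v)        # v holds (alpha * A)^k e_j
        score += v[i]
    return score
```

On a two-node path graph with alpha = 0.5, only odd-length walks connect the nodes, so the score is 0.5 + 0.5**3 + 0.5**5 + ... = 2/3.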
Estimation of efficiency of damping parameters in seismic insulation systems
Yu.L. Rutman; N.V. Kovaleva
2012-01-01
In the design of seismic isolation systems, one of the key and most difficult issues is the choice of optimal damping parameters. If the damping is negligible, quasi-resonant processes may emerge at certain frequencies of external excitation, which eliminate the seismic insulation effect. If the damping forces are large, they entail a significant load increase on the protected object, which also reduces the effect of seismic insulation. Development technique o...
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...
Comparison of Vehicle Efficiency Technology Attributes and Synergy Estimates
Energy Technology Data Exchange (ETDEWEB)
Duleep, G. [ICF Incorporated, LLC., Fairfax, VA (United States)
2011-02-01
Analyzing the future fuel economy of light-duty vehicles (LDVs) requires detailed knowledge of the vehicle technologies available to improve LDV fuel economy. The National Highway Transportation Safety Administration (NHTSA) has been relying on technology data from a 2001 National Academy of Sciences (NAS) study (NAS 2001) on corporate average fuel economy (CAFE) standards, but the technology parameters were updated in the new proposed rulemaking (EPA and NHTSA 2009) to set CAFE and greenhouse gas standards for the 2011 to 2016 period. The update is based largely on an Environmental Protection Agency (EPA) analysis of technology attributes augmented by NHTSA data and contractor staff assessments. These technology cost and performance data were documented in the Draft Joint Technical Support Document (TSD) issued by EPA and NHTSA in September 2009 (EPA/NHTSA 2009). For these tasks, the Energy and Environmental Analysis (EEA) division of ICF International (ICF) examined each technology and technology package in the Draft TSD and assessed their costs and performance potential based on U.S. Department of Energy (DOE) program assessments. ICF also assessed the technologies' other relevant attributes based on data from actual production vehicles and from recently published technical articles in engineering journals. ICF examined technology synergy issues through an ICF in-house model that uses a discrete parameter approach.
Estimating the Efficiency of Sequels in the Film Industry
Denis Y. Orlov; Evgeniy M. Ozhegov
2015-01-01
The film industry has been under investigation by social scientists for the last 30 years. Much of this work has been dedicated to analyzing the effect of sequels on film revenue. The current paper employs data on wide releases in the US from 2010 to 2014 and provides a new look at sequel returns at the domestic box office. We apply the Heckman and nonparametric sample selection approaches in order to control for the non-random nature of the sequels' sample. It was found that sequels are success...
Using MCMC chain outputs to efficiently estimate Bayes factors
Morey, Richard D.; Rouder, Jeffrey N.; Pratte, Michael S.; Speckman, Paul L.
2011-01-01
One of the most important methodological problems in psychological research is assessing the reasonableness of null models, which typically constrain a parameter to a specific value, such as zero. The Bayes factor has recently been advocated in the statistical and psychological literature as a principled
Nonparametric Efficiency Analysis for Coffee Farms in Puerto Rico
Gregory, Alexandra; Featherstone, Allen M
2008-01-01
Coffee production in Puerto Rico is labor intensive, since harvesting is done by hand owing to quality and topography conditions. Färe's nonparametric approach was used to estimate technical, allocative, scale, and overall efficiency measures for coffee farms in Puerto Rico during the 2000 to 2004 period. On average, Puerto Rico coffee farms were 46% technically efficient, 79% scale efficient, and 74% allocatively efficient.
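Nonparametric efficiency measures of this kind are computed by solving one linear program per farm. The sketch below shows a generic input-oriented, constant-returns-to-scale DEA efficiency score; it is a standard formulation rather than the exact Färe specification used in the paper, and the three-unit data are made up:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(k, X, Y):
    """Input-oriented, constant-returns-to-scale DEA efficiency of unit k.

    X is (n_units, n_inputs), Y is (n_units, n_outputs).  Solves
    min theta s.t. sum_j lam_j*x_j <= theta*x_k and sum_j lam_j*y_j >= y_k.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    A, b = [], []
    for i in range(m):                            # input constraints
        A.append(np.concatenate(([-X[k, i]], X[:, i])))
        b.append(0.0)
    for r in range(s):                            # output constraints
        A.append(np.concatenate(([0.0], -Y[:, r])))
        b.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical one-input/one-output data for three farms.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [2.0], [3.0]])
print(round(dea_efficiency(1, X, Y), 3))  # → 0.5: farm 1 could halve its input
```

A score of 1.0 marks a farm on the efficient frontier; lower scores give the proportion to which inputs could be shrunk while keeping outputs.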
Managerial Efficiency and Hospitality Industry: the Portuguese Case
Barros, Carlos Pestana; Botti, Laurent; Peypoch, Nicolas; Solonandrasana, Bernardin
2009-01-01
Abstract In this paper, the innovative two-stage procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Portuguese hotel groups from 1998 to 2005. In the first stage, the hotels' technical efficiency is estimated with DEA in order to establish which hotels have the most efficient performance. These could serve as peers to help improve performance of the least efficient hotels. In the second stage, the Simar and Wilson model is used to bootstrap the ...
THE ECONOMIC EFFICIENCY OF INVESTMENT
Directory of Open Access Journals (Sweden)
SIMONA CRISTINA COSTEA
2012-05-01
Full Text Available Economic efficiency is the main quality factor of economic growth because through it one can achieve an absolute performance enhancement involving the same amount of effort. In a market-driven economy, efficiency has to be estimated at the microeconomic level as well as with respect to the national economy. Investments play a key role in the economy as an intermediary between the production of goods and services and the consumer, being a factor of influence on demand as well as supply. Investment provides the financial support for promoting technical-scientific progress in various fields of activity.
OFDM System Channel Estimation with Hidden Pilot
Institute of Scientific and Technical Information of China (English)
YANG Feng; LIN Cheng-yu; ZHANG Wen-jun
2007-01-01
Channel estimation using pilots is commonly used in OFDM systems. The pilot is usually time-division multiplexed with the informative sequence. One of the main drawbacks is the loss of bandwidth. In this paper, a new method is proposed to perform channel estimation in an OFDM system. The pilot is arithmetically added to the output of the OFDM modulator. The receiver uses this hidden pilot to obtain an accurate estimate of the channel; the pilot is then removed after channel estimation. The Cramer-Rao lower bound for this method is derived, and the performance of the algorithm is shown. Compared with traditional methods, the proposed algorithm increases bandwidth efficiency dramatically.
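The idea of a superimposed (hidden) pilot can be illustrated on a single subcarrier: because zero-mean data averages out across symbols, averaging the received symbols leaves the channel gain times the pilot. The following is a noise-free toy sketch, not the paper's full algorithm; the channel value, pilot, and data sequence are hypothetical:

```python
def estimate_channel_hidden_pilot(rx, pilot):
    """LS channel estimate from a superimposed pilot on one subcarrier:
    averaging the received symbols cancels zero-mean data, leaving H * pilot."""
    avg = sum(rx) / len(rx)
    return avg / pilot

# Hypothetical flat channel, unit pilot, and an exactly zero-mean data burst.
H = 0.8 + 0.3j
pilot = 1.0 + 0j
data = [1, -1] * 50                      # BPSK symbols summing to zero
rx = [H * (d + pilot) for d in data]     # noise-free received symbols
est = estimate_channel_hidden_pilot(rx, pilot)
print(round(est.real, 6), round(est.imag, 6))  # → 0.8 0.3
```

With noise, the averaging additionally suppresses it by the number of symbols, at the cost of the pilot power being subtracted from the data power budget.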
U.S. CHAIN RESTAURANT EFFICIENCY
Barber, David L.; Byrne, Patrick J.
1997-01-01
The growth of corporate food service firms and the resulting competition place increasing pressure on available resources and their efficient usage. This analysis measures efficiencies for U.S. chain restaurants and determines associations between managerial and operational characteristics. Using a ray-homothetic production function, frontiers were estimated for large and small restaurant chains. Technical and scale efficiencies were then derived for the firms. Finally, a Tobit analysis me...
Ensemble estimators for multivariate entropy estimation
Sricharan, Kumar
2012-01-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the exponent in the MSE rate of convergence decays increasingly slowly as the dimension $d$ of the samples increases. In particular, the rate is often glacially slow of order $O(T^{-{\gamma}/{d}})$, where $T$ is the number of samples, and $\gamma>0$ is a rate parameter. Examples of such estimators include kernel density estimators, $k$-NN density estimators, $k$-NN entropy estimators, and intrinsic dimension estimators, among others. In this paper, we propose a weighted convex combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at a much faster dimension invariant rate of $O(T^{-1})$. Furthermore, we show that these optimal weights can be determined by so...
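The key idea, choosing convex weights so that leading-order bias terms cancel, can be illustrated with a two-estimator toy version; the bias coefficients below are hypothetical, and the paper's construction optimizes over a larger ensemble:

```python
def ensemble_weights(b1, b2):
    """Convex weights w1 + w2 = 1 chosen so the leading-order bias terms
    b1, b2 of two base estimators cancel: w1*b1 + w2*b2 = 0 (needs b1 != b2)."""
    w1 = b2 / (b2 - b1)
    return w1, 1.0 - w1

def ensemble_estimate(estimates, weights):
    """Weighted combination of the individual estimates."""
    return sum(w * e for w, e in zip(weights, estimates))

# Hypothetical leading-bias coefficients for two base estimators.
w1, w2 = ensemble_weights(0.2, -0.1)
print(round(abs(w1 * 0.2 + w2 * (-0.1)), 12))  # combined leading bias → 0.0
```

With more ensemble members and bias terms of several orders, the same principle becomes a small linear (or convex) program over the weight vector.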
Estimating Coke and Pepsi's Price and Advertising Strategies
Golan, Amos; Karp, Larry S.; Perloff, Jeffrey M.
1998-01-01
A semi-parametric, information-based estimator is used to estimate strategies in prices and advertising for Coca-Cola and Pepsi-Cola. Separate strategies for each firm are estimated with and without restrictions from game theory. These information/entropy estimators are consistent, are efficient, and do not require distributional assumptions. These estimates are used to test theories about the strategies of firms and to see how changes in incomes or factor prices affect these strategies.
ON INTERVAL ESTIMATING REGRESSION
Directory of Open Access Journals (Sweden)
Marcin Michalak
2014-06-01
Full Text Available This paper presents a new look at the well-known nonparametric regression estimator, the Nadaraya-Watson kernel estimator. Though it was invented 50 years ago, it is still being applied in many fields. After these years, the foundations of uncertainty theory, interval analysis, are joined with this estimator. The paper presents the background of the Nadaraya-Watson kernel estimator together with the basics of interval analysis, and shows the interval Nadaraya-Watson kernel estimator.
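The classical (non-interval) Nadaraya-Watson estimator underlying the paper fits in a few lines; this is a minimal sketch with a Gaussian kernel, with hypothetical data and bandwidth, and without the interval extension the paper develops:

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0: a kernel-weighted average of the
    responses ys observed at design points xs (Gaussian kernel, bandwidth h)."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Hypothetical noiseless data on y = 2x; the estimate at 0.5 recovers 1.0.
xs = [i / 10 for i in range(11)]
ys = [2 * x for x in xs]
print(round(nadaraya_watson(0.5, xs, ys, 0.05), 3))  # → 1.0
```

The interval version replaces the point inputs and bandwidth with intervals and propagates them through the same weighted-average formula.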
Energy-efficient cooking methods
Energy Technology Data Exchange (ETDEWEB)
De, Dilip K. [Department of Physics, University of Jos, P.M.B. 2084, Jos, Plateau State (Nigeria); Muwa Shawhatsu, N. [Department of Physics, Federal University of Technology, Yola, P.M.B. 2076, Yola, Adamawa State (Nigeria); De, N.N. [Department of Mechanical and Aerospace Engineering, The University of Texas at Arlington, Arlington, TX 76019 (United States); Ikechukwu Ajaeroh, M. [Department of Physics, University of Abuja, Abuja (Nigeria)
2013-02-15
Energy-efficient new cooking techniques have been developed in this research. Using a stove with 649±20 W of power, the minimum heat, specific heat of transformation, and on-stove time required to completely cook 1 kg of dry beans (with water and other ingredients) and 1 kg of raw potato are found to be: 710± kJ, 613± kJ, and 1,144±10 s, respectively, for beans, and 287±12 kJ, 200±9 kJ, and 466±10 s for Irish potato. Extensive research shows that these figures are, to date, the lowest amounts of heat ever used to cook beans and potatoes, and less than half the energy used in conventional cooking with a pressure cooker. The efficiency of the stove was estimated to be 52.5±2%. Discussion is made on further improving cooking efficiency with a normal stove and a solar cooker, and on further preserving food nutrients. Our method of cooking, when applied globally, is expected to contribute to clean development mechanism (CDM) potential. The approximate values of the minimum and maximum CDM potentials are estimated to be 7.5 x 10^11 and 2.2 x 10^13 kg of carbon credit annually. A precise estimation of the CDM potential of our cooking method will be reported later.
Directory of Open Access Journals (Sweden)
Aleksandra Y. Grigorevskaya
2012-05-01
Full Text Available The article deals with methods of comprehensive restaurant business performance assessment based on the estimation of both subtotal and total rates of efficiency, and demonstrates the calculation of the above rates.
Energy Technology Data Exchange (ETDEWEB)
Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman
2004-03-30
Data Center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy-intensive data centers are crucial to California's industries and many other institutions (such as universities) in the state, and they play an important role in the constantly evolving communications industry. To better understand the impact of the energy requirements and the energy efficiency improvement potential in these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: first, to characterize current data center electricity use; and second, to develop a research "roadmap" defining and prioritizing possible future public-interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts. In total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted, which included internet hosting, corporate, institutional, governmental, educational, and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high-quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed
Efficiency of municipal legislative chambers
Directory of Open Access Journals (Sweden)
Alexandre Manoel Angelo da Silva
2015-01-01
Full Text Available A novel study of Brazilian city council efficiency using the non-parametric estimator FDH (free disposal hull) with bias correction is presented. In regional terms, study results show a concentration of efficient councils in the southern region. In turn, those in the northeastern and southeastern regions are among the most inefficient councils. In these latter two regions, most councils could at least double their outputs while maintaining the same volume of inputs. Regarding population size, for cities with up to 500,000 inhabitants, more than 60% of city councils could at least quadruple their output. Regarding inefficiencies revealed through non-discretionary variables (environmental variables), the study results show a correlation between councilor education levels and city council efficiency.
Empirical likelihood estimation of discretely sampled processes of OU type
Institute of Scientific and Technical Information of China (English)
SUN ShuGuang; ZHANG XinSheng
2009-01-01
This paper presents an empirical likelihood estimation procedure for parameters of a discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under certain conditions. The intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.
Distributed fusion estimation for sensor networks with communication constraints
Zhang, Wen-An; Song, Haiyu; Yu, Li
2016-01-01
This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties, such as delays, packet losses, and asynchronous information, are presented... All the results are presented as algorithms, which are convenient for practical applications.
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...
Estimating a Mixed Strategy: United and American Airlines
Golan, Amos; Karp , Larry S.; Perloff, Jeffrey M.
1998-01-01
We develop a generalized maximum entropy estimator that can estimate pure and mixed strategies subject to restrictions from game theory. This method avoids distributional assumptions and is consistent and efficient. We demonstrate this method by estimating the mixed strategies of duopolistic airlines.
More evidence of rational market values for home energy efficiency
Nevin, Rick; Bender, Christopher; Gazan, Heather
1999-01-01
The “cost versus value” survey by Remodeling indicates that realtor value estimates for window replacement can be substantially explained by the market value of energy efficiency, as estimated in “Evidence of Rational Market Values for Home Energy Efficiency,” which appeared in the October 1998 issue of The Appraisal Journal.
Indicators of technological processes environmental estimation
R. Nowosielski; A. Kania; M. Spilka
2007-01-01
Purpose: The paper presents the possibility of using indicators for the estimation of technological processes that make it possible to decrease the negative environmental influence of these processes. Design/methodology/approach: The article shows the direction of estimating enterprise efficiency in favour of the environment. It also presents the necessity of environmentally responsible production. This requires formulating definite aims and motivating workers toward integration of the environm...
System on Programable Chip for Performance Estimation of Loom Machine
Directory of Open Access Journals (Sweden)
Gurpreet Singh
2012-03-01
Full Text Available This article presents a system on programmable chip for the performance estimation of loom machines, which automatically calculates the efficiency and meter count of woven cloth. Previously, the same was done using a manual process, which was not efficient. This article is intended for loom machines which are not modern.
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Information Geometric Density Estimation
Sun, Ke; Marchand-Maillet, Stéphane
2014-01-01
We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space means to find a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.
Software Cost Estimation Review
Ongere, Alphonce
2013-01-01
Software cost estimation is the process of predicting the effort, the time, and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of the software organization in general. If cost and effort estimat...
Adaptive kernel density estimation
Philippe Van Kerm
2003-01-01
This insert describes the module akdensity. akdensity extends the official kdensity that estimates density functions by the kernel method. The extensions are of two types: akdensity allows the use of an "adaptive kernel" approach with varying, rather than fixed, bandwidths; and akdensity estimates pointwise variability bands around the estimated density functions. Copyright 2003 by Stata Corporation.
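The adaptive-kernel approach that akdensity implements can be sketched as a two-stage (Abramson-style) estimator; the following is an illustrative Python version, not the Stata module itself, and the sample data and bandwidth are hypothetical:

```python
import math

def gauss(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def adaptive_kde(x, data, h):
    """Two-stage adaptive kernel density estimate at x.

    Stage 1: fixed-bandwidth pilot estimate at each data point.
    Stage 2: local factors lambda_i widen the kernel in sparse regions
    (pilot density below the geometric mean) and narrow it in dense ones."""
    n = len(data)
    pilot = [sum(gauss((xi - xj) / h) for xj in data) / (n * h) for xi in data]
    g = math.exp(sum(math.log(p) for p in pilot) / n)   # geometric mean
    lam = [math.sqrt(g / p) for p in pilot]
    return sum(gauss((x - xi) / (h * li)) / (h * li)
               for xi, li in zip(data, lam)) / n

# Hypothetical sample and bandwidth.
data = [-1.2, -0.5, 0.0, 0.3, 1.1, 2.0]
h = 0.5
density_at_zero = adaptive_kde(0.0, data, h)
```

Because each rescaled kernel still integrates to one, the adaptive estimate remains a proper density while tracking sharp and flat regions with different effective bandwidths.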
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The genome length is a fundamental feature of a species. This note outlined the general concept and estimation method of the physical and genetic length. Some formulae for estimating the genetic length were derived in detail. As examples, the genome genetic length of Pinus pinaster Ait. and the genetic length of chromosome VI of Oryza sativa L. were estimated from partial linkage data.
Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...
International Nuclear Information System (INIS)
Highlights: ► We employ a slacks-based DEA model to estimate the energy efficiency and shadow prices of CO2 emissions in China. ► The empirical study shows that China was not performing CO2-efficiently. ► The average of estimated shadow prices of CO2 emissions is about $7.2. -- Abstract: This paper uses nonparametric efficiency analysis technique to estimate the energy efficiency, potential emission reductions and marginal abatement costs of energy-related CO2 emissions in China. We employ a non-radial slacks-based data envelopment analysis (DEA) model for estimating the potential reductions and efficiency of CO2 emissions for China. The dual model of the slacks-based DEA model is then used to estimate the marginal abatement costs of CO2 emissions. An empirical study based on China’s panel data (2001–2010) is carried out and some policy implications are also discussed.
Ion-by-ion Cooling efficiencies
Gnat, Orly
2011-01-01
We present ion-by-ion cooling efficiencies for low-density gas. We use Cloudy (ver. 08.00) to estimate the cooling efficiencies for each ion of the first 30 elements (H-Zn) individually. We present results for gas temperatures between 10^4 and 10^8 K, assuming low densities and optically thin conditions. When nonequilibrium ionization plays a significant role, the ionization states deviate from those that obtain in collisional ionization equilibrium (CIE), and the local cooling efficiency at any given temperature depends on the specific non-equilibrium ion fractions. The results presented here allow for an efficient estimate of the total cooling efficiency for any ionic composition. We also list the elemental cooling efficiencies assuming CIE conditions. These can be used to construct CIE cooling efficiencies for non-solar abundance ratios, or to estimate the cooling due to elements not explicitly included in any nonequilibrium computation. All the computational results are listed in convenient online tables.
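As the abstract notes, tabulated per-ion efficiencies let the total cooling efficiency be assembled for any ionic composition as a fraction-weighted sum. A trivial sketch, where the fractions and efficiencies are hypothetical placeholders rather than Cloudy outputs:

```python
def total_cooling(ion_fractions, ion_efficiencies):
    """Total cooling efficiency as a fraction-weighted sum of per-ion
    cooling efficiencies (values here are hypothetical placeholders)."""
    return sum(f * lam for f, lam in zip(ion_fractions, ion_efficiencies))

# Two hypothetical ions in equal abundance.
print(total_cooling([0.5, 0.5], [2.0, 4.0]))  # → 3.0
```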
Efficiency of broadband internet adoption in European Union member states
Pavlyuk, Dmitry
2011-01-01
This paper is devoted to an econometric analysis of broadband adoption efficiency in EU member states. Stochastic frontier models are widely used for efficiency estimation. We enhance the stochastic frontier model by adding a spatial component to the model specification to reflect possible dependencies between neighbouring countries. A maximum likelihood estimator for the model is developed. The proposed spatial autoregressive stochastic frontier model is used for estimation of broadband ad...
Robust Spectral Estimation of Track Irregularity
Institute of Scientific and Technical Information of China (English)
Fu Wenjuan; Chen Chunjun
2005-01-01
Because the existing spectral estimation methods for railway track irregularity analysis are very sensitive to outliers, a robust spectral estimation method is presented to process track irregularity signals. The proposed robust method is verified using 100 groups of clean/contaminated data reflecting the vertical profile irregularity, taken from the Beijing-Guangzhou railway with a sampling frequency of 33 data points every 10 m, and compared with the Auto Regressive (AR) model. The experimental results show that the proposed robust estimation is resistant to noise and insensitive to outliers, and is superior to the AR model in terms of efficiency, stability, and reliability.
Multiregional estimation of gross internal migration flows.
Foot, D K; Milne, W J
1989-01-01
"A multiregional model of gross internal migration flows is presented in this article. The interdependence of economic factors across all regions is recognized by imposing a non-stochastic adding-up constraint that requires total inmigration to equal total outmigration in each time period. An iterated system estimation technique is used to obtain asymptotically consistent and efficient parameter estimates. The model is estimated for gross migration flows among the Canadian provinces over the period 1962-86 and then is used to examine the likelihood of a wash-out effect in net migration models. The results indicate that previous approaches that use net migration equations may not always be empirically justified."
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have... attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR...
Hardware Accelerated Power Estimation
Coburn, Joel; Raghunathan, Anand
2011-01-01
In this paper, we present power emulation, a novel design paradigm that utilizes hardware acceleration for the purpose of fast power estimation. Power emulation is based on the observation that the functions necessary for power estimation (power model evaluation, aggregation, etc.) can be implemented as hardware circuits. Therefore, we can enhance any given design with "power estimation hardware", map it to a prototyping platform, and exercise it with any given test stimuli to obtain power consumption estimates. Our empirical studies with industrial designs reveal that power emulation can achieve significant speedups (10X to 500X) over state-of-the-art commercial register-transfer level (RTL) power estimation tools.
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances, and covariances, the joint effect of estimation errors in means, variances, and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz-continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
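The sensitivity being quantified above can be observed directly by perturbing the estimated means and recomputing the efficient weights. A minimal sketch with unconstrained mean-variance weights and hypothetical two-asset inputs (not the paper's Lipschitz-constant derivation):

```python
import numpy as np

def mv_weights(mu, cov):
    """Unconstrained mean-variance weights, normalised to sum to one:
    w proportional to inv(cov) @ mu."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

# Hypothetical two-asset estimates.
mu = np.array([0.08, 0.10])
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
w = mv_weights(mu, cov)
w_pert = mv_weights(mu + np.array([0.01, 0.0]), cov)  # +1% error in one mean
shift = np.abs(w_pert - w).max()                      # largest weight change
```

Here the weight shift per unit of mean error plays the role of an empirical Lipschitz bound; ill-conditioned covariance matrices are the extreme cases the paper warns about.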
Efficiency wages and bargaining
Walsh, Frank
2005-01-01
I argue that, in contrast to the literature to date, efficiency wage and bargaining solutions will typically be independent. If the bargained wage satisfies the efficiency wage constraint, efficiency wages are irrelevant. If it does not, we typically have the efficiency wage solution, and bargaining is irrelevant.
Institute of Scientific and Technical Information of China (English)
苏涛; 冯绍元; 徐英
2013-01-01
With the Jiefang Gate irrigation area of the Hetao region in Inner Mongolia as the research district, and with measured values of biomass, soil water content, and the relation equation between them as the research foundation, the authors set up a regional evapotranspiration retrieval model based on Radiation Use Efficiency (RUE). The SEBAL (surface energy balance algorithm for land) model was taken as the reference model for a comparative analysis of regional evapotranspiration over the same period. The results showed that the evapotranspiration estimated using the RUE method and the SEBAL model were similar in spatial distribution and texture features, with only insignificant differences between the calculated results of the two models. The correlation coefficient between the RUE method and the SEBAL model was remarkably improved in comparison with that between the DSSAT (decision support system for agrotechnology transfer) method and the SEBAL model. It is also shown that regional evapotranspiration can be better retrieved by the RUE method, with retrieval accuracy obviously higher than that of the DSSAT method; hence this method is a new and effective method for monitoring regional evapotranspiration.
Modified estimators for the change point in hazard function
Karasoy, Durdu; Kadilar, Cem
2009-07-01
We propose the consistent estimators for the change point in hazard function by improving the estimators in [A.P. Basu, J.K. Ghosh, S.N. Joshi, On estimating change point in a failure rate, in: S.S. Gupta, J.O. Berger (Eds.), Statistical Decision Theory and Related Topics IV, vol. 2, Springer-Verlag, 1988, pp. 239-252] and [H.T. Nguyen, G.S. Rogers, E.A. Walker, Estimation in change point hazard rate model, Biometrika 71 (1984) 299-304]. By a simulation study, we show that the proposed estimators are more efficient than the original estimators in many cases.
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exponents in hidden Markov model order estimation, 2003). We propose an algorithm to efficiently compute the estimator and provide simulation studies to support our result.
Efficiency in higher education
Directory of Open Access Journals (Sweden)
Duguleană, C.
2011-01-01
Full Text Available The National Education Law establishes the principles of equity and efficiency in higher education. The concept of efficiency has different meanings according to the types of funding and the time horizons: short- or long-term management approaches. Understanding the black box of efficiency may offer solutions for effective activity. The paper presents a parallel analysis of efficiency in a production firm and in a university, to better convey the specificities of efficiency in higher education.
Efficiency of hospitals in the Czech Republic
Procházková, Jana; Šťastná, Lenka
2011-01-01
The paper estimates the cost efficiency of 99 general hospitals in the Czech Republic during 2001-2008 using Stochastic Frontier Analysis. We estimate a baseline model and also a model accounting for various inefficiency determinants. Group-specific inefficiency remains present even after controlling for a number of characteristics. We found that inefficiency increases with teaching status, more than 20,000 treated patients a year, not-for-profit status and a larger share of the elderly in the municipa...
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the...
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
In this paper, we propose using realized range-based estimation to draw inference about the quadratic variation of jump-diffusion processes. We also construct a new test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the test...
Interactive inverse kinematics for human motion estimation
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome;
2009-01-01
We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion estimat...
Hopf limit cycles estimation in power systems
Energy Technology Data Exchange (ETDEWEB)
Barquin, J.; Gomez, T.; Pagola, L.F. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica
1995-11-01
This paper addresses the computation of the Hopf limit cycle. This limit cycle is associated to the appearance of an oscillatory instability in dynamical power systems. An algorithm is proposed to estimate the dimensions and shape of this limit cycle. The algorithm is computationally efficient and is able to deal with large power systems. 7 refs, 4 figs, 1 tab
Microfinance, Efficiency and Agricultural Production in Bangladesh
Islam, K. M. Zahidul
2011-01-01
The objectives of this study were to make a detailed and systematic empirical analysis of microfinance borrowers and non-borrowers in Bangladesh and also examine how efficiency measures are influenced by the access to agricultural microfinance. In the empirical analysis, this study used both parametric and non-parametric frontier approaches to investigate differences in efficiency estimates between microfinance borrowers and non-borrowers. This thesis, based on five articles, applied data obt...
Technical Efficiency in Louisiana Sugar Cane Processing
Johnson, Jason L.; Zapata, Hector O.; Heagler, Arthur M.
1995-01-01
Participants in the Louisiana sugar cane industry have provided little information related to the efficiency of sugar processing operations. Using panel data from the population of Louisiana sugar processors, alternative model specifications are estimated using stochastic frontier methods to measure the technical efficiency of individual sugar factories. Results suggest the Louisiana sugar processing industry is characterized by a constant returns to scale Cobb-Douglas processing function wit...
Waldo, Staffan
2007-01-01
While individual data form the base for much empirical analysis in education, this is not the case for analysis of technical efficiency. In this paper, efficiency is estimated using individual data which is then aggregated to larger groups of students. Using an individual approach to technical efficiency makes it possible to carry out studies on a…
Embedding capacity estimation of reversible watermarking schemes
Indian Academy of Sciences (India)
Rishabh Iyer; Rushikesh Borse; Subhasis Chaudhuri
2014-12-01
Estimation of the embedding capacity is an important problem, specifically in reversible multi-pass watermarking, and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on disjoint groups of pixels, specifically on pixel pairs. The proposed algorithm iteratively updates the co-occurrence matrix at every stage to estimate the multi-pass embedding capacity, and is much more efficient than actual watermarking. We also suggest an extremely efficient, pre-computable tree-based implementation which is conceptually similar to the co-occurrence based method, but provides the estimates in a single iteration, requiring a complexity akin to that of single-pass capacity estimation. We also provide upper bounds on the embedding capacity. Finally, we evaluate the performance of our algorithms on recent watermarking algorithms.
Golbabaei-Asl, M.; Knight, D.; Wilkinson, S.
2013-01-01
The thermal efficiency of a SparkJet is evaluated by measuring the impulse response of a pendulum subject to a single spark discharge. The SparkJet is attached to the end of a pendulum. A laser displacement sensor is used to measure the displacement of the pendulum upon discharge. The pendulum motion is a function of the fraction of the discharge energy that is channeled into the heating of the gas (i.e., increasing the translational-rotational temperature). A theoretical perfect gas model is used to estimate the portion of the energy from the heated gas that results in equivalent pendulum displacement as in the experiment. The earlier results from multiple runs for different capacitances of C = 3, 5, 10, 20, and 40 µF demonstrate that the thermal efficiency decreases with higher capacitive discharges. In the current paper, results from additional run cases have been included and confirm the previous results.
Generalized Agile Estimation Method
Directory of Open Access Journals (Sweden)
Shilpa Bahlerao
2011-01-01
Full Text Available The agile cost estimation process always offers research prospects due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. This method has many advantages but also some limitations, which may be due to factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose the iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm covers various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also provides flexibility to project managers in deciding on the number of vital factors, the uncertainty level and the project domains, thereby maintaining agility.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Corporate Accounting Policy Efficiency Improvement
Directory of Open Access Journals (Sweden)
Elena K. Vorobei
2013-01-01
Full Text Available The article is focused on the issues of efficient use of different methods of tax accounting for the optimization of income tax expenses and their consolidation in corporate accounting policy. The article makes reasoned conclusions, concerning optimal selection of depreciation methods for tax and bookkeeping accounting and their consolidation in corporate accounting policy and consolidation of optimal methods of cost recovery in production, considering business environment. The impact of the selected methods on corporate income tax rates and corporate property tax rates was traced and tax recovery was estimated.
Fast adaptive estimation of multidimensional psychometric functions.
DiMattina, Christopher
2015-01-01
Recently in vision science there has been great interest in understanding the perceptual representations of complex multidimensional stimuli. Therefore, it is becoming very important to develop methods for performing psychophysical experiments with multidimensional stimuli and efficiently estimating psychometric models that have multiple free parameters. In this methodological study, I analyze three efficient implementations of the popular Ψ method for adaptive data collection, two of which are novel approaches to psychophysical experiments. Although the standard implementation of the Ψ procedure is intractable in higher dimensions, I demonstrate that my implementations generalize well to complex psychometric models defined in multidimensional stimulus spaces and can be implemented very efficiently on standard laboratory computers. I show that my implementations may be of particular use for experiments studying how subjects combine multiple cues to estimate sensory quantities. I discuss strategies for speeding up experiments and suggest directions for future research in this rapidly growing area at the intersection of cognitive science, neuroscience, and machine learning. PMID:26200886
Pose estimation and frontal face detection for face recognition
Lim, Eng Thiam; Wang, Jiangang; Xie, Wei; Ronda, Venkarteswarlu
2005-05-01
This paper proposes a pose estimation and frontal face detection algorithm for face recognition. Considering its application in a real-world environment, the algorithm has to be robust yet computationally efficient. The main contribution of this paper is the efficient face localization and scale and pose estimation using color models. Simulation results showed a very low computational load compared to other face detection algorithms. The second contribution is the introduction of a low-dimensional statistical face geometry model. Compared to other statistical face models, the proposed method models the face geometry efficiently. The algorithm is demonstrated on a real-time system. The simulation results indicate that the proposed algorithm is computationally efficient.
Modified Maximum Likelihood Estimation from Censored Samples in Burr Type X Distribution
Directory of Open Access Journals (Sweden)
R.R.L. Kantam
2015-12-01
Full Text Available The two-parameter Burr type X distribution is considered and its scale parameter is estimated from a censored sample using the classical maximum likelihood method. The estimating equations are modified to obtain simpler and more efficient estimators. Two methods of modification are suggested. The small-sample efficiencies are presented.
Coordination of Energy Efficiency and Demand Response
Energy Technology Data Exchange (ETDEWEB)
Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison
2010-01-29
This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries - which consume more than 70 percent of the nation's natural gas and electricity - is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that 'the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW' by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.
Directory of Open Access Journals (Sweden)
Sidi Ali Ould Abdi
2011-01-01
Full Text Available Given a stationary multidimensional spatial process (Zᵢ = (Xᵢ, Yᵢ) ∈ ℝᵈ × ℝ, i ∈ ℤᴺ), we investigate a kernel estimate of the spatial conditional quantile function of the response variable Yᵢ given the explicative variable Xᵢ. Asymptotic normality of the kernel estimate is obtained when the sample considered is an α-mixing sequence.
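A kernel conditional quantile estimate of this kind can be sketched as follows. This is an illustrative one-dimensional sketch, not the authors' spatial estimator: it uses a Gaussian kernel, and the function name and data are hypothetical.

```python
import numpy as np

def kernel_conditional_quantile(x0, X, Y, tau, h):
    """Sketch of a kernel conditional quantile estimate: weight each
    observation by a Gaussian kernel in X around x0, then invert the
    weighted empirical CDF of Y at level tau."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # kernel weights around x0
    w /= w.sum()
    order = np.argsort(Y)
    cdf = np.cumsum(w[order])                # weighted conditional CDF of Y
    idx = np.searchsorted(cdf, tau)
    return Y[order][min(idx, len(Y) - 1)]

# For Y = 2X + small noise, the conditional median at X = 1 is near 2.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2, 20_000)
Y = 2 * X + rng.normal(0, 0.1, X.size)
q50 = kernel_conditional_quantile(1.0, X, Y, tau=0.5, h=0.05)
```

The bandwidth `h` controls the usual bias-variance trade-off: a small `h` localizes the estimate around `x0` but uses fewer effective observations.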
Rajdl, Kamil; Lansky, Petr
2014-02-01
Fano factor is one of the most widely used measures of variability of spike trains. Its standard estimator is the ratio of sample variance to sample mean of spike counts observed in a time window and the quality of the estimator strongly depends on the length of the window. We investigate this dependence under the assumption that the spike train behaves as an equilibrium renewal process. It is shown what characteristics of the spike train have large effect on the estimator bias. Namely, the effect of refractory period is analytically evaluated. Next, we create an approximate asymptotic formula for the mean square error of the estimator, which can also be used to find minimum of the error in estimation from single spike trains. The accuracy of the Fano factor estimator is compared with the accuracy of the estimator based on the squared coefficient of variation. All the results are illustrated for spike trains with gamma and inverse Gaussian probability distributions of interspike intervals. Finally, we discuss possibilities of how to select a suitable observation window for the Fano factor estimation. PMID:24245675
DEFF Research Database (Denmark)
Bollerslev, Tim; Todorov, Victor
We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes ...
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new estimator automatically compensates for the axial velocity when determining the transverse velocity by using fourth order moments rather than second order moments. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by using averaging of RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce spatial velocity dispersion.
Estimating Resilience Across Landscapes
Directory of Open Access Journals (Sweden)
Garry D. Peterson
2002-06-01
Full Text Available Although ecological managers typically focus on managing local or regional landscapes, they often have little ability to control or predict many of the large-scale, long-term processes that drive changes within these landscapes. This lack of control has led some ecologists to argue that ecological management should aim to produce ecosystems that are resilient to change and surprise. Unfortunately, ecological resilience is difficult to measure or estimate in the landscapes people manage. In this paper, I extend system dynamics approaches to resilience and estimate resilience using complex landscape simulation models. I use this approach to evaluate cross-scale edge, a novel empirical method for estimating resilience based on landscape pattern. Cross-scale edge provides relatively robust estimates of resilience, suggesting that, with some further development, it could be used as a management tool to provide rough and rapid estimates of areas of resilience and vulnerability within a landscape.
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to...
DEFF Research Database (Denmark)
2015-01-01
A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including...
Air transportation energy efficiency
Williams, L. J.
1977-01-01
The energy efficiency of air transportation, results of the recently completed RECAT studies on improvement alternatives, and the NASA Aircraft Energy Efficiency Research Program to develop the technology for significant improvements in future aircraft were reviewed.
Gardiner, John Corby
The electric power industry market structure has changed over the last twenty years since the passage of the Public Utility Regulatory Policies Act (PURPA). These changes include the entry by unregulated generator plants and, more recently, the deregulation of entry and price in the retail generation market. Such changes have introduced and expanded competitive forces on the incumbent electric power plants. Proponents of this deregulation argued that the enhanced competition would lead to a more efficient allocation of resources. Previous studies of power plant technical and allocative efficiency have failed to measure technical and allocative efficiency at the plant level. In contrast, this study uses panel data on 35 power plants over 59 years to estimate technical and allocative efficiency of each plant. By using a flexible functional form, which is not constrained by the assumption that regulation is constant over the 59 years sampled, the estimation procedure accounts for changes in both state and national regulatory/energy policies that may have occurred over the sample period. The empirical evidence presented shows that most of the power plants examined have operated more efficiently since the passage of PURPA and the resultant increase of competitive forces. Chapter 2 extends the model used in Chapter 1 and clarifies some issues in the efficiency literature by addressing the case where homogeneity does not hold. A more general model is developed for estimating both input and output inefficiency simultaneously. This approach reveals more information about firm inefficiency than the single estimation approach that has previously been used in the literature. Using the more general model, estimates are provided on the type of inefficiency that occurs as well as the cost of inefficiency by type of inefficiency. In previous studies, the ranking of firms by inefficiency has been difficult because of the cardinal and ordinal differences between different types of
Barriers to Industrial Energy Efficiency - Study (Appendix A), June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This study examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This study also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
Barriers to Industrial Energy Efficiency - Report to Congress, June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This report examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This report also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
EFFICIENCY ASSESSMENT OF THE PERSONNEL AUDIT AT THE MODERN ENTERPRISE
E. V. Maslova; Tsvetkova, E. V.
2015-01-01
The article considers the main criteria for assessing the efficiency of personnel audit, the structure of auditor risks when carrying out personnel audit, and a complex technique for assessing the efficiency of personnel audit on the basis of three methodical approaches: an expert estimation method, an economic efficiency assessment method applying factor analysis, and a ranking method for assessing the efficiency of administrative processes.
Reconsidering energy efficiency
International Nuclear Information System (INIS)
Energy and environmental policies are reconsidering energy efficiency. In a perfect market, rational and well-informed consumers reach economic efficiency which, at the given prices of energy and capital, corresponds to physical efficiency. In the real world, market failures and cognitive frictions keep consumers from perfectly rational and informed choices. Green incentive schemes aim at offsetting market failures and directing consumers toward more efficient goods and services. The problem is to fine-tune these incentive schemes
DETERMINANTS OF TECHNICAL EFFICIENCY ON PINEAPPLE FARMING
Directory of Open Access Journals (Sweden)
Nor Diana Mohd Idris
2013-01-01
Full Text Available This study analyzes the pineapple production efficiency of the Integrated Agricultural Development Project (IADP) in Samarahan, Sarawak, Malaysia and also studies its determinants. In the study area, IADP plays an important role in rural development as a poverty alleviation program through agricultural development. Despite the many privileges received by the farmers, especially from the government, they are still less efficient. This study adopts Data Envelopment Analysis (DEA) to measure technical efficiency. Further, this study aims to examine the determinants of efficiency by modelling the efficiency level as a function of the farmer's age, education level, family labor, years of experience in agriculture, association membership and farm size. The estimation used the Tobit model. The results from this study show that the majority of farmers in IADP are still less than fully efficient. In addition, the results show that reliance on family labor, years of experience in agriculture and participation as an association member are all important determinants of the level of efficiency for the IADP farmers in the agricultural sector. Increasing agricultural productivity can also help secure a more sustainable livelihood and increase the farmers' income. Such information is valuable for extension services and policy makers since it can help to guide policies toward increased efficiency among pineapple farmers in Malaysia.
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
Full Text Available In this article the method of multicriterion comparative efficiency estimation (Data Envelopment Analysis) and the possibility of its application in a total quality management system are considered.
Making energy efficiency happen
Hirst, E.
1991-04-01
Improving energy efficiency is the least expensive and most effective way to address simultaneously several national issues. Improving efficiency saves money for consumers, increases economic productivity and international competitiveness, enhances national security by lowering oil imports, and reduces the adverse environmental effects of energy production. This paper discusses some of the many opportunities to improve efficiency, emphasizing the roles of government and utilities.
Energy efficiency; Efficacite energetique
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-06-15
This road-map proposed by the Total Group aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, energy efficiency in industry, and Total's commitment. (A.L.B.)
On Local and Nonlocal Measures of Efficiency
Kallenberg, Wilbert C.M.; Ledwina, Teresa
1987-01-01
General results on the limiting equivalence of local and nonlocal measures of efficiency are obtained. Why equivalence occurs in so many testing and estimation problems is clarified. Uniformity of the convergence is a key point. The concepts of Frechet- and Hadamard-type differentiability, which imp
The energy efficiency of lead selfsputtering
DEFF Research Database (Denmark)
Andersen, Hans Henrik
1968-01-01
The sputtering efficiency (i.e. ratio between sputtered energy and impinging ion energy) has been measured for 30–75‐keV lead ions impinging on polycrystalline lead. The results are in good agreement with recent theoretical estimates. © 1968 The American Institute of Physics...
Waterproofing of facades - an efficient measure
Energy Technology Data Exchange (ETDEWEB)
Franke, L.; Kittl, R.; Witt, S.
1987-09-01
A discussion follows on the possibilities of estimating the water absorption of masonry facades exposed to driving rain and their subsequent drying behaviour. In addition, proposals are given on how to check the efficiency of waterproofing carried out on masonry facades, particularly with regard to joint permeabilities which might still exist.
Multidimensional kernel estimation
Milosevic, Vukasin
2015-01-01
Kernel estimation is one of the non-parametric methods used for estimation of a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option, for the 2D case, with the help of the new Delaunay2D class.
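The underlying idea of kernel density estimation is simple: average a smooth bump placed at every data point. A minimal sketch, assuming a Gaussian kernel (the ROOT classes named above are not reproduced here; the function name is hypothetical):

```python
import numpy as np

def gaussian_kde(x_eval, data, bandwidth):
    """Plain Gaussian kernel density estimate at the points x_eval:
    the average of Gaussian bumps centred at the data points."""
    u = (np.asarray(x_eval, float)[:, None]
         - np.asarray(data, float)[None, :]) / bandwidth
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # standard normal kernel
    return k.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
sample = rng.normal(size=50_000)
# The true standard normal density at 0 is 1/sqrt(2*pi) ≈ 0.3989.
density_at_zero = gaussian_kde(np.array([0.0]), sample, bandwidth=0.2)[0]
```

The naive evaluation above is O(n·m) for n data points and m evaluation points, which is exactly the cost that tree-based implementations such as the one described in the abstract are designed to cut down.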
Pore Velocity Estimation Uncertainties
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
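The first-order Taylor series combination described above reduces, for a product-quotient relation v = K·g/n with independent errors, to adding relative variances. A minimal sketch with made-up illustrative values (not the Hanford data; the function name is hypothetical):

```python
import math

def pore_velocity_with_uncertainty(K, sK, g, sg, n, sn):
    """First-order (Taylor series) uncertainty propagation for the pore
    velocity v = K * g / n, assuming independent errors in the hydraulic
    conductivity K, the head gradient g and the effective porosity n."""
    v = K * g / n
    # Relative variances add for a product/quotient at first order.
    rel_var = (sK / K) ** 2 + (sg / g) ** 2 + (sn / n) ** 2
    return v, abs(v) * math.sqrt(rel_var)

# Illustrative values only:
v, sv = pore_velocity_with_uncertainty(
    K=1e-4, sK=2e-5,   # hydraulic conductivity [m/s], +/- 20%
    g=0.01, sg=0.001,  # hydraulic head gradient [-], +/- 10%
    n=0.25, sn=0.025,  # effective porosity [-], +/- 10%
)
```

As the abstract notes, the dominant contributor to the pore velocity uncertainty is whichever input has the largest relative uncertainty, here the hydraulic head gradient region in the original study.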
Christensen, Mads
2009-01-01
Periodic signals can be decomposed into sets of sinusoids having frequencies that are integer multiples of a fundamental frequency. The problem of finding such fundamental frequencies from noisy observations is important in many speech and audio applications, where it is commonly referred to as pitch estimation. These applications include analysis, compression, separation, enhancement, automatic transcription and many more. In this book, an introduction to pitch estimation is given and a number of statistical methods for pitch estimation are presented. The basic signal models and associated es
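The decomposition into integer multiples of a fundamental suggests a simple baseline pitch estimator. The sketch below scores each candidate fundamental by summing spectral magnitude at its first few harmonics; it is a crude illustration of the harmonic model, not one of the statistical estimators the book presents:

```python
import numpy as np

def harmonic_sum_pitch(x, fs, f_min=80.0, f_max=400.0, n_harmonics=5):
    """Crude pitch estimator by harmonic summation: score each candidate
    fundamental f0 by the spectral magnitude at its first few harmonics."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

    def score(f0):
        idx = [int(np.argmin(np.abs(freqs - k * f0)))
               for k in range(1, n_harmonics + 1)]
        return float(spec[idx].sum())

    candidates = np.arange(f_min, f_max, 1.0)
    return max(candidates, key=score)

fs = 8000
t = np.arange(4000) / fs
signal = sum(np.sin(2 * np.pi * 110.0 * k * t) for k in range(1, 4))
f0_hat = harmonic_sum_pitch(signal, fs)
```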
Ahmad, Mukhtar
2012-01-01
State estimation is one of the most important functions in power system operation and control. This area is concerned with the overall monitoring, control, and contingency evaluation of power systems. It is mainly aimed at providing a reliable estimate of system voltages. State estimator information flows to control centers, where critical decisions are made concerning power system design and operations. This valuable resource provides thorough coverage of this area, helping professionals overcome challenges involving system quality, reliability, security, stability, and economy.Engineers are
Robust global motion estimation
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
A global motion estimation method based on robust statistics is presented in this paper. By using tracked feature points instead of whole-image pixels to estimate the parameters, the process is sped up. To further accelerate the computation and avoid numerical instability, an alternative description of the problem is given, and three types of solutions to it are compared. A two-step process also improves the robustness of the estimator, and automatic initial value selection is a further advantage of the method. The proposed approach is illustrated by a set of examples, which show good results at high speed.
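The two-step robust idea can be sketched for the simplest global motion model, a pure translation. This is an illustrative scheme (median-based initial fit, then a refit on inliers), not the paper's actual parameterization or solver:

```python
import numpy as np

def robust_translation(src, dst):
    """Two-step robust estimate of a global 2-D translation from matched
    feature points: a median-based initial value, then a refit on inliers."""
    d = np.asarray(dst, float) - np.asarray(src, float)
    t0 = np.median(d, axis=0)                        # step 1: robust initial fit
    resid = np.linalg.norm(d - t0, axis=1)
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    inliers = resid <= np.median(resid) + 3.0 * scale + 1e-9
    return d[inliers].mean(axis=0)                   # step 2: refit on inliers

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(20, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.1, size=(20, 2))
dst[:3] += 40.0                        # contaminate three matches with outliers
t_hat = robust_translation(src, dst)
```

Working from a few dozen feature matches rather than every pixel is what makes this kind of estimator fast.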
Shrinkage Estimators for Covariance Matrices
Daniels, Michael J.; Kass, Robert E.
2001-01-01
Estimation of covariance matrices in small samples has been studied by many authors. Standard estimators, like the unstructured maximum likelihood estimator (ML) or restricted maximum likelihood (REML) estimator, can be very unstable with the smallest estimated eigenvalues being too small and the largest too big. A standard approach to more stably estimating the matrix in small samples is to compute the ML or REML estimator under some simple structure that involves estimation of fewer paramet...
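A middle ground between the unstable unstructured estimator and a rigid structured one is linear shrinkage toward a simple target. The sketch below shrinks toward a scaled identity with a user-supplied weight alpha; data-driven choices of alpha (as in the shrinkage literature) are not shown:

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Linear shrinkage of the sample covariance toward a scaled identity:
    S_alpha = (1 - alpha) * S + alpha * (tr(S) / p) * I, with alpha in [0, 1].
    Pulls the smallest eigenvalues up and the largest down."""
    S = np.cov(X, rowvar=False)
    p = S.shape[1]
    target = np.trace(S) / p * np.eye(p)
    return (1.0 - alpha) * S + alpha * target

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 8))           # small sample: n = 10, p = 8
S = np.cov(X, rowvar=False)
S_shrunk = shrink_covariance(X, alpha=0.5)
```

Because the target is a multiple of the identity, the eigenvectors are unchanged and each eigenvalue moves toward the mean eigenvalue, which is exactly the stabilisation the abstract describes.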
Revisiting energy efficiency fundamentals
Energy Technology Data Exchange (ETDEWEB)
Perez-Lombard, L.; Velazquez, D. [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville (Spain); Ortiz, J. [Building Research Establishment (BRE), Garston, Watford, WD25 9XX (United Kingdom)
2013-05-15
Energy efficiency is a central target for energy policy and a keystone to mitigate climate change and to achieve a sustainable development. Although great efforts have been carried out during the last four decades to investigate the issue, focusing into measuring energy efficiency, understanding its trends and impacts on energy consumption and to design effective energy efficiency policies, many energy efficiency-related concepts, some methodological problems for the construction of energy efficiency indicators (EEI) and even some of the energy efficiency potential gains are often ignored or misunderstood, causing no little confusion and controversy not only for laymen but even for specialists. This paper aims to revisit, analyse and discuss some efficiency fundamental topics that could improve understanding and critical judgement of efficiency stakeholders and that could help in avoiding unfounded judgements and misleading statements. Firstly, we address the problem of measuring energy efficiency both in qualitative and quantitative terms. Secondly, main methodological problems standing in the way of the construction of EEI are discussed, and a sequence of actions is proposed to tackle them in an ordered fashion. Finally, two key topics are discussed in detail: the links between energy efficiency and energy savings, and the border between energy efficiency improvement and renewable sources promotion.
Efficient statistical classification of satellite measurements
Mills, Peter
2012-01-01
Supervised statistical classification is a vital tool for satellite image processing. It is useful not only when a discrete result, such as feature extraction or surface type, is required, but also for continuum retrievals by dividing the quantity of interest into discrete ranges. Because of the high resolution of modern satellite instruments and because of the requirement for real-time processing, any algorithm has to be fast to be useful. Here we describe an algorithm based on kernel estimation called Adaptive Gaussian Filtering that incorporates several innovations to produce superior efficiency as compared to three other popular methods: k-nearest-neighbour (KNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). This efficiency is gained with no compromises: accuracy is maintained, while estimates of the conditional probabilities are returned. These are useful not only to gauge the accuracy of an estimate in the absence of its true value, but also to re-calibrate a retrieved image and...
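The Adaptive Gaussian Filtering algorithm itself is not reproduced here, but the property the abstract emphasises (returning conditional class probabilities rather than a bare label) is easy to show with the KNN baseline it is compared against. A minimal sketch, assuming Euclidean distances and equal vote weights:

```python
import numpy as np

def knn_class_probs(train_X, train_y, x, k=5):
    """k-nearest-neighbour classifier returning conditional class
    probabilities as vote fractions among the k nearest training points."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return {c: float(np.mean(nearest == c)) for c in np.unique(train_y)}

rng = np.random.default_rng(3)
X0 = rng.normal([0.0, 0.0], 0.5, size=(30, 2))      # surface type 0
X1 = rng.normal([5.0, 5.0], 0.5, size=(30, 2))      # surface type 1
train_X = np.vstack([X0, X1])
train_y = np.array([0] * 30 + [1] * 30)
probs = knn_class_probs(train_X, train_y, np.array([0.2, -0.1]))
```

Such probabilities are what allow an estimate's reliability to be gauged per pixel, and a retrieved image to be re-calibrated afterwards.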
Image Enhancement with Statistical Estimation
Directory of Open Access Journals (Sweden)
Soumen Kanrar
2012-05-01
Full Text Available Contrast enhancement is an important area of research in image analysis, and over the past decade researchers have worked in this domain to develop efficient and adequate algorithms. The proposed method enhances image contrast using a binarization method with the help of Maximum Likelihood Estimation (MLE), and aims at bimodal and multi-modal images. The methodology uses mathematical information retrieved from the image: a binarization method generates the desired histogram by separating image nodes, and the enhanced image is produced by histogram specification with that binarization. The proposed method shows an improvement in image contrast enhancement compared with other methods.
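The paper's MLE-based binarization is not reproduced here; as an illustration of splitting a bimodal histogram into two classes, the sketch below uses the classic between-class-variance (Otsu) criterion instead, which is a standard stand-in for this step:

```python
import numpy as np

def otsu_threshold(pixels, bins=256):
    """Pick the grey-level threshold maximising between-class variance,
    a classic way to binarise a bimodal intensity histogram."""
    hist, edges = np.histogram(pixels, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * centers)                # class-0 mean mass
    mu_t = mu[-1]
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return float(centers[int(np.argmax(sigma_b))])

rng = np.random.default_rng(4)
pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(180, 10, 4000)])
pixels = np.clip(pixels, 0, 255)               # synthetic bimodal image
t = otsu_threshold(pixels)
```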
Semi-blind Channel Estimator for OFDM-STC
Institute of Scientific and Technical Information of China (English)
WU Yun; LUO Han-wen; SONG Wen-tao; HUANG Jian-guo
2007-01-01
Channel state information is required for maximum likelihood decoding in OFDM-STC systems. A subspace-based semi-blind method was proposed for estimating the channels of such systems. The channels are first estimated blindly, up to an ambiguity parameter, by exploiting the inherent structure of the STC, irrespective of the underlying signal constellations. Furthermore, a method was proposed to resolve the ambiguity using a few pilot symbols. Simulation results show that the proposed semi-blind estimator achieves higher spectral efficiency and improved estimation performance compared with a non-blind estimator.
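The final pilot-based step can be sketched in its simplest form. A blind subspace estimate recovers the channel only up to a complex scalar; the least-squares fit of that scalar from pilot references is shown below. This is an illustrative simplification (per-tap taps, noiseless pilots), not the paper's algorithm:

```python
import numpy as np

def resolve_scalar_ambiguity(h_blind, rx_pilots, tx_pilots):
    """Given a blind estimate h_blind = a * h (unknown complex scalar a),
    form pilot references h_ref = rx / tx and fit a by least squares:
    a_hat = <h_ref, h_blind> / <h_ref, h_ref>, then return h_blind / a_hat."""
    h_ref = np.asarray(rx_pilots) / np.asarray(tx_pilots)
    a_hat = np.vdot(h_ref, h_blind) / np.vdot(h_ref, h_ref)
    return h_blind / a_hat

rng = np.random.default_rng(5)
h = rng.normal(size=4) + 1j * rng.normal(size=4)      # true channel taps
a = 0.7 * np.exp(1j * 0.3)                            # unknown blind ambiguity
tx = np.ones(4)
h_hat = resolve_scalar_ambiguity(a * h, h * tx, tx)   # noiseless pilots
```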
Estimation of food consumption
Energy Technology Data Exchange (ETDEWEB)
Callaway, J.M. Jr.
1992-04-01
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Estimating exponential scheduling preferences
DEFF Research Database (Denmark)
Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid;
the travel time is random, Noland and Small (1995) suggested using expected utility theory to derive the reduced form of expected travel time cost that includes the cost of TTV. For the α-β-γ formulation of scheduling preferences and exponential or uniform distribution of travel time, Noland and Small (1995......, and consider how it should be normalised across individuals. The resulting discrete choice model is thus a combined model for mode and departure time choice. We use it to estimate the preference parameters of H and W and to compare the goodness-of-fit of the constant-exponential and constant...... convergence problems. However, the signs of the obtained parameter estimates are consistent with theory. We have no problems estimating the simpler constant-affine specification, where all parameter estimates have signs consistent with theory and low standard errors. Whether the models with the additivity...
Transverse Spectral Velocity Estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2014-01-01
A transverse oscillation (TO)-based method for calculating the velocity spectrum for fully transverse flow is described. Current methods yield the mean velocity at one position, whereas the new method reveals the transverse velocity spectrum as a function of time at one spatial location. A convex...... array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile...... flow using the Womersley–Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer...
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.;
either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake “calibrating adjustments” to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that...
Redundancy of Exchangeable Estimators
Directory of Open Access Journals (Sweden)
Narayana P. Santhanam
2014-10-01
Full Text Available Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings. On the other hand, the notion of the pattern of a sequence provides an information-theoretic framework for data compression in large alphabet scenarios. Because data compression and parameter estimation are intimately related, we study the redundancy of Bayes estimators coming from Poisson–Dirichlet priors (or “Chinese restaurant processes”) and the Pitman–Yor prior. This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression. In particular, we identify relations between alphabet sizes and sample sizes where the redundancy is small, thereby characterizing useful regimes for these estimators.
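The Chinese restaurant process mentioned above has a simple sequential sampling description, sketched here for the one-parameter (Dirichlet-process) case with concentration theta; the two-parameter Pitman–Yor variant is not shown:

```python
import random

def chinese_restaurant_process(n, theta, rng):
    """Sample a random partition of n items from a CRP with concentration
    theta: customer i starts a new table with probability theta / (i + theta),
    otherwise joins a table with probability proportional to its occupancy."""
    tables = []                       # tables[k] = customers seated at table k
    for i in range(n):
        r = rng.random() * (i + theta)
        if r < theta:
            tables.append(1)          # open a new table
        else:
            r -= theta
            for k, count in enumerate(tables):
                if r < count:
                    tables[k] += 1
                    break
                r -= count
    return tables

partition = chinese_restaurant_process(1000, 1.0, random.Random(0))
```

The number of occupied tables grows only logarithmically in n, which is the "rich get richer" behaviour that makes these priors natural for unknown discrete alphabets.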
Estimation of food consumption
International Nuclear Information System (INIS)
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Bridged Race Population Estimates
U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...
Estimating Equity Risk Premiums
Aswath Damodaran
1999-01-01
Equity risk premiums are a central component of every risk and return model in finance. Given their importance, it is surprising how haphazard the estimation of equity risk premiums remains in practice. The standard approach to estimating equity risk premiums remains the use of historical returns, with the difference in annual returns on stocks and bonds over a long time period comprising the expected risk premium, looking forward. We note the limitations of this approach, even in markets lik...
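The "standard approach" the abstract criticises reduces to a one-line computation: the arithmetic mean difference between annual stock and bond returns over some historical window. The returns below are made-up illustrative values, not market data:

```python
def historical_risk_premium(stock_returns, bond_returns):
    """Standard historical estimate of the equity risk premium: the mean
    annual stock return minus the mean annual bond return over the same
    period (arithmetic averaging)."""
    n = len(stock_returns)
    return (sum(stock_returns) - sum(bond_returns)) / n

erp = historical_risk_premium([0.10, 0.02, 0.15], [0.04, 0.03, 0.05])
```

The limitations noted in the paper follow directly from this form: the estimate is backward-looking and, over short windows, has a standard error that can swamp the premium itself.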
Corpetti, Thomas; Memin, Etienne; Pérez, Patrick
2000-01-01
In this paper we address the problem of estimating and analyzing the motion in image sequences showing fluid phenomena. Due to the considerable spatial and temporal distortions exhibited by luminance patterns in images of fluids, standard techniques from computer vision, originally designed for quasi-rigid motions with stable salient features, are not well adapted to this context. With that in mind, we investigate a dedicated energy-based motion estimator. The considered functional includes a...
Estimating achievement from fame
Simkin, M. V.; Roychowdhury, V. P.
2009-01-01
We report a method for estimating people's achievement based on their fame. Earlier we discovered (cond-mat/0310049) that fame of fighter pilot aces (measured as number of Google hits) grows exponentially with their achievement (number of victories). We hypothesize that the same functional relation between achievement and fame holds for other professions. This allows us to estimate achievement for professions where an unquestionable and universally accepted measure of achievement does not exi...
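The hypothesised exponential law (fame growing exponentially with achievement) inverts to a logarithmic estimator. The constants a and b below are illustrative fit parameters, not values reported by the authors:

```python
import math

def achievement_from_fame(fame, a, b):
    """Invert the hypothesised law fame = a * exp(b * achievement) to read
    achievement off a fame measure such as a Google hit count."""
    return math.log(fame / a) / b

fame = 20.0 * math.exp(0.25 * 30)      # hypothetical pilot with 30 victories
victories = achievement_from_fame(fame, a=20.0, b=0.25)
```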
1975-01-01
The capital cost estimate for the nuclear process heat source (NPHS) plant was made by: (1) using costs from the current commercial HTGR for electricity production as a base for items that are essentially the same and (2) development of new estimates for modified or new equipment that is specifically for the process heat application. Results are given in tabular form and cover the total investment required for each process temperature studied.
Cox, Brian
1995-01-01
COSTIT is a computer program that estimates the cost of an electronic design by reading an item-list file and a file containing the cost of each item. The accuracy of the cost estimate depends on the accuracy of the cost-list file. Written using the AWK utility for Sun4-series computers running SunOS 4.x and for IBM PC-series and compatible computers running MS-DOS. The Sun version is NPO-19587; the PC version is NPO-19157.
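The roll-up COSTIT performs can be sketched as a lookup-and-sum over the two inputs. This is an illustration of the idea (with missing items flagged, since the estimate is only as good as the cost list), not the actual AWK implementation:

```python
def estimate_design_cost(item_list, cost_table):
    """Sum per-item costs over a design's item list. Items absent from the
    cost table are returned separately so the estimate's coverage is visible."""
    known = [item for item in item_list if item in cost_table]
    missing = [item for item in item_list if item not in cost_table]
    total = sum(cost_table[item] for item in known)
    return total, missing

total, missing = estimate_design_cost(
    ["resistor", "opamp", "resistor", "fpga"],       # hypothetical item list
    {"resistor": 0.02, "opamp": 1.50},               # hypothetical cost list
)
```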
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
a regression model based on the quality of its predictions. In exploring the econometric issues, the objective of this study was to estimate a cost function in order to estimate the annual health care cost of dementia. Using different models, health care costs were regressed on the degree of dementia, sex, age...... on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e.g. for a mildly demented patient, the estimated annual health care costs varied from DKK 71 273 to DKK 90 940 (US$ 1 = DKK 7) depending on which model was chosen. For the two-part model, the estimated health care costs ranged from DKK 44 714, for a very mildly demented patient, to DKK 197 840, for a severely...
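The two-part prediction combines the two fitted parts multiplicatively: expected cost = P(any cost) × E[cost | any cost]. The sketch below shows that combination for already-fitted coefficients; the coefficients and covariates are placeholders, not estimates from the study:

```python
import math

def two_part_expected_cost(x, beta_logit, beta_linear):
    """Two-part model prediction. Part one: logistic regression for the
    probability of incurring any cost. Part two: linear model for the cost
    level given that costs are incurred. Expected cost is their product."""
    z = sum(b * xi for b, xi in zip(beta_logit, x))
    p_any = 1.0 / (1.0 + math.exp(-z))                   # P(cost > 0)
    level = sum(b * xi for b, xi in zip(beta_linear, x)) # E[cost | cost > 0]
    return p_any * level

# Placeholder covariates (intercept, dementia score) and coefficients.
cost = two_part_expected_cost([1.0, 2.0], [0.0, 0.0], [100.0, 50.0])
```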
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege;
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...... allows one to construct asymptotic confidence intervals....
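In the degenerate intercept-only case (a homogeneous Poisson process, no interaction) the logistic estimating function has a closed form: with data points labelled 1 and dummy points labelled 0, the fitted log-odds equals log(lambda/rho), so lambda_hat = rho · n_data / n_dummy. This toy case is an illustration of the data-plus-dummy construction, not the general Gibbs machinery:

```python
import numpy as np

def logistic_intensity_estimate(n_data, n_dummy, dummy_rate):
    """Intercept-only logistic estimating function for a homogeneous Poisson
    process: lambda_hat = rho * n_data / n_dummy, where rho is the known
    intensity of the dummy point pattern."""
    return dummy_rate * n_data / n_dummy

rng = np.random.default_rng(6)
n_data = int(rng.poisson(50))     # observed pattern, true intensity 50 (unit square)
n_dummy = int(rng.poisson(200))   # dummy pattern with known rate rho = 200
lam_hat = logistic_intensity_estimate(n_data, n_dummy, 200.0)
```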
Estimation of waves and ship responses using onboard measurements
DEFF Research Database (Denmark)
Montazeri, Najmeh
This thesis focuses on the estimation of waves and ship responses using ship-board measurements. This is useful for improving operational safety and performance efficiency in connection with the broader concept of onboard decision-support systems. Estimation of the sea state is studied using a set...
Adaptative Multigrid and Variable Parameterization for Optical-flow Estimation
Memin, Etienne; Pérez, Patrick
1997-01-01
We investigate the use of adaptive multigrid minimization algorithms for the estimation of the apparent motion field. The proposed approach provides a coherent and efficient framework for estimating piecewise-smooth flow fields under different parameterizations relative to adaptive partitions of the image. The performance of the resulting algorithms is demonstrated in the difficult context of a non-convex global energy formulation.
Estimation of scale parameters of logistic distribution by linear functions of sample quantiles
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
The large-sample estimation of the standard deviation of the logistic distribution employs the asymptotically best linear unbiased estimators based on sample quantiles, where the sample quantiles are established from a pair of single spacings. Finally, a table of the variances and efficiencies of the estimator for 5 ≤ n ≤ 65 is provided, and a comparison is made with other linear estimators.
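The idea of reading the scale off a pair of symmetric quantiles follows from the logistic quantile function Q(p) = mu + s·log(p/(1−p)), so (Q(1−p) − Q(p)) / (2·log((1−p)/p)) = s. A minimal sketch, assuming a simple plug-in of sample quantiles rather than the paper's optimally weighted estimator:

```python
import numpy as np

def logistic_scale_from_quantiles(sample, p=0.1):
    """Estimate the logistic scale s from one pair of symmetric sample
    quantiles, using Q(p) = mu + s * log(p / (1 - p))."""
    lo, hi = np.quantile(sample, [p, 1.0 - p])
    return float((hi - lo) / (2.0 * np.log((1.0 - p) / p)))

rng = np.random.default_rng(7)
sample = rng.logistic(loc=3.0, scale=2.0, size=20000)
s_hat = logistic_scale_from_quantiles(sample)
```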
Coal Moisture Estimation in Power Plant Mills
DEFF Research Database (Denmark)
Andersen, Palle; Bendtsen, Jan Dimon; Pedersen, Tom S.;
2009-01-01
Knowledge of the moisture content of the raw coal fed to a power plant coal mill is important for efficient operation of the mill. The moisture is commonly measured approximately once a day using offline chemical analysis methods; however, it would be advantageous for the dynamic operation of the...... plant if an on-line estimate were available. In this paper we propose such an on-line estimator (an extended Kalman filter) that uses only existing measurements. The scheme is tested on actual coal mill data collected during a one-month operating period, and it is found that the daily measured moisture
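The filtering idea can be sketched in its simplest scalar form: a Kalman filter tracking a slowly drifting quantity (here standing in for a moisture fraction) through noisy readings. This is a generic sketch under a random-walk state model, not the paper's mill-specific extended Kalman filter:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.1, p0=1.0):
    """Scalar Kalman filter: q is the process noise variance (how fast the
    true value may drift), r the measurement noise variance."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                   # predict step: random-walk state model
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with the measurement residual
        p = (1.0 - k) * p
        out.append(x)
    return out

# Noisy readings around a true moisture fraction of 0.15.
noise = random.Random(8)
readings = [0.15 + noise.gauss(0.0, 0.02) for _ in range(200)]
estimates = kalman_1d(readings)
```

The gain k settles to a small steady-state value, so the filter behaves like an exponentially weighted average whose effective window is set by the ratio of q to r.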
Adaptive vehicle motion estimation and prediction
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.