Githure John I
2009-09-01
Background: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, summarizes error estimates from multiple habitat locations, making it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods: Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations, and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
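The rejection-filtering idea can be sketched in a few lines: draw samples from the current prior and keep each one with probability equal to the likelihood of the observed measurement outcome. This is a minimal sketch; the cos² likelihood model and the parameter names below are standard assumptions for iterative phase estimation, not details taken from this abstract.

```python
import math
import random

def rejection_filter_update(prior_samples, outcome, theta, m):
    """Keep each prior sample with probability equal to the likelihood of
    the observed outcome (assumed cos^2 phase-estimation model)."""
    accepted = []
    for phi in prior_samples:
        p0 = math.cos(m * (phi - theta) / 2) ** 2  # P(outcome = 0 | phi)
        p = p0 if outcome == 0 else 1.0 - p0
        if random.random() < p:
            accepted.append(phi)
    return accepted

# One update: a uniform prior over phase, conditioned on observing outcome 0
# for an (illustrative) experiment with setting theta = 0.8 and m = 4.
random.seed(0)
prior = [random.uniform(0.0, 2.0 * math.pi) for _ in range(5000)]
posterior = rejection_filter_update(prior, outcome=0, theta=0.8, m=4)
```

In the full algorithm the accepted samples are refit to a Gaussian between updates, which is what keeps the memory footprint small enough for an FPGA.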
On Asymptotically Efficient Estimation in Semiparametric Models
Schick, Anton
1986-01-01
A general method for the construction of asymptotically efficient estimates in semiparametric models is presented. It improves and modifies Bickel's (1982) construction of adaptive estimates and obtains asymptotically efficient estimates under conditions weaker than those in Bickel.
Econometric Analysis on Efficiency of Estimator
Khoshnevisan, M.; Kaymram, F.; Singh, Housila P.; Singh, Rajesh; Smarandache, Florentin
2003-01-01
This paper investigates the efficiency of an alternative to ratio estimator under the super population model with uncorrelated errors and a gamma-distributed auxiliary variable. Comparisons with usual ratio and unbiased estimators are also made.
Management systems efficiency estimation in tourism organizations
Alexandra I. Mikheyeva
2011-01-01
The article is concerned with management systems efficiency estimation in tourism organizations, examines effective management systems requirements and characteristics in tourism organizations and takes into account principles of management systems formation.
Efficiently adapting graphical models for selectivity estimation
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
… to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate that estimation errors can be greatly reduced, leading to orders of magnitude more efficient query execution plans in many cases. Optimization time is kept in the range of tens of milliseconds, making this a practical approach for industrial-strength query optimizers.
Efficient estimation of price adjustment coefficients
Lyhagen, Johan
1999-01-01
The price adjustment coefficient model of Amihud and Mendelson (1987) is shown to be suitable for estimation by the Kalman filter, a technique that, under some commonly used conditions, is asymptotically efficient. Monte Carlo simulations show that both bias and mean squared error are much smaller compared to the estimators proposed by Damodaran and Lim (1991) and Damodaran (1993). A test for the adequacy of the model is also proposed. Using data from four minor, the Nordic countries ex…
An efficient estimator for Gibbs random fields
Janžura, Martin
2014-01-01
Vol. 50, No. 6 (2014), pp. 883-895. ISSN 0023-5954. R&D Projects: GA ČR (CZ) GBP402/12/G097. Institutional support: RVO:67985556. Keywords: Gibbs random field; efficient estimator; empirical estimator. Subject RIV: BA - General Mathematics. Impact factor: 0.541, year: 2014. http://library.utia.cas.cz/separaty/2015/SI/janzura-0441325.pdf
Efficient uncertainty analysis using optimal statistical estimator
When performing best estimate calculations, uncertainty needs to be quantified. Different uncertainty methods have been developed worldwide for uncertainty evaluation. In the present work an optimal statistical estimator algorithm was adapted, extended and used for response surface generation. The objective of the study was to demonstrate its applicability for uncertainty evaluation of single-valued or continuous-valued parameters. The results showed that the optimal statistical estimator is efficient in predicting lower and upper uncertainty bounds. This makes it possible to apply the CsA method for uncertainty evaluation of any kind of transient. (author)
Holzegel, Gustav
2016-01-01
We generalize our unique continuation results recently established for a class of linear and nonlinear wave equations $\\Box_g \\phi + \\sigma \\phi = \\mathcal{G} ( \\phi, \\partial \\phi )$ on asymptotically anti-de Sitter (aAdS) spacetimes to aAdS spacetimes admitting non-static boundary metrics. The new Carleman estimates established in this setting constitute an essential ingredient in proving unique continuation results for the full nonlinear Einstein equations, which will be addressed in forthcoming papers. Key to the proof is a new geometrically adapted construction of foliations of pseudoconvex hypersurfaces near the conformal boundary.
Higher Efficiency of Motion Estimation Methods
J. Gamec
2004-12-01
This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relatively low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted modifications conditionally use the AWM and the full search algorithm (FSA). Simulation results show that the proposed technique can efficiently improve the motion estimation performance.
Optimal estimator for assessing landslide model efficiency
J. C. Huang
2006-06-01
The often-used success rate (SR) in measuring cell-based landslide model efficiency is based on the ratio of successfully predicted unstable cells over total actual landslide sites, without considering the performance in predicting stable cells. We proposed a modified SR (MSR), in which we include the performance of stable cell prediction. The goal and virtue of MSR is to avoid over-prediction while upholding stable sensitivity throughout all simulated cases. Landslide susceptibility maps (a total of 3969 cases) with a full range of performance (from worst to perfect) in stable and unstable cell predictions are created and used to probe how estimators respond to model results in calculating efficiency. The kappa method used for satellite image analysis is drawn on for comparison. Results indicate that kappa is too stern for landslide modeling, giving very low efficiency values in 90% of simulated cases. The old SR tends to give high model efficiency under certain conditions, yet with significant over-prediction. To examine the capability of MSR and the differences between SR and MSR as performance indicators, we applied the SHALSTAB model to a mountainous watershed in Taiwan. Despite the fact that the best model result deduced by SR projects 120 hits over 131 actual landslide sites, this high efficiency is only obtained when unstable cells cover an incredibly high percentage (75%) of the entire watershed. By contrast, the best simulation indicated by MSR projects 83 hits over 131 actual landslide sites while unstable cells cover only 16% of the studied watershed.
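The contrast between SR and a stable-cell-aware measure can be made concrete with a confusion matrix over grid cells. The sketch below implements SR as defined in the abstract and Cohen's kappa as the comparison statistic; the cell counts are illustrative inventions, and the MSR formula itself is not reproduced because the abstract does not state it.

```python
def success_rate(tp, fn):
    """SR: correctly predicted unstable cells over all actual landslide cells."""
    return tp / (tp + fn)

def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between predicted and actual cell states."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Over-prediction scenario echoing the abstract: 120/131 hits, but most of
# the (hypothetical) 10,000-cell watershed is flagged unstable.
print(round(success_rate(120, 11), 3))            # SR stays high: 0.916
print(round(cohens_kappa(120, 7400, 11, 2469), 3))  # kappa collapses: 0.006
```

The example reproduces the abstract's complaint from the opposite direction: kappa punishes the over-prediction that SR rewards.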
Efficient estimation of rare-event kinetics
Trendelkamp-Schroer, Benjamin
2014-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively, but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general as it does not invoke any specific dynamical model, and can provide accurate estimates of the rare event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the proces…
Efficient Estimation of Rare-Event Kinetics
Trendelkamp-Schroer, Benjamin; Noé, Frank
2016-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively, but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general, as it does not invoke any specific dynamical model and can provide accurate estimates of the rare-event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the process occurs more rapidly than its reverse, making the approach especially attractive for downhill processes such as folding and binding in biomolecules. Our method is implemented in the PyEMMA software.
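A minimal illustration of a reversible Markov model is the symmetrized-count estimator, which enforces detailed balance by construction. This is a deliberate simplification: the paper's contribution is estimators that additionally constrain the stationary distribution to a separately known equilibrium, which has no closed form and is not reproduced here.

```python
def reversible_transition_matrix(counts):
    """Estimate a transition matrix satisfying detailed balance by
    symmetrizing the observed transition counts (a simple reversible
    estimator; not the fixed-equilibrium estimator of the paper)."""
    n = len(counts)
    x = [[(counts[i][j] + counts[j][i]) / 2.0 for j in range(n)] for i in range(n)]
    row_sums = [sum(row) for row in x]
    p = [[x[i][j] / row_sums[i] for j in range(n)] for i in range(n)]
    total = sum(row_sums)
    pi = [s / total for s in row_sums]  # implied stationary distribution
    return p, pi

# Illustrative 3-state transition counts from short trajectories.
counts = [[90, 10, 0],
          [8, 80, 12],
          [0, 15, 85]]
p, pi = reversible_transition_matrix(counts)
```

By construction pi[i] * p[i][j] == pi[j] * p[j][i] for all pairs, which is exactly the detailed-balance property that lets equilibrium information constrain kinetic estimates.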
ON THE UNBIASED ESTIMATOR OF THE EFFICIENT FRONTIER
OLHA BODNAR; TARAS BODNAR
2010-01-01
In the paper, we derive an unbiased estimator of the efficient frontier. It is shown that the suggested estimator corrects the overoptimism of the sample efficient frontier documented in Siegel and Woodgate (2007). Moreover, an exact F-test on the efficient frontier is presented.
Efficient Estimating Functions for Stochastic Differential Equations
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a…
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Bing Liu; Zhen Chen; Xiangdong Liu; Fan Yang
2014-01-01
Increasing the computational efficiency of attitude estimation is a critical problem related to modern spacecraft, especially for those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on the vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint for the quaternion in the global estimator. The proposed attitude estimator is perfor...
Higher Efficiency of Motion Estimation Methods
J. Gamec; Marchevsky, S.; Gamcova, M.
2004-01-01
This paper presents a new motion estimation algorithm to improve the performance of the existing searching algorithms at a relatively low computational cost. We try to amend the incorrect and/or inaccurate estimate of motion with higher precision by using adaptive weighted median filtering and its modifications. The median filter is well-known. A more general filter, called the Adaptively Weighted Median Filter (AWM), of which the median filter is a special case, is described. The submitted mod…
Quantum enhanced estimation of optical detector efficiencies
Barbieri Marco
2016-01-01
Quantum mechanics establishes the ultimate limit to the scaling of the precision on any parameter, by identifying optimal probe states and measurements. While this paradigm is, at least in principle, adequate for the metrology of quantum channels involving the estimation of phase and loss parameters, we show that estimating the loss parameters associated with a quantum channel and a realistic quantum detector are fundamentally different. While Fock states are provably optimal for the former, we identify a crossover in the nature of the optimal probe state for estimating detector imperfections as a function of the loss parameter using Fisher information as a benchmark. We provide theoretical results for on-off and homodyne detectors, the most widely used detectors in quantum photonics technologies, when using Fock states and coherent states as probes.
How efficient is estimation with missing data?
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs), in which their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data … train a Gaussian mixture model (GMM). We test the trained GMM for two cases, in which the test dataset is missing or complete. The results show that CEM is the most efficient method in both cases, while MI is the worst performer of the three. PW and CEM prove to be more stable, in particular for higher MDP…
Fast and Statistically Efficient Fundamental Frequency Estimation
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2016-01-01
parametric methods are much more costly to run. In this paper, we propose an algorithm which significantly reduces the cost of an accurate maximum likelihood-based estimator for real-valued data. The speed up is obtained by exploiting the matrix structure of the problem and by using a recursive solver. Via...
Optimal estimator for assessing landslide model efficiency
Huang, J C; S. J. Kao
2006-01-01
The often-used success rate (SR) in measuring cell-based landslide model efficiency is based on the ratio of successfully predicted unstable cells over total actual landslide sites without considering the performance in predicting stable cells. We proposed a modified SR (MSR), in which we include the performance of stable cell prediction. The goal and virtue of MSR is to avoid over-prediction while upholding stable sensitivity throughout all simulated cases. Landslide susceptibility maps (a t...
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Bing Liu
2014-01-01
Increasing the computational efficiency of attitude estimation is a critical problem related to modern spacecraft, especially for those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on the vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint for the quaternion in the global estimator. The proposed attitude estimator is performed in four stages. First, the local attitude estimation error system is described by a polytopic linear model. Then the local error attitude estimator is designed with constant coefficients based on the robust H2 filtering algorithm. Subsequently, the attitude predictions and the local error attitude estimations are calculated by a gyro-based model and the local error attitude estimator. Finally, the attitude estimations are updated by the predicted attitude with the local error attitude estimations. Since the local error attitude estimator has constant coefficients, it does not need to calculate the matrix inversion for the filter gain matrix or to update the Jacobian matrices online to obtain the local error attitude estimations. As a result, the computational complexity of the proposed attitude estimator is reduced significantly. Simulation results demonstrate the efficiency of the proposed attitude estimation strategy.
Efficient estimation for ergodic diffusions sampled at high frequency
Sørensen, Michael
A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class … estimators of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in the sense of Godambe and Heyde are shown to give rate-optimal and efficient estimators under weak conditions.
Efficient estimation for high similarities using odd sketches
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
Estimating set similarity is a central problem in many computer applications. In this paper we introduce the Odd Sketch, a compact binary sketch for estimating the Jaccard similarity of two sets. The exclusive-or of two sketches equals the sketch of the symmetric difference of the two sets. This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector comparison. We present a theoretical analysis of the quality of estimation to guarantee the reliability of Odd Sketch-based estimators. Our experiments confirm this efficiency, and demonstrate the efficiency of Odd Sketches in comparison with $b$-bit minwise hashing schemes on association rule learning and …
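The exclusive-or identity stated in the abstract is easy to verify directly: each element flips one parity bit, so elements in the intersection cancel and only the symmetric difference survives. The SHA-256 bucketing below is an illustrative choice, and the full estimator, which maps the Hamming weight of the XOR to a Jaccard estimate, is not reproduced here.

```python
import hashlib

def odd_sketch(items, n_bits=64):
    """Odd Sketch: bit b stores the parity of the number of elements
    hashing to bucket b (illustrative SHA-256 bucketing)."""
    bits = 0
    for x in items:
        bucket = int(hashlib.sha256(x.encode()).hexdigest(), 16) % n_bits
        bits ^= 1 << bucket  # flip the parity of this element's bucket
    return bits

a = {"cat", "dog", "fox"}
b = {"dog", "fox", "owl"}
# XOR of two sketches equals the sketch of the symmetric difference.
assert odd_sketch(a) ^ odd_sketch(b) == odd_sketch(a ^ b)
```

The identity holds exactly, regardless of hash collisions, because parity is additive mod 2.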
PFP total operating efficiency calculation and basis of estimate
The purpose of the Plutonium Finishing Plant (PFP) Total Operating Efficiency Calculation and Basis of Estimate document is to provide the calculated value and basis of estimate for the Total Operating Efficiency (TOE) for the material stabilization operations to be conducted in 234-52 Building. This information will be used to support both the planning and execution of the Plutonium Finishing Plant (PFP) Stabilization and Deactivation Project's (hereafter called the Project) resource-loaded, integrated schedule
Efficient estimation of functionals in nonparametric boundary models
Reiß, Markus; Selk, Leonie
2014-01-01
For nonparametric regression with one-sided errors and a boundary curve model for Poisson point processes we consider the problem of efficient estimation for linear functionals. The minimax optimal rate is obtained by an unbiased estimation method which nevertheless depends on a Hölder condition or monotonicity assumption for the underlying regression or boundary function. We first construct a simple blockwise estimator and then build up a nonparametric maximum-likelihood approach for expon…
Extrapolated HPGe efficiency estimates based on a single calibration measurement
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε_0 of the base sample of volume V_0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V_0, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate of ε = 1/2 [ε_h + ε_L] ± 1/2 [ε_h - ε_L] (general), where an uncertainty Δε = 1/2 [ε_h - ε_L] brackets the limits for the maximum possible error. Both ε_h and ε_L diverge from ε_0 as V deviates from V_0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε
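The bracketing formula in the abstract reduces to two lines of arithmetic; the numerical efficiencies below are made-up values for illustration only.

```python
def bracketed_efficiency(eps_high, eps_low):
    """Average extrapolated HPGe efficiency and its maximum-error
    half-width, per the abstract's bracketing formula."""
    eps = (eps_high + eps_low) / 2.0      # central estimate
    delta = (eps_high - eps_low) / 2.0    # maximum possible error
    return eps, delta

# Hypothetical high/low extrapolations for a larger-than-calibrated sample.
eps, delta = bracketed_efficiency(0.052, 0.040)
```

For these inputs the report would read ε = 0.046 ± 0.006, with the half-width growing as the sample volume departs further from the calibrated geometry.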
Robust and efficient estimation with weighted composite quantile regression
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
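The objective that composite quantile regression minimizes can be written compactly as an average of check losses over several quantile levels. The paper's weighting scheme is data-driven and not reproduced here, so this sketch uses equal weights.

```python
def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def composite_quantile_loss(residuals, taus):
    """Equal-weight composite quantile loss averaged over quantile levels;
    CQR minimizes this over the regression parameters."""
    total = sum(check_loss(r, t) for r in residuals for t in taus)
    return total / len(taus)

# Symmetric residuals at the usual tau grid {0.25, 0.5, 0.75}.
loss = composite_quantile_loss([1.0, -1.0], [0.25, 0.5, 0.75])
```

Because the check loss never squares its argument, heavy-tailed errors such as Cauchy noise inflate the objective only linearly, which is the source of the robustness claimed in the abstract.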
Phytoremediation: realistic estimation of modern efficiency and future possibility
Kinetic peculiarities of radionuclide migration in the 'soil-plant' system of the Chernobyl region have been investigated by means of numerical modelling. A quantitative estimate of the half-time of natural soil cleaning has been obtained. The potential and efficiency of modern phytoremediation technology have been estimated. General requirements and the future potential of phytoremediation biotechnology have been outlined. (author)
Improving Woody Biomass Estimation Efficiency Using Double Sampling
B. Scott Shouse; Lhotka, John M.; Songlin Fei; Parrott, David L.
2012-01-01
Although double sampling has been shown to be an effective method to estimate timber volume in forest inventories, only a limited body of research has tested the effectiveness of double sampling on forest biomass estimation. From forest biomass inventories collected over 9,683 ha using systematic point sampling, we examined how a double sampling scheme would have affected precision and efficiency in these biomass inventories. Our results indicated that double sample methods would have yielded...
Technical Efficiency Estimation of Rice Production in South Korea
Mohammed, Rezgar; Saghaian, Sayed
2014-01-01
This paper uses a stochastic frontier production function to estimate the technical efficiency of rice production in South Korea. Data from eight provinces were taken between 1993 and 2012. The purpose of this study is to determine whether the agricultural policy made by the Korean government achieved high technical efficiency in rice production and also to identify the variables that could decrease technical inefficiency in rice production. The study showed there is a possibility to …
Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism
Custer Adrian V
2005-08-01
Background: The time-varying flows of biomass and energy in tsetse (Glossina) can be examined through the construction of a dynamic mass-energy budget specific to these flies, but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because the approach ignores all the biological inefficiencies and costs, such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results: This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed based on a detailed quantification of the composition of food and body tissue and on a description of the major metabolic pathways in tsetse simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to
Improving Woody Biomass Estimation Efficiency Using Double Sampling
B. Scott Shouse
2012-05-01
Although double sampling has been shown to be an effective method to estimate timber volume in forest inventories, only a limited body of research has tested the effectiveness of double sampling on forest biomass estimation. From forest biomass inventories collected over 9,683 ha using systematic point sampling, we examined how a double sampling scheme would have affected precision and efficiency in these biomass inventories. Our results indicated that double sample methods would have yielded biomass estimations with similar precision as systematic point sampling when the small sample was ≥ 20% of the large sample. When the small to large sample time ratio was 3:1, relative efficiency (a combined measure of time and precision) was highest when the small sample was a 30% subsample of the large sample. At a 30% double sample intensity, there was a < 3% deviation from the original percent margin of error at almost half the required time. Results suggest that double sampling can be an efficient tool for natural resource managers to estimate forest biomass.
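The two-phase idea can be sketched with the classical ratio estimator: a cheap auxiliary variable is measured on the large sample, and biomass on the small subsample only. Variable names and numbers below are illustrative; the inventory-specific details of the study go beyond the abstract.

```python
def double_sample_ratio_estimate(aux_large, subsample):
    """Two-phase (double) sampling ratio estimator: scale the auxiliary
    mean from the large sample by the biomass/auxiliary ratio observed
    on the small subsample (classical textbook form)."""
    xbar_large = sum(aux_large) / len(aux_large)
    ybar = sum(y for _, y in subsample) / len(subsample)
    xbar_small = sum(x for x, _ in subsample) / len(subsample)
    return xbar_large * (ybar / xbar_small)

aux = [10, 12, 8, 11, 9, 14, 10, 10]    # e.g. point-sample basal area, all plots
pairs = [(10, 52), (12, 61), (8, 39)]   # (auxiliary, measured biomass) subsample
estimate = double_sample_ratio_estimate(aux, pairs)
```

The efficiency question studied in the paper is exactly the trade-off visible here: a larger subsample tightens the y/x ratio but erodes the time savings of the two-phase design.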
Deductive derivation and Turing-computerization of semiparametric efficient estimation.
Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan
2015-12-01
Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous time regression model with dependent disturbances given by a general square integrable semimartingale with unknown distribution. An example of such a noise is a non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes type markets with jumps. An adaptive model selection procedure, based on the weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived, and the robust efficiency of the model selection procedure is shown.
Recent estimates of energy efficiency potential in the USA
Sreedharan, P. [Energy and Environmental Economics E3, 101 Montgomery Street, 16th Floor, San Francisco, CA 94104 (United States)
2013-08-15
Understanding the potential for reducing energy demand through increased end-use energy efficiency can inform energy and climate policy decisions. However, if potential estimates are vastly different, they engender controversial debates, clouding the usefulness of energy efficiency in shaping a clean energy future. A substantive question thus arises: is there a general consensus on the potential estimates? To answer this question, this paper reviews recent studies of US national and regional energy efficiency potential in buildings and industry. Although these studies are based on differing assumptions, methods, and data, they suggest technically possible reductions of circa 25-40% in electricity demand and circa 30% in natural gas demand in 2020, and economic reductions of circa 10-25% in electricity demand and circa 20% in natural gas demand in 2020. These estimates imply that efficiency could turn US electricity demand growth from 2009 to 2020 negative, or at least reduce it to circa 0.3%/year (compared to circa 1% baseline growth).
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood-based methods are often difficult to use with large, irregularly spaced spatial datasets, because exact calculation of the likelihood for n observations requires O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address these computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
Kernel density estimation of a multidimensional efficiency profile
Poluektov, Anton
2014-01-01
Kernel density estimation is a convenient way to estimate the probability density of a distribution given a sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for describing the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\Lambda_b^0\to D^0p\pi$ decay.
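The baseline technique the abstract builds on can be sketched in a few lines. Below is a minimal one-dimensional Gaussian KDE, not the paper's multidimensional efficiency estimator or its approximate-density correction; the bandwidth and sample size are arbitrary illustrative choices.

```python
import numpy as np

def kde(samples, grid, bandwidth):
    """Plain Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * bandwidth)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=5000)
grid = np.linspace(-4.0, 4.0, 81)
dens = kde(samples, grid, bandwidth=0.3)
print(dens.sum() * (grid[1] - grid[0]))  # total mass on the grid, close to 1
```

Widening the bandwidth here visibly smears the tails and any narrow peak, which is exactly the drawback the paper's correction targets.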
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy's Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim's calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory's website (see www.nrel.gov/fastsim).
Concurrent estimation of efficiency, effectiveness and returns to scale
Khodakarami, Mohsen; Shabani, Amir; Farzipoor Saen, Reza
2016-04-01
In recent years, data envelopment analysis (DEA) has been widely used to assess both efficiency and effectiveness. Accurate measurement of overall performance requires concurrent consideration of these two measures. Several well-known methods assess both efficiency and effectiveness, but they suffer from a number of issues: non-linearity, paradoxical improvement solutions, evaluation of efficiency and effectiveness in two independent environments (i.e., dividing an operating unit into two autonomous departments for performance evaluation), and problems associated with determining economies of scale. To overcome these issues, this paper develops a series of linear DEA methods to estimate the efficiency, effectiveness, and returns to scale of decision-making units (DMUs) simultaneously. This paper considers the departments of a DMU as a united entity so as to recommend consistent improvements. We first present a model under the constant returns to scale (CRS) assumption and examine its relationship with an existing network DEA model. We then extend the model to the variable returns to scale (VRS) condition, and again its relationship with an existing network DEA model is discussed. Next, we introduce a new integrated two-stage additive model. Finally, an in-depth analysis of returns to scale is provided. A case study demonstrates the applicability of the proposed models.
Efficient distance-including integral screening in linear-scaling Møller-Plesset perturbation theory
Maurer, Simon A.; Lambrecht, Daniel S.; Kussmann, Jörg; Ochsenfeld, Christian
2013-01-01
Efficient estimates for the preselection of two-electron integrals in atomic-orbital based Møller-Plesset perturbation theory (AO-MP2) are presented, which allow for evaluating the AO-MP2 energy with computational effort that scales linearly with molecular size for systems with a significant HOMO-LUMO gap. The estimates are based on our recently introduced QQR approach [S. A. Maurer, D. S. Lambrecht, D. Flaig, and C. Ochsenfeld, J. Chem. Phys. 136, 144107 (2012), 10.1063/1.3693908], which exploits the asymptotic decay of the integral values with increasing bra-ket separation, as deduced from the multipole expansion, and combines this decay behavior with the common Schwarz bound into a tight and simple estimate. We demonstrate on a diverse selection of benchmark systems that our AO-MP2 method in combination with the QQR-type estimates produces reliable results for systems with both localized and delocalized electronic structure, while in the latter case the screening essentially reverts to the common Schwarz screening. For systems with localized electronic structure, our AO-MP2 method shows an early onset of linear scaling, as demonstrated on DNA systems. The favorable scaling behavior allows us to compute systems with more than 1000 atoms and 10,000 basis functions on a single core that are clearly not accessible with conventional MP2 methods. Furthermore, our AO-MP2 method is particularly suited for parallelization, and we present benchmark calculations on a protein-DNA repair complex comprising 2025 atoms and 20,371 basis functions.
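The screening idea, combining a Schwarz-type bound with distance decay, can be illustrated schematically. The 1/R damping below is only a stand-in for the published QQR formula, and the pair magnitudes and separations are random toy data, not real shell-pair quantities.

```python
import numpy as np

# Q[i] plays the role of the Schwarz factor of shell pair i; R[i, j] is the
# bra-ket separation between pairs i and j (schematic units).
rng = np.random.default_rng(6)
npairs = 200
Q = 10.0 ** rng.uniform(-8, 0, npairs)          # magnitudes over 8 decades
R = rng.uniform(1.0, 50.0, (npairs, npairs))    # pair-pair separations

thresh = 1e-7
schwarz = np.sqrt(Q[:, None] * Q[None, :])      # Cauchy-Schwarz upper bound
qqr = schwarz / np.maximum(R, 1.0)              # distance-damped estimate

# The distance-aware estimate never keeps more integrals than Schwarz alone.
print((schwarz > thresh).sum(), (qqr > thresh).sum())
```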
Efficient estimation of orthophoto images using visibility restriction
Miura, Hiroyuki; Chikatsu, Hirofumi
2015-05-01
Orthophoto images generated from aerial photographs are used in river management, road design, and various other fields, since an orthophoto can visualize land use with position information. However, image distortion often occurs in the orthorectification process. This distortion has traditionally been assessed manually by an evaluator, which takes considerable time; from the viewpoint of process efficiency, it should be estimated automatically. With this motivation, this paper focuses on the angle V formed between the view vector at the exposure point and the normal vector at the center of a patch area. To evaluate the relation between image distortion and the angle V, DMC images acquired at 2000 m altitude were used, and the angle V computed for 10 m x 10 m patches was adopted as a visibility restriction. It was confirmed that image distortion occurred for patches whose angle V exceeded 69 degrees. It is therefore concluded that efficient orthophoto evaluation can be performed using the angle V as a visibility restriction.
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards are a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
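The role the discount rate plays in the LCC calculation can be sketched as a simple present-value computation. The savings stream, horizon, and rates below are hypothetical placeholders, not DOE rulemaking values.

```python
def npv_savings(annual_savings, discount_rate, years):
    """Present value of a stream of equal annual energy cost savings."""
    return sum(annual_savings / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

# Hypothetical: $500/yr of energy cost savings over a 15-year equipment life
low = npv_savings(500.0, 0.03, 15)    # low commercial discount rate
high = npv_savings(500.0, 0.07, 15)   # high commercial discount rate
print(round(low), round(high))
```

A higher commercial discount rate shrinks the present value of the same savings stream, which can flip the LCC comparison from net savings to net cost.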
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
Jakobsen, Nina Munkholt; Sørensen, Michael
estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion...... simulation study comparing an efficient and a non-efficient estimating function....
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. More research effort is therefore needed in the design and implementation of novel and robust synchronization algorithms for PLC that enable real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA. The obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation in a Xilinx Kintex FPGA.
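The cross-correlation estimator described above can be sketched in floating-point NumPy (the real design targets fixed-point FPGA logic). The sequence length, root, delay, and noise level are arbitrary illustrative values.

```python
import numpy as np

def zadoff_chu(root, length):
    """Odd-length Zadoff-Chu sequence: constant amplitude, sharp autocorrelation."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def symbol_timing(rx, preamble):
    """Estimate symbol start as the peak of the cross-correlation magnitude."""
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
zc = zadoff_chu(root=5, length=63)          # root coprime with the length
delay = 40
rx = np.concatenate([np.zeros(delay, complex), zc, np.zeros(50, complex)])
rx += 0.1 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))
print(symbol_timing(rx, zc))                # recovers the inserted delay
```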
Efficient Estimation of Smooth Distributions From Coarsely Grouped Data
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-01-01
Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information...... in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most...... applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm...
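A minimal sketch of a penalized composite link fit, using a generic quasi-Newton optimizer rather than the authors' iteratively reweighted least-squares algorithm; the bin layout, counts, and penalty weight are toy values.

```python
import numpy as np
from scipy.optimize import minimize

def ungroup(y, C, penalty=1e2):
    """Estimate a smooth latent distribution gamma on a fine grid from
    binned counts y, where E[y] = C @ gamma (composite link model)."""
    m = C.shape[1]
    D = np.diff(np.eye(m), n=2, axis=0)          # second-difference penalty

    def objective(beta):
        gamma = np.exp(beta)                     # positivity via log link
        mu = C @ gamma
        # negative Poisson log-likelihood plus smoothness penalty
        return np.sum(mu - y * np.log(mu)) + 0.5 * penalty * np.sum((D @ beta) ** 2)

    res = minimize(objective, np.zeros(m), method="L-BFGS-B")
    return np.exp(res.x)

# Toy example: 4 coarse bins, each pooling 5 fine cells
C = np.kron(np.eye(4), np.ones((1, 5)))
y = np.array([20.0, 120.0, 80.0, 10.0])
gamma = ungroup(y, C)
print(gamma.round(1))  # smooth fine-grid values; total mass matches the counts
```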
An efficient algebraic approach to observability analysis in state estimation
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can directly provide the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn.
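The basic criterion behind any observability analysis can be illustrated numerically: a measurement set is observable when its Jacobian has full column rank. The toy DC example below uses a numerical rank test, not the paper's algebraic row/column-transfer technique.

```python
import numpy as np

def observable(H, n_states, tol=1e-9):
    """A measurement set is observable iff its Jacobian has full column rank."""
    return np.linalg.matrix_rank(H, tol=tol) == n_states

# Toy DC state-estimation Jacobian: 3 buses, with angles at buses 2 and 3 as
# states (bus 1 is the reference). Rows are flows on lines 1-2 and 2-3.
H_full = np.array([[-1.0,  0.0],    # flow 1-2 depends on theta2 only
                   [ 1.0, -1.0]])   # flow 2-3 depends on theta2 and theta3
H_reduced = H_full[:1, :]           # drop the 2-3 flow measurement

print(observable(H_full, 2), observable(H_reduced, 2))
```

Dropping the second flow measurement makes bus 3 unobservable, so a pseudo-measurement would be needed to restore observability.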
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
Mossel, Elchanan
2010-01-01
We propose a simple and efficient Bayesian model of iterative learning on social networks. This model is efficient in two senses: the process both results in an optimal belief, and can be carried out with modest computational resources for large networks. This result extends Condorcet's Jury Theorem to general social networks, while preserving rationality and computational feasibility. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement of S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard dev...
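A simplified DeGroot-style version of learning with Gaussian estimators can be sketched as follows: agents on a ring network repeatedly average their belief with their neighbors'. This is an illustration of consensus on a known network, not the paper's fully Bayesian update rule.

```python
import numpy as np

rng = np.random.default_rng(2)
S = 3.0                                   # true state of the world
n = 50
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring network
beliefs = S + rng.standard_normal(n)      # private measurements S_v ~ N(S, 1)

for _ in range(1000):                     # repeated neighborhood averaging
    beliefs = np.array([(beliefs[i] + beliefs[j] + beliefs[k]) / 3.0
                        for i, (j, k) in enumerate(neighbors)])

# The doubly stochastic update preserves the mean, so agents approach consensus
# at the average of the private signals, whose error shrinks like 1/sqrt(n).
print(beliefs.std(), abs(beliefs.mean() - S))
```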
The estimation of energy efficiency for hybrid refrigeration system
Highlights: ► We present the experimental setup and the model of the hybrid cooling system. ► We examine impact of the operating parameters of the hybrid cooling system on the energy efficiency indicators. ► A comparison of the final and the primary energy use for a combination of the cooling systems is carried out. ► We explain the relationship between the COP and PER values for the analysed cooling systems. -- Abstract: The concept of the air blast-cryogenic freezing method (ABCF) is based on an innovative hybrid refrigeration system with one common cooling space. The hybrid cooling system consists of a vapor compression refrigeration system and a cryogenic refrigeration system. The prototype experimental setup for this method on the laboratory scale is discussed. The application of the results of experimental investigations and the theoretical–empirical model makes it possible to calculate the cooling capacity as well as the final and primary energy use in the hybrid system. The energetic analysis has been carried out for the operating modes of the refrigerating systems for the required temperatures inside the cooling chamber of −5 °C, −10 °C and −15 °C. For the estimation of the energy efficiency the coefficient of performance COP and the primary energy ratio PER for the hybrid refrigeration system are proposed. A comparison of these coefficients for the vapor compression refrigeration and the cryogenic refrigeration system has also been presented.
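The two indicators proposed for the energy assessment can be written down directly. The 40% primary-to-electric conversion efficiency and the operating point below are hypothetical placeholders, not values from the study.

```python
def cop(q_cool_kw, w_electric_kw):
    """Coefficient of performance: cooling delivered per unit of final energy."""
    return q_cool_kw / w_electric_kw

def per(q_cool_kw, w_electric_kw, grid_efficiency=0.4):
    """Primary energy ratio: cooling delivered per unit of primary energy,
    assuming a hypothetical 40% primary-to-electric conversion chain."""
    return cop(q_cool_kw, w_electric_kw) * grid_efficiency

# Hypothetical operating point: 10 kW of cooling for 4 kW of electric input
print(cop(10.0, 4.0), per(10.0, 4.0))
```

The PER is smaller than the COP because it charges the system for the upstream conversion losses of its final energy carrier, which is what makes the vapor compression and cryogenic systems comparable on a common basis.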
ESTIMATION OF EFFICIENCY PARTNERSHIP LARGE AND SMALL BUSINESS
Олег Васильевич Чабанюк
2014-05-01
Full Text Available In this article, based on the definition of key factors and their components, an algorithm is developed of consistent, logically connected stages of the transition from a traditional enterprise to an innovation-type enterprise through intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: experts determine the importance of the model parameters that ensure the effectiveness of intrapreneurship, and, using qualimetric modeling of the expert estimates, an "efficiency of intrapreneurship" score is calculated. According to the author's projections, the optimum level of this indicator should exceed 0.5, although it should be noted that achieving this level is realistic only in the second or third year of existence of the intrapreneurial structure. The proposed method was tested in practice and can be used for the formation of intrapreneurship in large and medium-sized enterprises as one of the methods of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
Efficient mental workload estimation using task-independent EEG features
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. Approach. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man’s vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Main results. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
Energy efficiency upgrades have been gaining widespread attention globally as a cost-effective approach to addressing energy challenges. The cost-effectiveness of these projects is generally predicted using engineering estimates pre-implementation, often with little ex post analysis of project success. In this paper, for a suite of energy efficiency projects, we directly compare ex ante engineering estimates of energy savings to ex post econometric estimates that use 15-min interval, building-level energy consumption data. In contrast to most prior literature, our econometric results confirm the engineering estimates, even suggesting the engineering estimates were too modest. Further, we find heterogeneous efficiency impacts by time of day, suggesting select efficiency projects can be useful in reducing peak load. - Highlights: • Regression discontinuity used to estimate energy savings from efficiency projects. • Ex post econometric estimates validate ex ante engineering estimates of energy savings. • Select efficiency projects shown to reduce peak load
Statistically Efficient Methods for Pitch and DOA Estimation
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore......, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state
Estimation of the Asian telecommunication technical efficiencies with panel data
YANG Yu-yong; JIA Huai-jing
2007-01-01
This article uses panel data and a stochastic frontier analysis (SFA) model to analyze and compare the technical efficiencies of the telecommunication industry in 28 Asian countries from 1994 to 2003. The technical efficiencies of the Asian countries were found to have steadily increased over the decade. The high-income countries have the highest technical efficiency; however, income is not the only factor that affects technical efficiency.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions of functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaged method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.
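The local linear technique used for the initial estimates can be sketched for a single smoothing variable. The kernel, bandwidth, and test function below are illustrative choices, and the one-step back-fitting stage is not reproduced.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel."""
    sw = np.exp(-0.25 * ((x - x0) / h) ** 2)        # sqrt of kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                   # intercept = fit at x0

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(400)
print(local_linear(x, y, x0=0.25, h=0.05))  # close to sin(pi/2) = 1
```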
Efficient Estimation of Nonlinear Finite Population Parameters Using Nonparametrics
Goga, Camelia; Ruiz-Gazen, Anne
2012-01-01
Currently, the high-precision estimation of nonlinear parameters such as Gini indices, low-income proportions or other measures of inequality is particularly crucial. In the present paper, we propose a general class of estimators for such parameters that take into account univariate auxiliary information assumed to be known for every unit in the population. Through a nonparametric model-assisted approach, we construct a unique system of survey weights that can be used to estimate any nonlinea...
An Efficient Bandwidth Estimation Schemes used in Wireless Mesh Networks
A. Sandeep Kumar
2012-01-01
Wireless mesh networks (WMNs) have been widely used for new-generation wireless networks. The capability of self-organization in WMNs reduces the complexity of wireless network deployment and maintenance. Accurate estimation of the bandwidth available at the mesh nodes is therefore required by the admission control mechanism that provides QoS guarantees in wireless mesh networks. Existing bandwidth estimation schemes do not give clear output. Here we propose a bandwidth estimation scheme ...
Efficient estimation of breeding values from dense genomic data
Genomic, phenotypic, and pedigree data can be combined to produce estimated breeding values (EBV) with higher reliability. If coefficient matrix Z includes genotypes for many loci and marker effects (u) are normally distributed with equal variance at each, estimation of u by mixed model equations or...
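For equal marker variances, the mixed-model computation the abstract refers to reduces to a ridge-type linear system. The dimensions, variance components, and simulated genotypes below are arbitrary illustrative choices, not a real breeding dataset.

```python
import numpy as np

def snp_blup(Z, y, lam):
    """Mixed-model (ridge) solution for marker effects u with equal variances:
    (Z'Z + lam * I) u = Z'y, where lam = sigma_e^2 / sigma_u^2."""
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

rng = np.random.default_rng(4)
n, p = 200, 50
Z = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # genotype dosages at p loci
u_true = rng.normal(0.0, 0.3, p)               # simulated marker effects
y = Z @ u_true + rng.standard_normal(n)        # phenotypes with unit noise
u_hat = snp_blup(Z, y, lam=1.0 / 0.09)         # sigma_e^2 = 1, sigma_u^2 = 0.09
print(np.corrcoef(u_true, u_hat)[0, 1])        # correlation of true vs estimated
```

Estimated breeding values then follow as Z @ u_hat for any genotyped animal.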
Efficient and Accurate Robustness Estimation for Large Complex Networks
Wandelt, Sebastian
2016-01-01
Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
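A baseline metric-induced robustness estimate, of the kind faster approximations are compared against, can be sketched as follows: remove nodes in order of (static) degree centrality and average the surviving giant-component fraction. This is the simple attack simulation, not the paper's sub-quadratic algorithm.

```python
from collections import defaultdict

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def robustness(adj):
    """R-index: mean giant-component fraction under a degree-based attack."""
    n = len(adj)
    order = sorted(adj, key=lambda u: -len(adj[u]))  # highest degree first
    removed, total = set(), 0.0
    for u in order:
        removed.add(u)
        total += giant_component(adj, removed) / n
    return total / n

# Star graph: removing the hub immediately shatters the network
star = defaultdict(set)
for leaf in range(1, 10):
    star[0].add(leaf)
    star[leaf].add(0)
r = robustness(star)
print(r)
```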
Indexes of estimation of efficiency of the use of intellectual resources of industrial enterprises
Audzeichyk Olga
2015-12-01
Full Text Available The article examines theoretical and practical aspects of the estimation of the intellectual resources of industrial enterprises and proposes a method for estimating the efficiency of their use.
Efficient estimation of burst-mode LDA power spectra
Velte, Clara Marika; George, William K
2010-01-01
The estimation of power spectra from LDA data provides signal processing challenges for fluid dynamicists for several reasons. Acquisition is dictated by randomly arriving particles which cause the signal to be highly intermittent. This both creates self-noise and causes the measured velocities to...... increased requirements for good statistical convergence due to the random sampling of the data. In the present work, the theory for estimating burst-mode LDA spectra using residence time weighting is discussed and a practical estimator is derived and applied. A brief discussion on the self-noise in spectra...... and correlations is included, as well as one regarding the statistical convergence of the spectral estimator for random sampling. Further, the basic representation of the burst-mode LDA signal has been revisited due to observations in recent years of particles not following the flow (e.g., particle...
Determination of feed efficiency requires estimates of intake and digestibility of the diet, but these are difficult to measure on pasture. The objective of this research was to determine if plant cuticular alkanes were suitable as markers to estimate intake and diet digestibility of grazing cows wi...
Energy-Efficient Channel Estimation in MIMO Systems
2006-01-01
Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection as well as to do optimization such as water filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase which includes transmission of training symbols, storage of those symbols at the receiver, and also channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
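The training phase whose energy cost the abstract models can be sketched as a least-squares fit of the channel matrix from a known pilot block. The antenna counts, training length, and noise level below are arbitrary illustrative values, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(5)
nt, nr, L = 4, 4, 16          # transmit antennas, receive antennas, pilot length
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
P = (rng.standard_normal((nt, L)) + 1j * rng.standard_normal((nt, L))) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal((nr, L)) + 1j * rng.standard_normal((nr, L)))
Y = H @ P + noise             # received training block

# Least-squares channel estimate: H_hat = Y P^H (P P^H)^{-1}
H_hat = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)
rel = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
print(rel)                    # small relative estimation error
```

Longer pilot blocks lower the estimation error but cost more transmit and storage energy, which is the trade-off the paper's energy model captures.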
Transverse correlation: An efficient transverse flow estimator - initial results
Holfort, Iben Kraglund; Henze, Lasse; Kortbek, Jacob; Jensen, Jørgen Arendt
vascular hemodynamics, the flow angle cannot easily be found as the angle is temporally and spatially variant. Additionally, the precision of traditional methods is severely lowered for high flow angles, and they break down for a purely transverse flow. To overcome these problems we propose a new method for...... estimating the transverse velocity component. The method measures the transverse velocity component by estimating the transit time of the blood between two parallel lines beamformed in receive. The method has been investigated using simulations performed with Field II. Using 15 emissions per estimate, a...... at 45 degrees. The method performs stably down to a signal-to-noise ratio of 0 dB, where a standard deviation of 5.5% and a bias of 1.2% is achieved....
Computationally Efficient and Noise Robust DOA and Pitch Estimation
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor (microphone) array signal processing, the periodic signals are estimated from spatio-temporal samples with regard to the direction of arrival (DOA) of the signal of interest. In this paper, we consider the problem of pitch and DOA estimation of quasi-periodic audio signals. In real life scenarios, recorded......
Efficient probabilistic planar robot motion estimation given pairs of images
O. Booij; B. Kröse; Z. Zivkovic
2010-01-01
Estimating the relative pose between two camera positions given image point correspondences is a vital task in most view based SLAM and robot navigation approaches. In order to improve the robustness to noise and false point correspondences it is common to incorporate the constraint that the robot m
Efficient estimates of cochlear hearing loss parameters in individual listeners
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer-hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment (Plack et al., 2004). According to...
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Efficient Topology Estimation for Large Scale Optical Mapping
Elibol, Armagan; Garcia, Rafael
2013-01-01
Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Virginie F. Ruiz
2002-10-01
This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints. Thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies the Bayes theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
Estimation of Nitrogen Fertilizer Use Efficiency in Dryland Agroecosystem
LI Shi-qing; LI Sheng-xiu
2001-01-01
A field trial was carried out to study nitrogen fertilizer recovery by four crops in succession in a manurial loess soil in Yangling. The results showed that the nitrogen fertilizer not only had significant effects on the first crop, but also had longer residual effects, even on the fourth crop. The average apparent nitrogen fertilizer recovery by the first crop was 31.7%, while the cumulative nitrogen recovery by the four crops was as high as 62.3%, nearly double the former. It is quite clear that the nitrogen fertilizer recovery by the first crop alone is not a reliable basis for estimating nitrogen fertilizer use efficiency unless the residual effect of the nitrogen fertilizer is included.
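The apparent-recovery arithmetic behind these figures is simple: the extra N taken up by a fertilized crop relative to an unfertilized control, divided by the N applied, summed over successive crops. The application rate and uptake values below are hypothetical, chosen only so the cumulative recovery matches the 62.3% reported.

```python
def apparent_n_recovery(uptake_fertilized, uptake_control, n_applied):
    """Apparent nitrogen fertilizer recovery: extra N uptake of the
    fertilized crop over the unfertilized control, as a fraction of
    the N applied."""
    return (uptake_fertilized - uptake_control) / n_applied

n_applied = 120.0                             # kg N/ha (assumed rate)
uptake_control = [40.0, 35.0, 30.0, 28.0]     # hypothetical kg N/ha
uptake_fert = [78.04, 53.0, 42.0, 34.72]      # hypothetical kg N/ha

per_crop = [apparent_n_recovery(f, c, n_applied)
            for f, c in zip(uptake_fert, uptake_control)]
cumulative = sum(per_crop)   # residual effects accumulate over crops
```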
Ionization efficiency estimations for the SPES surface ion source
Manzolaro, M.; Andrighetto, A.; Meneghetti, G.; Rossignoli, M.; Corradetti, S.; Biasetto, L.; Scarpa, D.; Monetti, A.; Carturan, S.; Maggioni, G.
2013-12-01
Ion sources play a crucial role in ISOL (Isotope Separation On Line) facilities determining, with the target production system, the ion beam types available for experiments. In the framework of the SPES (Selective Production of Exotic Species) INFN (Istituto Nazionale di Fisica Nucleare) project, a preliminary study of the alkali metal isotopes ionization process was performed, by means of a surface ion source prototype. In particular, taking into consideration the specific SPES in-target isotope production, Cs and Rb ion beams were produced, using a dedicated test bench at LNL (Laboratori Nazionali di Legnaro). In this work the ionization efficiency test results for the SPES Ta surface ion source prototype are presented and discussed.
SONG Bo-wei; GUAN Yun-feng; ZHANG Wen-jun
2005-01-01
This paper deals with channel estimation for orthogonal frequency-division multiplexing (OFDM) systems with transmit diversity. Space-time coded OFDM systems, which can provide transmit diversity, require perfect channel estimation to improve communication quality. In actual OFDM systems, training sequences are usually used for channel estimation. The authors propose a training-based channel estimation strategy suitable for space-time coded OFDM systems. This novel strategy provides enhanced performance, high spectrum efficiency and relatively low computation complexity.
Efficient human pose estimation from single depth images.
Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew
2013-12-01
We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run super-real time on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
Estimate of ecological efficiency for thermal power plants in Brazil
Global warming and the consequent climatic changes that will come as a result of the increase of CO2 concentration in the atmosphere have increased the world's concern regarding reduction of these emissions, mainly in developed countries that pollute the most. Electricity generation in thermal power plants, as well as other industrial activities, such as chemical and petrochemical ones, entail the emission of pollutants that are harmful to humans, animals and plants. The emissions of carbon oxides (CO and CO2) and nitrous oxide (N2O) are directly related to the greenhouse effect. The negative effects of sulfur oxides (SO2 and SO3 named SOx) and nitrogen oxides (NOx) are their contribution to the formation of acid rain and their impacts on human health and on the biota in general. This study intends to evaluate the environmental impacts of the atmospheric pollution resulting from the burning of fossil fuels. This study considers the emissions of CO2, SOx, NOx and PM in an integral way, and they are compared to the international air quality standards that are in force using a parameter called ecological efficiency (ε)
Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models
Zhangong ZHOU; Rong JIANG; Weimin QIAN
2011-01-01
Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, which combines the nonparametric regression and functional-coefficient regression (FCR) models. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These resulting estimators are asymptotically normal, but each of them has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normalities are derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.
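The building block of all quantile estimation methods is the check (pinball) loss, whose minimizer over a constant is the empirical quantile. The paper's local linear and backfitting machinery minimizes this loss over functions; the sketch below shows only the constant case, with a grid search standing in for the optimizer.

```python
import numpy as np

def pinball_loss(theta, y, tau):
    """Check (pinball) loss at level tau: asymmetric absolute loss
    whose minimizer over a constant theta is the tau-th sample
    quantile of y."""
    u = y - theta
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

# Minimizing over a grid recovers the empirical median (tau = 0.5),
# even with the outlier 100 present - the robustness quantile methods buy.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
grid = np.linspace(0.0, 10.0, 10001)
best = grid[int(np.argmin([pinball_loss(t, y, 0.5) for t in grid]))]
```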
Highly Efficient Monte Carlo for Estimating the Unavailability of Markov Dynamic Systems
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. A highly efficient Monte Carlo method should therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.
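The advantage of weighted (biased-sampling) estimation over crude simulation for rare events can be illustrated on a toy problem: estimating a small normal tail probability, where the biased sampling density concentrates histories on the event and reweights them by the density ratio. This is a generic importance-sampling sketch, not the paper's Markov transport scheme.

```python
import numpy as np
from math import erf, sqrt

# Rare event: P(Z > 4) for a standard normal, about 3.17e-5.
rng = np.random.default_rng(0)
true_p = 0.5 * (1.0 - erf(4.0 / sqrt(2.0)))

n = 100_000
# Crude (analog) simulation: almost every history misses the event.
z = rng.standard_normal(n)
crude = np.mean(z > 4.0)

# Weighted estimation: sample from the biased density N(4, 1) so the
# event is common, then reweight by phi(x) / phi(x - 4) = exp(8 - 4x).
x = rng.standard_normal(n) + 4.0
weights = np.exp(-0.5 * x**2 + 0.5 * (x - 4.0) ** 2)
weighted = np.mean((x > 4.0) * weights)
```

With the same number of histories, the weighted estimator's relative error is under a percent while the crude one typically sees only a handful of hits.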
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first passage probabilities of high-dimensional nonlinear systems based on asymptotic estimation of low probabilities is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements...
Singbo, Alphonse G.; Lansink, Alfons Oude; Emvalomatis, Grigorios
2015-01-01
This paper analyzes technical efficiency and the value of the marginal product of productive inputs vis-a-vis pesticide use to measure allocative efficiency of pesticide use along productive inputs. We employ the data envelopment analysis framework and marginal cost techniques to estimate technic
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Different from the existing research that assumes the availability of perfect channel state information (CSI) at the communicating cooperative nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are assumed, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) in order to let the destination estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and considering the energy consumption during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
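The shape of such an EE metric can be sketched as rate over total consumed power, with the estimation error folded into the effective SNR denominator and the training energy into the power budget. The functional forms below are simplified assumptions for illustration, not the paper's exact two-hop amplify-and-forward expressions.

```python
import numpy as np

def energy_efficiency(p_tx, gain, noise_var, est_err_var,
                      p_circuit, p_training):
    """Hedged sketch of an EE metric with imperfect CSI: the channel
    estimation error variance raises the effective noise floor, and
    the training power is charged to the consumption side."""
    snr_eff = p_tx * gain / (noise_var + p_tx * est_err_var)
    rate = np.log2(1.0 + snr_eff)           # spectral efficiency, bits/s/Hz
    power = p_tx + p_circuit + p_training   # total consumed power
    return rate / power                     # bits per unit energy (normalized)
```

Under this form, EE falls monotonically as the estimation error variance grows, which is the cost of imperfect CSI the abstract accounts for.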
Efficiency assessment of using satellite data for crop area estimation in Ukraine
Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga
2014-06-01
The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of the study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments for crop area estimation in Ukraine is assessed. The main criteria used for efficiency analysis are as follows: (i) the relative efficiency, which shows by how much the error of area estimates can be reduced with satellite images, and (ii) the cost-efficiency, which shows by how much the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at oblast level.
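The relative-efficiency criterion has a standard large-sample form when satellite classifications are used as a regression covariate for ground-segment data: the variance of the direct estimator over that of the regression estimator is approximately 1/(1 - rho^2). This textbook sketch illustrates the criterion; the study's exact formulas and survey weighting are more involved.

```python
import numpy as np

def regression_relative_efficiency(ground, satellite):
    """Large-sample relative efficiency of a regression estimator that
    corrects ground-survey crop proportions with a satellite-classified
    covariate: var(direct) / var(regression) ~ 1 / (1 - rho^2), so the
    gain grows with the ground-satellite correlation."""
    rho = np.corrcoef(ground, satellite)[0, 1]
    return 1.0 / (1.0 - rho ** 2)

# Hypothetical segment data: ground-surveyed crop proportions and a
# satellite classification that tracks them closely.
ground = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
satellite = np.array([0.12, 0.22, 0.43, 0.52, 0.71])
re = regression_relative_efficiency(ground, satellite)
```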
Marlin, Benjamin
2012-01-01
Standard maximum likelihood estimation cannot be applied to discrete energy-based models in the general case because the computation of exact model probabilities is intractable. Recent research has seen the proposal of several new estimators designed specifically to overcome this intractability, but virtually nothing is known about their theoretical properties. In this paper, we present a generalized estimator that unifies many of the classical and recently proposed estimators. We use results from the standard asymptotic theory for M-estimators to derive a generic expression for the asymptotic covariance matrix of our generalized estimator. We apply these results to study the relative statistical efficiency of classical pseudolikelihood and the recently-proposed ratio matching estimator.
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Archana V
2011-04-01
The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those methods are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real-life dataset. The ratio estimator using the information on the population C.V. of the auxiliary variable emerges as the best estimator.
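One plausible ratio-type form for such an estimator scales the sample C.V. of the study variable by the ratio of the known population C.V. of the auxiliary variable to its sample C.V. This is an illustrative construction, not necessarily one of the paper's six estimators.

```python
import numpy as np

def sample_cv(values):
    """Sample coefficient of variation: s / mean."""
    return np.std(values, ddof=1) / np.mean(values)

def ratio_cv_estimator(y_sample, x_sample, cv_x_population):
    """Ratio-type estimator of the population C.V. of y (assumed
    illustrative form): adjust the sample C.V. of y by how far the
    sample C.V. of the auxiliary variable x falls from its known
    population value."""
    return sample_cv(y_sample) * cv_x_population / sample_cv(x_sample)

y = np.array([2.0, 4.0, 6.0, 8.0])   # hypothetical study variable
x = np.array([1.0, 2.0, 3.0, 4.0])   # auxiliary variable, pop. C.V. known
est = ratio_cv_estimator(y, x, cv_x_population=sample_cv(x))
```

By construction, when the sample C.V. of x matches its population value the adjustment vanishes and the estimator reduces to the plain sample C.V. of y.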
Akira Ogawa
1999-01-01
A cyclone dust collector is applied in many industries. The axial-flow cyclone in particular has the most simple construction and keeps high reliability for maintenance. On the other hand, the collection efficiency of the cyclone depends not only on the inlet gas velocity but also on the feed particle concentration; the collection efficiency increases with increasing feed particle concentration. Until now, however, the problem of how to estimate the dependence of the collection efficiency on the feed particle concentration has remained open, except for the investigation by Muschelknautz & Brunner [6]. Therefore, in this paper one estimation method for the collection efficiency of axial-flow cyclones is proposed. Its application to geometrically similar types of cyclones with body diameters D1 = 30, 50, 69 and 99 mm showed good agreement with the experimental collection efficiencies described in detail in the paper by Ogawa & Sugiyama [8].
Estimating the net implicit price of energy efficient building codes on U.S. households
Requiring energy efficiency building codes raises housing prices (or the monthly rental equivalent), but theoretically this effect might be fully offset by reductions in household energy expenditures. Whether there is a full compensating differential, or how much households are paying implicitly, is an empirical question. This study estimates the net implicit price of energy efficient building codes, IECC 2003 through IECC 2006, for American households. Using sample data from the American Community Survey 2007, a heteroskedastic seemingly unrelated estimation approach is used to estimate hedonic price (house rent) and energy expenditure models. The value of energy efficiency building codes is capitalized into housing rents, which are estimated to increase by 23.25 percent with the codes. However, the codes provide households a compensating differential of about a 6.47 percent reduction (about $7.71) in monthly energy expenditure. Results indicate that the mean household net implicit price for these codes is about $140.87 per month in 2006 dollars ($163.19 in 2013 dollars). However, this estimated price is shown to vary significantly by region, energy type and the rent gradient. - Highlights: • House rent increases by 23.25 percent with the energy efficiency codes. • Compensating differential of the codes is 6.47 percent. • Net implicit price of energy efficiency building codes is about $140.87 per month.
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. This model is a generalization of both the partially linear regression model and the varying-coefficient regression model, and allows one to explore a possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates, and a real dataset is used to illustrate our approach.
Takahashi, Fumitake; Kida, Akiko; Shimaoka, Takayuki
2010-10-15
Although representative removal efficiencies of gaseous mercury for air pollution control devices (APCDs) are important for preparing more reliable atmospheric emission inventories of mercury, they are still uncertain because they depend sensitively on many factors such as the type of APCD, gas temperature, and mercury speciation. In this study, representative removal efficiencies of gaseous mercury for several types of APCDs used in municipal solid waste incineration (MSWI) were derived using a statistical method. 534 measurements of mercury removal efficiency for APCDs used in MSWI were collected. APCDs were categorized as fixed-bed absorber (FA), wet scrubber (WS), electrostatic precipitator (ESP), and fabric filter (FF), and their hybrid systems. The data series of all APCD types had Gaussian log-normality. The average removal efficiency with a 95% confidence interval for each APCD was estimated. The FA, WS, and FF with carbon and/or dry sorbent injection systems had 75% to 82% average removal efficiencies. On the other hand, the ESP with/without dry sorbent injection had lower removal efficiencies of up to 22%. The type of dry sorbent injection in the FF system, dry or semi-dry, did not make more than 1% difference to the removal efficiency. The injection of activated carbon and carbon-containing fly ash in the FF system made less than 3% difference. Estimation errors of removal efficiency were especially high for the ESP. The national average removal efficiency of APCDs in Japanese MSWI plants was estimated on the basis of incineration capacity. Owing to the replacement of old APCDs for dioxin control, the national average removal efficiency increased from 34.5% in 1991 to 92.5% in 2003. This resulted in an additional reduction of about 0.86 Mg of emissions in 2003. Applying the methodology of this study to other important emission sources, such as coal-fired power plants, will contribute to better emission inventories. PMID:20713298
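Under the Gaussian log-normality the abstract reports, a point estimate with a 95% confidence interval can be formed with a normal-theory interval on the log scale, transformed back. This yields the geometric mean and its CI, a simplified sketch of the statistics rather than the study's exact estimator.

```python
import numpy as np

def lognormal_ci(samples, z=1.96):
    """Point estimate and approximate 95% CI for a log-normally
    distributed efficiency: normal-theory interval for the mean of
    log(samples), exponentiated back (i.e. the geometric mean and
    its CI)."""
    logs = np.log(np.asarray(samples, dtype=float))
    m = logs.mean()
    half = z * logs.std(ddof=1) / np.sqrt(len(logs))
    return np.exp(m), (np.exp(m - half), np.exp(m + half))

# Hypothetical removal efficiencies (percent) for one APCD type.
effs = [75.0, 80.0, 82.0, 78.0, 70.0, 85.0, 79.0, 76.0]
center, (lo, hi) = lognormal_ci(effs)
```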
Estimating Efficiency Offset between Two Groups of Decision-Making Units
Macek, Karel
Prague: Institute of Information Theory and Automation , 2013 - (Guy, T.; Kárný, M.) ISBN 978-80-903834-8-7. [The 3rd International Workshop on Scalable Decision Making: Uncertainty, Imperfection, Deliberation held in conjunction with ECML/PKDD 2013. Prague (CZ), 23.09.2013-23.09.2013] R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Data Envelopment Analysis * Local Regression * Efficiency Comparison * Interval Estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/AS/macek-estimating efficiency offset between two groups of decision-making units.pdf
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Nabeel Hussain
2014-04-01
This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were collected from 110 farmers using a multistage random sampling technique. The maximum likelihood estimation technique was used to estimate a Cobb-Douglas frontier production function. The analysis revealed that the estimated mean technical efficiency was 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was found to be 0.93, indicating that 93% of the variation in garlic output was due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, age having a positive and education a negative relationship with the output of garlic. This study suggests that, in order to increase the production of garlic by taking advantage of the farmers' high efficiency level, the government should invest in research and development for introducing good quality seeds to increase garlic productivity, and should organize training programs to educate farmers about garlic production.
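The idea of frontier-based technical efficiency can be sketched with corrected OLS (COLS), a simpler stand-in for the maximum-likelihood stochastic frontier the study estimates: regress log output on log inputs, shift the fitted Cobb-Douglas line up to the largest residual, and read each farm's efficiency as its distance below that frontier.

```python
import numpy as np

def cols_technical_efficiency(log_output, log_inputs):
    """Corrected OLS (COLS) sketch of Cobb-Douglas frontier technical
    efficiency: fit log output on log inputs by least squares, then
    score each unit as exp(residual - max residual), so the best
    observed unit lies on the frontier with efficiency 1."""
    X = np.column_stack([np.ones(len(log_output)), log_inputs])
    beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
    resid = log_output - X @ beta
    return np.exp(resid - resid.max())

# Hypothetical log input/output data for five farms.
log_inputs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
log_output = np.array([0.1, 0.7, 0.9, 1.6, 1.8])
te = cols_technical_efficiency(log_output, log_inputs)
```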
An efficient framework for estimation of muscle fiber orientation using ultrasonography
Ling, Shan; CHEN Bin; Zhou, Yongjin; Yang, Wan-Zhang; Zhao, Yu-Qian; Wang, Lei; Zheng, Yong-Ping
2013-01-01
Background Muscle fiber orientation (MFO) is an important parameter related to musculoskeletal functions. The traditional manual method for MFO estimation in sonograms was labor-intensive. The automatic methods proposed in recent years also involved voting procedures which were computationally expensive. Methods In this paper, we proposed a new framework to efficiently estimate MFO in sonograms. We firstly employed Multi-scale Vessel Enhancement Filtering (MVEF) to enhance fascicles in the so...
Sobchak Andrii
2016-02-01
The concept of hyperstability of a cybernetic system is considered as applied to the task of estimating the efficiency of virtual production enterprise functioning. The basic factors influencing the efficiency of functioning of such an enterprise are determined. The article offers a methodology for synthesizing the static structure of a decision-support system for managers of a virtual enterprise, in particular a procedure for determining the numerical and qualitative strength of the equipment producible at a virtual enterprise.
B. Bayram
2006-01-01
Data concerning body measurements, milk yield and body weight were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relations between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between the EFE and body measurements, indicating that taller, longer, deeper and especially heavier cows were not as efficient as smaller cows.
Estimating welfare changes from efficient pricing in public bus transit in India
Deb, Kaushik; Filippini, Massimo
2011-01-01
Three different and feasible pricing strategies for public bus transport in India are developed in a partial equilibrium framework with the objective of improving economic efficiency and ensuring revenue adequacy, namely average cost pricing, marginal cost pricing, and two-part tariffs. These are assessed not only in terms of gains in economic efficiency, but also in terms of changes in travel demand and consumer surplus. The estimated partial equilibrium price is higher in all three pricing reg...
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The "best available technology" (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
The output estimation of a DMU to preserve and improvement of the relative efficiency
Masoud Sanei
2013-10-01
In this paper, we consider the inverse BCC model, which is used to estimate the output levels of Decision Making Units (DMUs) when the input levels are changed while maintaining the efficiency index for all DMUs. The inverse BCC problem takes the form of a multi-objective nonlinear programming model (MONLP), which is not easy to solve. Therefore, we propose a linear programming model which gives a Pareto-efficient solution to the inverse BCC problem. Furthermore, we propose a model for improving the current efficiency value of the considered DMU. Numerical examples are also used to illustrate the proposed approaches.
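The efficiency index underlying DEA models can be illustrated in its simplest special case, one input and one output under constant returns to scale: each DMU's output-to-input ratio relative to the best ratio observed. The BCC and inverse-BCC models in the paper generalize this to multiple inputs and outputs, and to variable returns to scale, via linear programming; this is only a minimal sketch.

```python
import numpy as np

def dea_efficiency_single(inputs, outputs):
    """Single-input, single-output DEA efficiency under constant
    returns to scale: each DMU's output/input ratio scaled by the
    best ratio in the sample, so efficient DMUs score 1."""
    ratio = np.asarray(outputs, dtype=float) / np.asarray(inputs, dtype=float)
    return ratio / ratio.max()

# Hypothetical DMUs: the first attains the best output/input ratio.
eff = dea_efficiency_single([2.0, 4.0, 8.0], [4.0, 4.0, 8.0])
```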
CHUNG Warn-ill; CHOI Jun-ho; BAE Hae-young
2004-01-01
Many commercial database systems maintain histograms to summarize the contents of relations and permit efficient estimation of query result sizes and access plan costs. In spatial database systems, most spatial query predicates consist of topological relationships between spatial objects, and it is very important for the spatial query optimizer to estimate the selectivity of those predicates. In this paper, we propose a selectivity estimation scheme for spatial topological predicates based on a multidimensional histogram and a transformation scheme. The proposed scheme applies a two-partition strategy to the transformed object space to generate the spatial histogram, and estimates the selectivity of topological predicates based on the topological characteristics of the transformed space. It provides a way to estimate selectivity without excessive memory usage or additional I/Os in most spatial query optimizers.
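The general idea of histogram-based selectivity estimation can be sketched generically. The following is an illustrative equi-width 2-D point histogram with a uniformity assumption inside each cell, not the transform-based scheme of the paper; all names are hypothetical:

```python
def build_histogram(points, g, lo=0.0, hi=1.0):
    """Count points falling in each cell of a g x g equi-width grid."""
    w = (hi - lo) / g
    hist = [[0] * g for _ in range(g)]
    for x, y in points:
        i = min(g - 1, int((x - lo) / w))
        j = min(g - 1, int((y - lo) / w))
        hist[i][j] += 1
    return hist

def estimate_selectivity(hist, query, g, lo=0.0, hi=1.0):
    """Estimated fraction of objects inside a rectangular query window,
    assuming points are uniformly spread within each cell."""
    x1, y1, x2, y2 = query
    w = (hi - lo) / g
    total = sum(map(sum, hist))
    est = 0.0
    for i in range(g):
        for j in range(g):
            cx1, cy1 = lo + i * w, lo + j * w
            ox = max(0.0, min(x2, cx1 + w) - max(x1, cx1))
            oy = max(0.0, min(y2, cy1 + w) - max(y1, cy1))
            est += hist[i][j] * (ox * oy) / (w * w)
    return est / total
```

A real spatial optimizer would store object approximations (e.g., transformed rectangles) rather than points, but the estimate-by-cell-overlap step is the same.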
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of...
Zhan, Xianyuan; Qian, Xinwu; Ukkusuri, Satish V.
2015-01-01
The era of big data: advances in sensing technologies; development of large-scale pervasive computing infrastructure. Big data and transportation engineering: reconsidering traditional research problems; making infeasible problems feasible. In this work, using large-scale taxi data from NYC: taxi ridership analysis; link travel time estimation; taxi system efficiency.
EFFICIENCY ESTIMATION OF THERMAL POWER PLANT WITHOUT DIVIDING FUEL CONSUMPTION IN PRODUCT TYPES
A. E. Piir; V. B. Kuntysh
2016-01-01
A combined Thermal Power Plant unit is considered as an exergy generator. Exergy is supplied to consumers by streams of various power carriers. This makes it possible to avoid dividing the equipment and fuel consumption among product types, and to propose extremely simple methods for estimating the unit efficiency and calculating the power rate supplied from the Thermal Power Plant bus bars and collectors.
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal first involves development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
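The shrinkage idea can be illustrated generically: combine a robust estimate with a model-dependent one, with a data-driven weight that backs off the model when the two disagree. This is a hedged sketch of empirical-Bayes-type shrinkage, not the authors' exact estimator; all names are illustrative:

```python
def eb_shrinkage(beta_robust, var_robust, beta_model):
    """Shrink the robust estimate toward the model-based estimate.
    The weight kept on the robust estimate grows with the apparent
    squared bias of the model relative to the robust variance."""
    d2 = (beta_robust - beta_model) ** 2
    k = d2 / (d2 + var_robust)  # near 0: trust the model; near 1: trust robust
    return beta_model + k * (beta_robust - beta_model)
```

When the model-based and robust estimates agree, the combined estimate inherits the model's precision; when they diverge sharply relative to the robust estimator's variance, the result stays close to the robust estimate.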
CLASSIFICATION AND ESTIMATION OF THE EFFICIENCY OF SYSTEMS FOR UNINTERRUPTED ELECTROSUPPLY
Vinnikov A. V.
2015-03-01
Full Text Available In the article we present generalized block diagrams of stationary and transport systems of uninterrupted electrosupply, their maintenance, and the basic operating modes that provide uninterrupted electrosupply of crucial consumers. A classification of systems of uninterrupted electrosupply is presented. The basic classification attributes of such systems are their assignment to stationary or transport consumers of electric power, and the types of basic, reserve and emergency sources and converters of electric power used. In addition, systems of uninterrupted electrosupply can be classified by the circuits connecting them to consumers of electric power, by the kind of current (constant, variable, high-frequency), by breaks in electrosupply, by the type of switching equipment, and so on. For estimating the efficiency of systems of uninterrupted electrosupply it is proposed to use the following criteria: power and weight-dimension parameters, reliability parameters, quality of the electric power, and cost. Analytical expressions for calculating these efficiency criteria are presented. The suggested classification of systems of uninterrupted electrosupply and their operating modes, together with the basic criteria for estimating efficiency, will make it possible to raise the efficiency of pre-design work on creating systems with improved customer characteristics using a modern element base
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data. We anticipate that the low-tech field requirements will
Investing more in renewable energy sources and using energy in a rational and efficient way is vital for the sustainable growth of the world. Energy efficiency (EE) will play an increasingly important role for future generations. The aim of this work is to estimate how much the PNEf (National Plan for Energy Efficiency), launched by the Brazilian government in 2011, will save over the next 5 years by avoiding the construction of additional power plants, as well as the amount of CO2 emissions avoided. The marginal operating cost is computed for medium-term planning of the dispatching of power plants in the hydro-thermal system using Stochastic Dynamic Dual Programming, after incorporating stochastic energy efficiencies into the demand for electricity. We demonstrate that even for a modest improvement in energy efficiency (<1% per year), the savings over the next 5 years range from R$ 237 million in the conservative scenario to R$ 268 million in the optimistic scenario. By comparison, the new Belo Monte hydro-electric plant will cost R$ 26 billion, to be repaid over a 30 year period (i.e. R$ 867 million in 5 years). So in Brazil EE policies are preferable to building a new power plant. - Highlights: • Investing in energy efficiency is preferable to constructing a big power plant. • Increased energy efficiency policies would reduce the operating cost in Brazil. • Energy efficiency policies yield a great reduction in CO2eq emissions
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, on estimation accuracy for complex density structures in data streams, computing time and memory usage. KDE-Track is also demonstrated to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
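A minimal sketch of the grid-plus-interpolation idea behind such a method, assuming a 1-D stream, a Gaussian kernel and exponential forgetting (this is an illustration of the general technique, not the published KDE-Track code):

```python
import math

class StreamingKDE:
    """Maintain density values at m grid points; each arriving sample
    updates the grid in O(m), and queries interpolate linearly."""

    def __init__(self, lo, hi, m=101, bandwidth=0.3, decay=0.995):
        self.lo, self.hi, self.m = lo, hi, m
        self.step = (hi - lo) / (m - 1)
        self.grid = [0.0] * m
        self.h = bandwidth
        self.decay = decay  # exponential forgetting of old samples

    def update(self, x):
        c = 1.0 / (self.h * math.sqrt(2.0 * math.pi))
        for i in range(self.m):
            g = self.lo + i * self.step
            u = (g - x) / self.h
            k = c * math.exp(-0.5 * u * u)
            self.grid[i] = self.decay * self.grid[i] + (1.0 - self.decay) * k

    def density(self, x):
        # linear interpolation between the two nearest grid points
        t = (x - self.lo) / self.step
        i = max(0, min(self.m - 2, int(t)))
        frac = t - i
        return (1.0 - frac) * self.grid[i] + frac * self.grid[i + 1]
```

The decay factor makes the estimate track a drifting distribution; the published method additionally adapts the grid resolution, which is omitted here.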
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Technical Efficiency of Shrimp Farming in Andhra Pradesh: Estimation and Implications
I. Sivaraman
2015-04-01
Full Text Available Shrimp farming is a key subsector of Indian aquaculture which has seen remarkable growth in the past decades and has tremendous potential for the future. The present study analyzes the technical efficiency of the shrimp farmers of East Godavari district of Andhra Pradesh using a Stochastic Production Frontier Function with technical inefficiency effects. The estimated mean technical efficiency of the farmers was 93.06%, which means the farmers operate at 6.94% below the production frontier. Age, education, experience of the farmers, and their membership status in farmers' associations and societies were found to have a significant effect on technical efficiency. The variation in technical efficiency also confirms differences in the extent of adoption of shrimp farming technology among the farmers. Proper technical training opportunities could help the farmers adopt improved technologies to increase their farm productivity.
A novel method for coil efficiency estimation: Validation with a 13C birdcage
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina;
2012-01-01
Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of the radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench and...
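The quality-factor bookkeeping behind such a perturbing-loop measurement can be sketched with a generic series-resistance model (R = ωL/Q); the numbers and function names below are illustrative, not the authors' calibration:

```python
import math

def series_resistance(freq_hz, inductance_h, q):
    """Equivalent series resistance of a coil modelled as R = omega*L/Q."""
    return 2.0 * math.pi * freq_hz * inductance_h / q

def added_resistance(freq_hz, inductance_h, q_unloaded, q_loaded):
    """Resistance reflected into the coil by the known inductively coupled
    load: the difference between the loaded and unloaded measurements."""
    return (series_resistance(freq_hz, inductance_h, q_loaded)
            - series_resistance(freq_hz, inductance_h, q_unloaded))
```

For example, at 32.13 MHz with a hypothetical coil inductance of 200 nH, a Q drop from 200 (unloaded) to 100 (loaded) implies the known load contributes a series resistance equal to the coil's own intrinsic resistance.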
In recent years, gamma spectrometry using high-purity germanium (HPGe) detectors has come into widespread use to determine the activity of radioactive samples. However, a decrease in detector efficiency remarkably influences the measured gamma spectra. In this work, we estimated the decrease in efficiency of the GC1518 HPGe detector made by Canberra Industries, Inc. and located at the Center for HCMC Nuclear Techniques. It was found that the detector efficiency decreased by 8% within the 6 years from October 1999 to August 2005. The decrease in efficiency can be explained by an increase in the thickness of an inactive germanium layer, based on Monte Carlo simulation. (author)
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Cristina Mihaela VLAD
2013-12-01
Full Text Available At present there is concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now oriented toward the development of economic models integrated into software interfaces that can improve technical and financial management. The objective of this paper is therefore to present an estimation and evaluation model designed to increase the farmer's ability to measure the costs of production activities by utilizing informatic systems.
Estimating the Effect of Helium and Nitrogen Mixing on Deposition Efficiency in Cold Spray
Ozdemir, Ozan C.; Widener, Christian A.; Helfritch, Dennis; Delfanian, Fereidoon
2016-04-01
Cold spray is a developing technology that is increasingly finding applications in coating similar and dissimilar metals, repairing geometric tolerance defects to extend expensive part life, and additive manufacturing across a variety of industries. Expensive helium is used to accelerate the particles to higher velocities in order to achieve the highest deposit strengths and to spray hard-to-deposit materials. Minimal information is available in the literature on the effects of He-N2 mixing on coating deposition efficiency and on how He can be conserved by gas mixing. In this study, a one-dimensional simulation method is presented for estimating the deposition efficiency of aluminum coatings as He-N2 mixture ratios are varied. The simulation estimates are experimentally validated through velocity measurements and single particle impact tests for Al6061.
Petushek, Erich J.; Cokely, Edward T.; Ward, Paul; Durocher, John; Wallace, Sean; Myer, Gregory D
2015-01-01
Simple observational assessment of movement quality (e.g., drop vertical jump biomechanics) is an efficient and low cost method for anterior cruciate ligament (ACL) injury screening and prevention. A recently developed test (see www.ACL-IQ.org) has revealed substantial cross-professional/group differences in visual ACL injury risk estimation skill. Specifically, parents, sport coaches, and to some degree sports medicine physicians, would likely benefit from training or the use of decision sup...
Estimating the efficiency of sustainable development by South African mining companies
Oberholzer, Merwe; Prinsloo, Thomas Frederik
2011-01-01
The purpose of the study was to develop a model, using data envelopment analysis (DEA), in order to estimate the relative efficiency of nine South African listed mining companies in their efforts to convert environmental impact into economic and social gains for shareholders and other stakeholders. The environmental impact factors were used as input variables, that is, greenhouse gas emissions, water usage and energy usage, and the gains for shareholders and other stakeholders were used as ou...
EFFICIENT PU MODE DECISION AND MOTION ESTIMATION FOR H.264/AVC TO HEVC TRANSCODER
Zong-Yi Chen; Jiunn-Tsair Fang; Tsai-Ling Liao; Pao-Chi Chang1
2014-01-01
H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes a fast prediction unit (PU) decision and fast motion estimation. Given the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of motion vectors (MVs) extracted from H.264/AVC can be ...
Meyers, S.; Marnay, C.; Schumacher, K.; Sathaye, J.
2000-01-01
This paper describes a standardized method for establishing a multi-project baseline for a power system. The method provides an approximation of the generating sources that are expected to operate on the margin in the future for a given electricity system. It is most suitable for small-scale electricity generation and electricity efficiency improvement projects. It allows estimation of one or more carbon emissions factors that represent the emissions avoided by projects, striking a bala...
Feklistova Inessa
2016-02-01
Full Text Available The article presents a methodical approach to the estimation of strategic management efficiency of enterprises of the region with the use of cluster analysis, realized by means of a specially developed application package. The necessity of its application in the analytical work of the economic services of the region's enterprises has been proved. It will make it possible to improve the quality of monitoring and to scientifically substantiate strategic administrative decisions
Ambient vibrations efficiency for building dynamic characteristics estimate and seismic evaluation.
Dunand, François
2005-01-01
Ambient vibrations are mechanical low-amplitude vibrations generated by human and natural activities. By forcing engineering structures into vibration, they can be used to estimate structural dynamic characteristics. The goal of this study is to compare building dynamic characteristics derived from ambient vibrations to those derived from more energetic solicitations (e.g. earthquakes). This study validates the efficiency of this method and shows that ambient vibration results ...
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space
Neelakantan, Arvind; Shankar, Jeevan; Passos, Alexandre; McCallum, Andrew
2015-01-01
There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination a...
The amount of photosynthetically active radiation (PAR) absorbed by green vegetation is an important determinant of photosynthesis and growth. Methods for the estimation of fractional absorption of PAR (fPAR) for areas greater than 1 km2 using satellite data are discussed, and are applied to sites in the Sahel that have a sparse herb layer and tree cover of less than 5%. Using harvest measurements of seasonal net production, net production efficiencies are calculated. Variation in estimates of seasonal PAR absorption (APAR) caused by the atmospheric correction method and the relationship between surface reflectances and fPAR is considered. The use of maximum value composites of satellite NDVI to reduce the effect of the atmosphere is shown to produce inaccurate APAR estimates. In this data set, however, atmospheric correction using average optical depths was found to give good approximations of the fully corrected data. A simulation of canopy radiative transfer using the SAIL model was used to derive a relationship between canopy NDVI and fPAR. Seasonal APAR estimates assuming a 1:1 relationship between fPAR and NDVI overestimated the SAIL modeled results by up to 260%. The use of a modified 1:1 relationship, where fPAR was assumed to be linearly related to NDVI scaled between minimum (soil) and maximum (infinite canopy) values, underestimated the SAIL modeled results by up to 35%. Estimated net production efficiencies (ϵn, dry matter per unit APAR) fell in the range 0.12–1.61 g MJ−1 for above ground production, and in the range 0.16–1.88 g MJ−1 for total production. Sites with lower rainfall had reduced efficiencies, probably caused by physiological constraints on photosynthesis during dry conditions. (author)
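The "modified 1:1" relationship described above amounts to rescaling NDVI between its bare-soil and infinite-canopy values. A minimal sketch of that scaling (the endpoint values here are placeholders, not the paper's fitted values):

```python
def fpar_scaled_ndvi(ndvi, ndvi_soil=0.1, ndvi_inf=0.9):
    """fPAR assumed linear in NDVI between the bare-soil value (fPAR = 0)
    and the infinite-canopy value (fPAR = 1), clipped to [0, 1]."""
    f = (ndvi - ndvi_soil) / (ndvi_inf - ndvi_soil)
    return max(0.0, min(1.0, f))
```

Seasonal APAR then follows by summing fPAR times incident PAR over the season.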
Technical and Scale Efficiency in Spanish Urban Transport: Estimating with Data Envelopment Analysis
I. M. García Sánchez
2009-01-01
Full Text Available The paper undertakes a comparative efficiency analysis of public bus transport in Spain using Data Envelopment Analysis. A procedure for efficiency evaluation was established with a view to estimating technical and scale efficiency. Principal components analysis allowed us to reduce a large number of potential supply-side, demand-side and quality output measures to three statistical factors assumed in the analysis of the service. A statistical analysis (Tobit regression) shows that efficiency levels are negatively related to population density and the peak-to-base ratio. Nevertheless, efficiency levels are not related to the form of ownership (public versus private). The results obtained for Spanish public transport show that average pure technical and scale efficiencies are 94.91% and 52.02%, respectively. The excess of resources is around 6%, and the increase in accessibility of the service, one of the principal components summarizing the large number of output measures, is extremely important as a quality parameter in its performance.
Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision
Xiang Wu
2013-01-01
Full Text Available In the pose estimation problem for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, this process is usually time consuming, especially in the outer space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for pose estimation based on vision technology. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on absolute orientation information. Experimental results show that this approach achieves comparable accuracy with SVD-based methods; however, the computational time has been greatly reduced due to the use of the absolute orientation method.
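The absolute-orientation step referred to above, recovering the rotation that best aligns two point sets, is classically solved in closed form via SVD. A hedged sketch of that inner step (the Kabsch solution, not the authors' full iterative algorithm):

```python
import numpy as np

def best_rotation(P, Q):
    """Rotation R minimizing sum ||R @ P[i] - Q[i]||^2 over centred point
    sets (Kabsch / absolute orientation). P, Q are (n, 3) arrays."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

An iterative pose estimator alternates a step like this with a depth/translation update until the object-space error stops decreasing.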
Schildbach, Christian; Ong, Duu Sheng; Hartnagel, Hans; Schmidt, Lorenz-Peter
2016-06-01
The ballistic electron wave swing device has previously been presented as a possible candidate for a simple power conversion technique to the THz domain. This paper gives a simulative estimation of the power conversion efficiency. The harmonic balance simulations use an equivalent circuit model, which is also derived in this work from a mechanical model. To verify the validity of the circuit model, current waveforms are compared to Monte Carlo simulations of identical setups. Model parameters are given for a wide range of device configurations. The device configuration exhibiting the most conforming waveform is used further for determining the best conversion efficiency. The corresponding simulation setup is described. Simulation results implying a conversion efficiency of about 22% are presented.
Estimation of coupling efficiency of optical fiber by far-field method
Kataoka, Keiji
2010-09-01
Coupling efficiency to a single-mode optical fiber can be estimated from the far-field amplitudes of the incident beam and the optical fiber mode. In this paper we call this calculation the far-field method (FFM). The coupling efficiency by FFM is formulated including the effects of optical aberrations, vignetting of the incident beam, and misalignments of the optical fiber such as defocus, lateral displacement, and angular deviation in the arrangement of the fiber. As a result, it is shown that the coupling efficiency is proportional to the central intensity of the focused spot, i.e., the Strehl intensity of a virtual beam determined by the incident beam and the mode of the optical fiber. Using the FFM, a typical optical system in which a laser beam is coupled to an optical fiber with a lens of finite numerical aperture (NA) is analyzed for several cases of amplitude distributions of the incident light.
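For the special case of two aligned Gaussian fields, the mode-overlap integral reduces to a textbook closed form; a minimal sketch (this is the generic overlap result for waist mismatch only, not the paper's full FFM with aberrations and vignetting):

```python
def gaussian_coupling(w_beam, w_mode):
    """Power coupling efficiency of two aligned Gaussian modes with
    waist radii w_beam and w_mode (squared overlap integral)."""
    return (2.0 * w_beam * w_mode / (w_beam ** 2 + w_mode ** 2)) ** 2
```

Matched waists give unit efficiency; a 2:1 waist mismatch already drops the coupling to 0.64.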
The efficiency of different estimation methods of hydro-physical limits
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that, for the available data, the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A, "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235 - 3238.
Wenhua Han
2014-06-01
Full Text Available In this paper, efficient managing particle swarm optimization (EMPSO) for high-dimension problems is proposed to estimate the defect profile from a magnetic flux leakage (MFL) signal. In the proposed EMPSO, a particle pair model was built to strengthen the exchange of information among particles. For more efficient searching across different problem landscapes, a velocity updating scheme including three velocity updating models was also proposed. In addition, automatic particle selection for re-initialization was implemented to increase the chance of finding the optimum solution. The optimization results on six benchmark functions show that EMPSO performs well when optimizing 100-D problems. The defect simulation results demonstrate that the inversion technique based on EMPSO outperforms the one based on the self-learning particle swarm optimizer (SLPSO), and the estimated profiles remain close to the desired profiles in the presence of low noise in the MFL signal. The results estimated from real MFL signals by the EMPSO-based inversion technique also indicate that the algorithm is capable of providing an accurate solution of the defect profile with real signals. Both the simulation results and experiment results show that the computing time of the EMPSO-based inversion technique is reduced by 20%–30% compared with that of the SLPSO-based technique.
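For context, a plain global-best PSO baseline (without EMPSO's particle pairs or multi-model velocity update) can be sketched as follows; all parameter values are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, xs[i][:]
                if v < gval:
                    gval, gbest = v, xs[i][:]
    return gbest, gval
```

EMPSO's contributions then amount to replacing the single velocity rule with three switchable models, pairing particles for information exchange, and re-initializing stagnant particles.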
An approach for efficient estimation of passive safety system functional reliability has been developed and applied to a simplified model of the passive residual heat transport system typical of sodium cooled fast reactors, to demonstrate the reduction in computational time. The method is based on generating linear approximations to the best estimate computer code using the technique of automatic reverse differentiation. This technique enables determination of a linear approximation to the code in a few runs, independent of the number of input variables for each response variable. The likely error due to the linear approximation is reduced by augmented sampling through the best estimate code in the neighborhood of the linear failure surface, in the sub-domain where the linear approximation error is relatively larger. The efficiency of this new approach is compared with importance sampling MCS, which uses the linear approximation near the failure region, and with direct Monte Carlo simulation. In the importance sampling MCS, variants employing random sampling with the Box-Muller algorithm and with a Markov chain algorithm are inter-compared. The significance of the results with respect to system reliability is also discussed.
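The importance-sampling idea described above can be illustrated on a toy one-dimensional limit state. This is a hedged sketch with a standard-normal input, where the sampling shift stands in for the linearized failure surface; names and numbers are illustrative:

```python
import math
import random

def failure_prob_is(g, shift, n=50000, seed=42):
    """Estimate P(g(X) < 0) for X ~ N(0, 1) by sampling from N(shift, 1)
    near the failure surface and reweighting each failing sample by the
    likelihood ratio phi(x) / phi(x - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if g(x) < 0.0:
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n
```

For g(x) = 3 - x the exact answer is 1 - Phi(3), about 1.35e-3; direct Monte Carlo would need millions of samples for the same relative accuracy the shifted sampler reaches with tens of thousands.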
Efficient PU Mode Decision and Motion Estimation for H.264/AVC to HEVC Transcoder
Zong-Yi Chen
2014-04-01
Full Text Available H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes a fast prediction unit (PU) decision and fast motion estimation. Owing to the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of motion vectors (MVs) extracted from H.264/AVC can be reused to predict the current encoding PU of HEVC. Furthermore, the MVs from H.264/AVC are used to decide the PU search range during motion estimation. Simulation results show that the proposed algorithm can save up to 53% of the encoding time while maintaining the rate-distortion (R-D) performance of HEVC.
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and confident estimates of forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. Precision estimates of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, were assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5–6.0 times as many observations for plot sizes of 700–1900 m2 to achieve the same precision as an inventory supported by ALS data.
Efficient estimation of decay parameters in acoustically coupled-spaces using slice sampling.
Jasa, Tomislav; Xiang, Ning
2009-09-01
Room-acoustic energy decay analysis of acoustically coupled-spaces within the Bayesian framework has proven valuable for architectural acoustics applications. This paper describes an efficient algorithm termed slice sampling Monte Carlo (SSMC) for room-acoustic decay parameter estimation within the Bayesian framework. This work combines the SSMC algorithm and a fast search algorithm in order to efficiently determine decay parameters, their uncertainties, and inter-relationships with a minimum amount of required user tuning and interaction. The large variations in the posterior probability density functions over multidimensional parameter spaces imply that an adaptive exploration algorithm such as SSMC can have advantages over the existing importance sampling Monte Carlo and Metropolis-Hastings Markov Chain Monte Carlo algorithms. This paper discusses implementation of the SSMC algorithm, its initialization, and convergence using experimental data measured from acoustically coupled-spaces. PMID:19739741
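For context, a generic univariate slice sampler with stepping-out can be sketched as below. This is not the paper's SSMC implementation; the target density and step width are illustrative assumptions:

```python
import math, random

def slice_sample(logpdf, x0, n, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out.

    Generic sketch (not the paper's SSMC code); `logpdf` is the log of
    an unnormalized target density and `w` an assumed step width.
    """
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = logpdf(x) + math.log(rng.random())   # log-height under the curve
        lo = x - w * rng.random()                # randomly placed bracket
        hi = lo + w
        while logpdf(lo) > y:                    # step out to the left
            lo -= w
        while logpdf(hi) > y:                    # step out to the right
            hi += w
        while True:                              # sample and shrink bracket
            x1 = rng.uniform(lo, hi)
            if logpdf(x1) > y:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        out.append(x)
    return out
```

Run against a standard-normal log-density, the sample mean and variance approach 0 and 1.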
Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method
Gheorghe Grigoras
2013-09-01
Full Text Available Estimation of the power/energy losses constitutes an important tool for efficient planning and operation of electric distribution systems, especially in a free energy market environment. For further development of energy loss reduction plans and for determination of the implementation priorities of different measures and investment projects, analysis of the nature and causes of losses in the system and in its different parts is needed. In the paper, an efficient method for the power flow problem of medium voltage distribution networks, under conditions of lacking information about the nodal loads, is presented. Using this method, the power/energy losses in power transformers and lines can be obtained. The test results, obtained for a 20 kV real distribution network from Romania, confirmed the validity of the proposed method.
Study of grain alignment efficiency and a distance estimate for small globule CB4
We study the polarization efficiency (defined as the ratio of polarization to extinction) of stars in the background of the small, nearly spherical and isolated Bok globule CB4 to understand the grain alignment process. A decrease in polarization efficiency with an increase in visual extinction is noticed. This suggests that the observed polarization in lines of sight which intercept a Bok globule tends to show dominance of dust grains in the outer layers of the globule. This finding is consistent with the results obtained for other clouds in the past. We determined the distance to the cloud CB4 using near-infrared photometry (2MASS JHKS colors) of moderately obscured stars located at the periphery of the cloud. From the extinction-distance plot, the distance to this cloud is estimated to be (459 ± 85) pc. (paper)
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
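The core computation of a 3-D Gaussian kernel density estimator is straightforward; a naive O(n)-per-query sketch follows. The paper's contribution (restructuring, parallelization, performance tuning) is not shown, and the fixed bandwidth is an assumption:

```python
import math

def kde3d(points, query, h=1.0):
    """Evaluate a 3-D Gaussian KDE at one query location.

    Naive reference implementation; real movement-based estimators add
    movement modelling and heavy optimization not sketched here.
    """
    # Normalizing constant of a 3-D isotropic Gaussian kernel, averaged over n points.
    norm = 1.0 / ((2.0 * math.pi) ** 1.5 * h ** 3 * len(points))
    total = sum(
        math.exp(-sum((q - c) ** 2 for q, c in zip(query, p)) / (2.0 * h * h))
        for p in points)
    return norm * total
```

For a single point at the origin queried at the origin, the density equals the kernel's peak value (2*pi)^(-3/2).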
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE), the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake, is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing any artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2=0.82, RMSE=29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically active radiation.
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Robert Aidoo
2012-06-01
Full Text Available The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency analyses were performed using the field data. There was a long chain of more than three channels through which yams moved from the producer to the final consumer. Yam marketing was found to be a profitable venture for all the key players in the yam marketing chain. A net marketing margin of about GH¢15.52 (US$9.13) was obtained when the farmer himself sold 100 tubers of yams in the market rather than at the farm gate. The net marketing margin obtained by wholesalers was estimated at GH¢27.39 per 100 tubers of yam sold, which was equivalent to about 61% of the gross margin obtained. The net marketing margin for retailers was estimated at GH¢15.37, representing 61% of the gross margin obtained. A net marketing margin of GH¢33.91 was obtained for every 100 tubers of yam transported across Ghana’s borders by cross-border traders. Generally, the study found that the net marketing margin was highest for cross-border yam traders, followed by wholesalers. Yam marketing activities among retailers, wholesalers and cross-border traders were found to be highly efficient, with efficiency ratios in excess of 100%. However, yam marketing among producer-sellers was found to be inefficient, with an efficiency ratio of about 86%. The study recommended policies and strategies to be adopted by central and local government authorities to address key constraints such as poor road network, limited financial resources, poor storage facilities and high cost of transportation that serve as
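The margin arithmetic behind such studies can be illustrated with hypothetical figures; the efficiency-ratio definition below (net margin over marketing cost, times 100) is an assumption for illustration, not necessarily the exact metric used in the paper:

```python
def marketing_margins(price, purchase_cost, marketing_cost):
    """Gross margin, net margin, and an efficiency ratio for one trader.

    All figures hypothetical; the efficiency definition (net margin per
    unit marketing cost, x100) is an illustrative assumption.
    """
    gross = price - purchase_cost                # revenue minus product cost
    net = gross - marketing_cost                 # after marketing expenses
    efficiency = 100.0 * net / marketing_cost    # >100% reads as "efficient"
    return gross, net, efficiency
```

For example, selling at 100 after buying at 60 with 15 in marketing costs yields a gross margin of 40, a net margin of 25, and an efficiency ratio of about 167%.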
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs and by reduced outputs, we estimate hyperbolic distance functions that account for reduced technical efficiency both in terms of increased inputs and reduced outputs. We estimate these hyperbolic distance functions as “efficiency effect frontiers” with the Translog functional form and a dynamic...
Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.
The Use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm
Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to the major nutrients N and P. This study was conducted to search for the most active root-zone of oil palm in such soils and to apply urea fertilizer there to obtain high N-efficiency. Carrier-free KH232PO4 solution was used to determine the active root-zone of oil palm by applying 32P around the plant in twenty holes. After the most active root-zone had been determined, urea in one, two and three splits was respectively applied at this zone. To estimate the N-fertilizer efficiency of urea, 15N-labelled ammonium sulphate was used, added at the same amount of 16 g 15N plant-1. This study showed that the most active root-zone was found at a 1.5 m distance from the plant-stem and at 5 cm soil depth. For urea the highest N-efficiency was obtained from applying it in two splits. The use of 32P was able to distinguish several root zones: 1.5 m - 2.5 m from the plant-stem at 5 cm and 15 cm soil depths. Urea placed at the most active root-zone, which was at a 1.5 m distance from the plant-stem and at a 5 cm depth, in one, two, and three splits respectively showed different N-efficiency. The highest N-efficiency of urea was obtained when applying it in two splits at the most active root-zone. (author)
Hardie, L C; Armentano, L E; Shaver, R D; VandeHaar, M J; Spurlock, D M; Yao, C; Bertics, S J; Contreras-Govea, F E; Weigel, K A
2015-04-01
Prior to genomic selection on a trait, a reference population needs to be established to link marker genotypes with phenotypes. For costly and difficult-to-measure traits, international collaboration and sharing of data between disciplines may be necessary. Our aim was to characterize the combining of data from nutrition studies carried out under similar climate and management conditions to estimate genetic parameters for feed efficiency. Furthermore, we postulated that data from the experimental cohorts within these studies can be used to estimate the net energy of lactation (NE(L)) densities of diets, which can provide estimates of energy intakes for use in the calculation of the feed efficiency metric, residual feed intake (RFI), and potentially reduce the effect of variation in energy density of diets. Individual feed intakes and corresponding production and body measurements were obtained from 13 Midwestern nutrition experiments. Two measures of RFI were considered, RFI(Mcal) and RFI(kg), which involved the regression of NE(L) intake (Mcal/d) or dry matter intake (DMI; kg/d) on 3 expenditures: milk energy, energy gained or lost in body weight change, and energy for maintenance. In total, 677 records from 600 lactating cows between 50 and 275 d in milk were used. Cows were divided into 46 cohorts based on dietary or nondietary treatments as dictated by the nutrition experiments. The realized NE(L) densities of the diets (Mcal/kg of DMI) were estimated for each cohort by totaling the average daily energy used in the 3 expenditures for cohort members and dividing by the cohort's total average daily DMI. The NE(L) intake for each cow was then calculated by multiplying her DMI by her cohort's realized energy density. Mean energy density was 1.58 Mcal/kg. Heritability estimates for RFI(kg) and RFI(Mcal) in a single-trait animal model did not differ, at 0.04 for both measures. Information about realized energy density could be useful in standardizing intake data from
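Residual feed intake as described (intake regressed on the three energy expenditures, with the residuals as the efficiency metric) can be sketched with a minimal ordinary-least-squares fit; the cohort structure and energy-density standardization of the study are omitted, and the data below are made up:

```python
def rfi(intake, expenditures):
    """Residual feed intake: residuals of intake regressed on energy sinks.

    Minimal OLS via the normal equations; `expenditures` holds rows of
    [milk_energy, bw_change_energy, maintenance_energy]. Illustrative
    only; the paper fits this within a cohort structure.
    """
    X = [[1.0] + list(r) for r in expenditures]        # add intercept column
    k = len(X[0])
    # Normal equations X'X b = X'y, solved by Gaussian elimination.
    XtX = [[sum(X[i][a] * X[i][c] for i in range(len(X))) for c in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * intake[i] for i in range(len(X))) for a in range(k)]
    for c in range(k):                                 # forward elimination
        piv = XtX[c][c]
        for r in range(c + 1, k):
            f = XtX[r][c] / piv
            XtX[r] = [u - f * v for u, v in zip(XtX[r], XtX[c])]
            Xty[r] -= f * Xty[c]
    b = [0.0] * k
    for c in range(k - 1, -1, -1):                     # back substitution
        b[c] = (Xty[c] - sum(XtX[c][j] * b[j] for j in range(c + 1, k))) / XtX[c][c]
    # RFI = observed intake minus intake predicted from the expenditures.
    return [intake[i] - sum(b[j] * X[i][j] for j in range(k)) for i in range(len(X))]
```

When intake is an exact linear function of the expenditures, all residuals are (numerically) zero.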
Efficient quantum state-estimation and feedback on trapped ions using unsharp measurement
Uys, Hermann; Burd, Shaun; Choudhary, Sujit; Goyal, Sandeep; Konrad, Thomas
2013-05-01
Parameter estimation and closed-loop feedback control is ubiquitous in every branch of classical science and engineering. Similar control of quantum systems is usually impossible due to two difficulties. Firstly, quantum phenomena are often short lived due to decoherence, and secondly, attempts to estimate the state of a quantum system through projective measurement strongly disrupt the dynamics. One alternative is to use unsharp measurements, which are less invasive, but lead to less information gain about the system. A sequence of unsharp measurements, however, carried out in the presence of stronger dynamics, promises real-time state monitoring and control via feedback. Such measurements can be realised by periodically entangling an auxiliary quantum system with the target quantum system, and then carrying out projective measurements on the auxiliary system only. In this talk we discuss an efficient method of estimating both the state of a two-level system and the strength of its coupling to a drive field using unsharp measurement. We then model closed-loop feedback control of the two-level dynamics, and explore the level of control over the parameter regime of the model. Finally, we summarize the prospects for implementing the scheme using trapped ions. This work was partially funded by the South African National Research Foundation.
Xu, Huihui; Jiang, Mingyan
2015-07-01
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Nowadays it is a topical task to provide the necessary efficiency indexes of a radioelectronic complex system through its behavior algorithm design. Several methods are used for solving this task, and their intercomparison is required. Main part. For the behavior algorithm of a radioelectronic complex system, four mathematical models were built by two known methods (the space of states method and the algorithmic algebras method) and the new scheme of paths method. A scheme of paths is a compact representation of the radioelectronic complex system’s behavior and is easily and directly formed from the behavior algorithm’s flowchart. Efficiency indexes of the tested behavior algorithm - the probability and mean time of successful performance - were obtained. An intercomparison of the estimated results was carried out. Conclusion. The model of the behavior algorithm constructed using the scheme of paths method gives commensurate values of the efficiency indexes in comparison with the mathematical models of the same behavior algorithm obtained by the space of states and algorithmic algebras methods.
Marigodov, V. K.
2011-01-01
A possibility of linguistic diagnostics for estimating the efficiency and noise immunity of a radio communication system is shown. On the basis of direct expert questioning, membership functions for one of the system parameters are built.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredictable ways. To reduce the computational cost, data streams are often studied in the form of condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential sector in the EU-27 member states for the period 1996 to 2009. The estimates for the energy efficiency confirm that the EU residential sector indeed holds a relatively high potential for energy savings from reduced inefficiency. Therefore, despite the common objective to decrease ‘wasteful’ energy consumption, considerable variation in energy efficiency between the EU member states is established. Furthermore, an attempt is made to evaluate the impact of energy-efficiency measures undertaken in the EU residential sector by introducing an additional set of variables into the model and the results suggest that financial incentives and energy performance standards play an important role in promoting energy efficiency improvements, whereas informative measures do not have a significant impact. - Highlights: • The level of energy efficiency of the EU residential sector is estimated. • Considerable potential for energy savings from reduced inefficiency is established. • The impact of introduced energy-efficiency policy measures is also evaluated. • Financial incentives are found to promote energy efficiency improvements. • Energy performance standards also play an important role
Mohammed Abo-Zahhad; Sabah M. Ahmed; Ahmed Zakaria
2012-01-01
This paper presents an efficient electrocardiogram (ECG) signal compression technique based on QRS detection, estimation, and 2D DWT coefficient thresholding. Firstly, the original ECG signal is preprocessed by detecting the QRS complex; then the difference between the preprocessed ECG signal and the estimated QRS-complex waveform is estimated. 2D approaches utilize the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. The error signal is cut a...
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
Flammia, Steven T; Liu, Yi-Kai; Eisert, Jens
2012-01-01
Intuitively, if a density operator has only a few non-zero eigenvalues, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We exhibit two complementary ways of making this intuition precise. On the one hand, we show that the sample complexity decreases with the rank of the density operator. In other words, fewer copies of the state need to be prepared in order to estimate a low-rank density matrix. On the other hand---and maybe more surprisingly---we prove that unknown low-rank states may be reconstructed using an incomplete set of measurement settings. The method does not require any a priori assumptions about the unknown state, uses only simple Pauli measurements, and can be efficiently and unconditionally certified. Our results extend earlier work on compressed tomography, building on ideas from compressed sensing and matrix completion. Instrumental to the improved analysis are new error bounds for compressed tomography, based on the ...
Nicolle V Sydney; Emygdio La Monteiro-Filho
2011-03-01
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving the manual wear of teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for growth layer reading. Growth layers were evidenced in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth, there was a significant decrease in growth layer group thickness, thus hindering the layer count. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear and decalcification technique. Analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the use of the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of the calf, young and adult classes, which is important for general population studies.
Mökkönen, Harri; Jónsson, Hannes
2016-01-01
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates i...
An Efficient Algorithm for Contact Angle Estimation in Molecular Dynamics Simulations
Sumith YD
2015-01-01
Full Text Available It is important to find the contact angle of a liquid to understand its wetting properties, capillarity and surface interaction energy with a surface. The estimation of the contact angle from Non-Equilibrium Molecular Dynamics (NEMD), where we need to track the changes in contact angle over a period of time, is challenging compared to the estimation from a single image from an experimental measurement. Often such molecular simulations involve a finite number of molecules above some metallic or non-metallic substrate, coupled to a thermostat. The identification of the profile of the droplet formed during this time is difficult and computationally expensive to process as an image. In this paper a new algorithm is explained which can efficiently calculate the time-dependent contact angle from a NEMD simulation just by processing the molecular coordinates. The algorithm implements many simple yet accurate mathematical methods, especially to remove the vapor molecules and noise data, and thereby calculates the contact angle with greater accuracy. To further demonstrate the capability of the algorithm, a simulation study is reported which compares the influence of different thermostats on the contact angle in a Molecular Dynamics (MD) simulation of water over a platinum surface.
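A drastically simplified version of coordinate-based contact-angle estimation can be sketched as follows, assuming a spherical-cap droplet resting on a flat substrate at z = 0 and omitting the paper's vapor-removal and noise-filtering steps:

```python
import math

def contact_angle(coords):
    """Contact angle from droplet molecular coordinates.

    Assumes a spherical-cap droplet on the plane z = 0, for which
    theta = 2*atan(h/r) with cap height h and base radius r. The
    vapor and noise filtering of the paper's algorithm are omitted.
    """
    h = max(z for _, _, z in coords)                      # cap height
    base = [(x, y) for x, y, z in coords if z < 0.1 * h]  # near-substrate layer
    r = max(math.hypot(x, y) for x, y in base)            # base radius
    return math.degrees(2.0 * math.atan2(h, r))
```

A hemisphere of unit radius (h = r = 1) gives the expected 90-degree contact angle.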
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
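The global update described (smoothing followed by per-pixel thresholding across labels) can be illustrated in one dimension; the box kernel, pass count, and cost values below are toy assumptions, not the paper's formulation:

```python
def smooth_threshold_label(costs, passes=1):
    """One smoothing-plus-threshold update for multi-label assignment.

    Toy 1-D analogue of the abstract's global update: box-smooth each
    label's cost sequence, then take the per-pixel argmin. Kernel and
    pass count are illustrative assumptions.
    """
    def box(v):  # 3-tap box filter with clamped borders
        return [(v[max(i - 1, 0)] + v[i] + v[min(i + 1, len(v) - 1)]) / 3.0
                for i in range(len(v))]
    smoothed = []
    for c in costs:                    # one cost sequence per label
        for _ in range(passes):
            c = box(c)
        smoothed.append(c)
    return [min(range(len(costs)), key=lambda l: smoothed[l][i])
            for i in range(len(costs[0]))]
```

Smoothing suppresses an isolated noisy cost spike, so the labeling recovers the clean two-region partition that a pointwise argmin would get wrong.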
Massimo Filippini; Hunt, Lester C.; Jelena Zoric
2013-01-01
The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential secto...
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
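The bootstrap-based uncertainty estimation can be illustrated on a much smaller scale; the count ratio below is a hypothetical stand-in for SUVR-style statistics, and none of the GPU list-mode streaming machinery is shown:

```python
import random, statistics

def bootstrap_ratio(events_a, events_b, n_boot=500, seed=0):
    """Bootstrap mean and uncertainty of a count ratio.

    Hypothetical stand-in for the note's SUVR-style statistics: each
    realisation resamples the per-bin event counts with replacement,
    mirroring the multiple bootstrap realisations the software
    generates from list-mode data.
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        a = sum(rng.choice(events_a) for _ in range(len(events_a)))
        b = sum(rng.choice(events_b) for _ in range(len(events_b)))
        ratios.append(a / b)
    return statistics.mean(ratios), statistics.stdev(ratios)
```

The spread of the bootstrap ratios is the uncertainty estimate; with constant denominator counts it reduces to the variability of the numerator alone.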
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
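The core idea behind implicit sampling, as described above, can be shown in a minimal one-dimensional sketch: centre an importance distribution on the MAP point (found here by grid search, with spread taken from the local curvature) so that samples land in the high-probability region of the posterior. The forward model and noise levels are invented for illustration; the actual study uses TOUGH2 with a pilot-point parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D inverse problem: infer log-permeability m from noisy data d.
# Hypothetical forward model G(m) = exp(m), prior m ~ N(0, 1),
# observation d = G(m_true) + noise.
m_true, sigma = 0.5, 0.1
d = np.exp(m_true) + 0.05

def neg_log_post(m):
    return 0.5 * m**2 + 0.5 * ((d - np.exp(m)) / sigma) ** 2

# Locate the MAP point by a crude grid search (a real code would
# use an optimizer here).
grid = np.linspace(-2, 2, 4001)
m_map = grid[np.argmin(neg_log_post(grid))]

# Curvature at the MAP gives the proposal spread (Laplace approximation).
h = 1e-4
hess = (neg_log_post(m_map + h) - 2 * neg_log_post(m_map)
        + neg_log_post(m_map - h)) / h**2
prop_sd = 1.0 / np.sqrt(hess)

# Importance sampling with a Gaussian proposal centred on the MAP:
# samples concentrate where the posterior mass is, so few are wasted.
m_s = rng.normal(m_map, prop_sd, size=500)
log_w = -neg_log_post(m_s) + 0.5 * ((m_s - m_map) / prop_sd) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()
post_mean = np.sum(w * m_s)
print(f"MAP ~ {m_map:.3f}, posterior mean ~ {post_mean:.3f}")
```

The 500 weighted samples here mirror the "approximately 500 forward simulations" reported above; replacing `neg_log_post` with an expensive simulator (or a ROM built near the MAP) is the step the abstract discusses.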
In this report, an original probabilistic model aimed at assessing the efficiency of a particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through-wall cracks in the residual stress dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to modelling the uncertainties connected to the non-destructive examination technique (e.g., measurement errors, non-detection probability). First and second order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. Also, the model accounts for scatter in residual and operational stresses due to the random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs. load data, the non-destructive examination records of the crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL)
An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation
Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai
2012-01-01
Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithm must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, accumulating additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty are misrepresenting the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also estimate the bounds of uncertainty in a deterministic manner, which can be useful to consider during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
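The unscented transformation mentioned above propagates uncertainty through a nonlinear map with a small, deterministic set of sigma points instead of Monte Carlo sampling, which is how the approach keeps the computational load low while still yielding bounds. A minimal 1-D sketch, with an invented nonlinearity standing in for the battery model:

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through f via sigma points."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    # Three deterministic sigma points: the mean and one point either side.
    sigma_pts = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa),
                        0.5 / (n + kappa)])
    y = f(sigma_pts)
    y_mean = np.dot(weights, y)
    y_var = np.dot(weights, (y - y_mean) ** 2)
    return y_mean, y_var

# Hypothetical input-load uncertainty propagated through a nonlinear
# discharge-like map (illustrative only, not the paper's battery model).
f = lambda u: 100.0 / (1.0 + u ** 2)
y_mean, y_var = unscented_transform(mean=1.0, var=0.04, f=f)
print(y_mean, y_var)
```

Only three model evaluations are needed here, versus hundreds for a sampling-based estimate of the same output mean and variance; the sigma points themselves also give deterministic spread bounds of the kind the abstract highlights.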
Maruta, Kazuki; Iwakuni, Tatsuhiko; Ohta, Atsushi; Arai, Takuto; Shirato, Yushi; Kurosaki, Satoshi; Iizuka, Masataka
2016-01-01
Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against the channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of the rigorous first eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door for practical millimeter wave communication with enhanced system capacity. PMID:27399715
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Marcelo E. Cassiolato
2002-06-01
Water-soluble plant organic compounds have been proposed to be efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts to neutralize soil acidity. Plant samples were dried at 65ºC for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity and the volume of NaOH solution spent to neutralize H+ ions of the plant extracts were correlated with the effect of the plant extracts on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts to neutralize soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.
High-yielding crops like maize are very important for countries like Pakistan, where maize is the third cereal crop after wheat and rice. Maize accounts for 4.8 percent of the total cropped area and 4.82 percent of the value of agricultural production. It is grown all over the country, but the major areas are Sahiwal, Okara and Faisalabad. Chiniot is one of the distinct agroecological domains of central Punjab for maize cultivation, which is why this district was selected for the study and the technical efficiency of hybrid maize farmers was estimated. Primary data from 120 farmers, 40 from each of the three tehsils of Chiniot, were collected in 2011. Causes of lower yields for some farmers than others, while using the same input bundle, were estimated. The managerial factors causing inefficiency of production were also measured. The average technical efficiency was estimated to be 91 percent, while it was found to be 94.8, 92.7 and 90.8 percent for large, medium and small farmers, respectively. A stochastic frontier production model was used to measure technical efficiency. The statistical software Frontier 4.1 was used to analyse the data and generate inferences, because estimates of efficiency are produced as a direct output from the package. It was concluded that efficiency can be enhanced by addressing the inefficiency arising from environmental variables, farmers' personal characteristics and farming conditions. (author)
Brusset, Xavier
2014-01-01
We study the pricing problem between two firms when the manufacturer’s willingness to pay (wtp) for the supplier’s good is not known by the latter. We demonstrate that it is in the interest of the manufacturer to hide this information from the supplier. The precision of the information available to the supplier modifies the rent distribution. The risk of opportunistic behaviour entails a loss of efficiency in the supply chain. The model is extended to the case of a supplier submitting offers ...
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
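The point of the note above is that event-counting histograms should be fitted by minimizing the Poisson negative log-likelihood rather than a least-squares measure. A sketch of such a fit, using a generic simplex optimizer in place of the paper's Levenberg-Marquardt extension; the single-exponential decay model and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated fluorescence-decay histogram: expected counts follow
# A * exp(-t / tau); observed bin counts are Poisson draws.
t = np.linspace(0.0, 10.0, 50)
A_true, tau_true = 200.0, 2.0
counts = rng.poisson(A_true * np.exp(-t / tau_true))

def poisson_nll(params):
    A, tau = params
    if A <= 0 or tau <= 0:          # keep the model physical
        return np.inf
    mu = A * np.exp(-t / tau)
    # Poisson negative log-likelihood (constant log(counts!) term dropped).
    return np.sum(mu - counts * np.log(mu + 1e-12))

res = minimize(poisson_nll, x0=[100.0, 1.0], method="Nelder-Mead")
A_hat, tau_hat = res.x
print(f"A ~ {A_hat:.1f}, tau ~ {tau_hat:.2f}")
```

Unlike least squares, this objective remains unbiased in the low-count bins at the histogram tail, which is precisely where exponential-lifetime fits are most sensitive.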
Marlin, Benjamin; De Freitas, Nando
2012-01-01
Standard maximum likelihood estimation cannot be applied to discrete energy-based models in the general case because the computation of exact model probabilities is intractable. Recent research has seen the proposal of several new estimators designed specifically to overcome this intractability, but virtually nothing is known about their theoretical properties. In this paper, we present a generalized estimator that unifies many of the classical and recently proposed estimators. We use results...
Efficient estimation of the robustness region of biological models with oscillatory behavior.
Mochamad Apri
Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. Especially, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed, such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may be also time consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find a good agreement with the existing knowledge about this model. When we apply the new method to the high-dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was previously realized.
Estimating Forward Pricing Function: How Efficient is Indian Stock Index Futures Market?
Prasad Bhattacharaya; Harminder Singh
2006-01-01
This paper uses Indian stock futures data to explore unbiased expectations and efficient market hypothesis. Having experienced voluminous transactions within a short time span after its establishment, the Indian stock futures market provides an unparalleled case for exploring these issues involving expectation and efficiency. Besides analyzing market efficiency between cash and futures prices using cointegration and error correction frameworks, the efficiency hypothesis is also investigated a...
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
Roliana Ibrahim
2012-09-01
Development effort estimation is an undeniable part of project management that considerably influences the success of a project. Inaccurate and unreliable estimation of effort can easily lead to project failure. Due to their special characteristics, accurate estimation of effort in software projects is a vital management activity that must be carefully done to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is not satisfying, and attempts continue to improve the performance of estimation methods. Prior research in this area has focused on numerical and quantitative approaches, and there are few research works that investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate the situation of an organization in terms of effort estimation. The proposed framework includes various indicators which cover the critical issues in the field of software development effort estimation. Since the capabilities and shortcomings of organizations for effort estimation are not the same, the proposed indicators can lead to a systematic approach in which the strengths and weaknesses of organizations in the field of effort estimation are discovered
Horton, G.E.; Dubreuil, T.L.; Letcher, B.H.
2007-01-01
Our goal was to understand movement and its interaction with survival for populations of stream salmonids at long-term study sites in the northeastern United States by employing passive integrated transponder (PIT) tags and associated technology. Although our PIT tag antenna arrays spanned the stream channel (at most flows) and were continuously operated, we are aware that aspects of fish behavior, environmental characteristics, and electronic limitations influenced our ability to detect 100% of the emigration from our stream site. Therefore, we required antenna efficiency estimates to adjust observed emigration rates. We obtained such estimates by testing a full-scale physical model of our PIT tag antenna array in a laboratory setting. From the physical model, we developed a statistical model that we used to predict efficiency in the field. The factors most important for predicting efficiency were external radio frequency signal and tag type. For most sampling intervals, there was concordance between the predicted and observed efficiencies, which allowed us to estimate the true emigration rate for our field populations of tagged salmonids. One caveat is that the model's utility may depend on its ability to characterize external radio frequency signals accurately. Another important consideration is the trade-off between the volume of data necessary to model efficiency accurately and the difficulty of storing and manipulating large amounts of data.
Efficient and robust estimation for longitudinal mixed models for binary data
Holst, René
2009-01-01
a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson...
FAN JianQing; ZHOU Yong; CAI JianWen; CHEN Min
2009-01-01
Multivariate failure time data arise frequently in survival analysis. A commonly used technique is the working independence estimator for marginal hazard models. Two natural questions are how to improve the efficiency of the working independence estimator and how to identify the situations under which such an estimator has high statistical efficiency. In this paper, three weighted estimators are proposed based on three different optimal criteria in terms of the asymptotic covariance of weighted estimators. Simplified closed-form solutions are found, which always outperform the working independence estimator. We also prove that the working independence estimator has high statistical efficiency when the asymptotic covariance of derivatives of partial log-likelihood functions is nearly exchangeable or diagonal. Simulations are conducted to compare the performance of the weighted estimators and the working independence estimator. A data set from the Busselton population health surveys is analyzed using the proposed estimators.
The effective delayed neutron fraction, βeff, and the prompt neutron generation time, Λ, in the point kinetics equation are weighted by the adjoint flux to improve the accuracy of the reactivity estimate. Recently the Monte Carlo (MC) kinetics parameter estimation methods by using the adjoint flux calculated in the MC forward simulations have been developed and successfully applied for reactor analyses. However these adjoint estimation methods based on the cycle-by-cycle genealogical table require a huge memory size to store the pedigree hierarchy. In this paper, we present a new adjoint estimation method in which the pedigree of a single history is utilized by applying the MC Wielandt method. The algorithm of the new method is derived and its effectiveness is demonstrated in the kinetics parameter estimations for infinite homogeneous two-group problems and critical facilities. (author)
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Efficient focusing scheme for transverse velocity estimation using cross-correlation
Jensen, Jørgen Arendt
2001-01-01
The blood velocity can be estimated by cross-correlation of received RF signals, but only the velocity component along the beam direction is found. A previous paper showed that the complete velocity vector can be estimated if received signals are focused along lines parallel to the direction of the flow. Here a weakly focused transmit field was used along with a simple delay-sum beamformer. A modified method for performing the focusing by employing a special calculation of the delays is introduced, so that a focused emission can be used. The velocity estimation was studied through extensive...
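The cross-correlation step underlying this estimator can be sketched independently of the focusing scheme: the lag at which the correlation of two successive received signals peaks gives the scatterer displacement between emissions, which scales directly to velocity via the sampling rate and pulse repetition time. The synthetic signals below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two consecutive received signals: the second is a delayed copy of the
# first (delay in samples stands in for scatterer motion between emissions).
n, true_lag = 400, 7
sig = rng.standard_normal(n)
sig2 = np.roll(sig, true_lag)

# Full cross-correlation; the index of its peak gives the lag estimate.
xc = np.correlate(sig2, sig, mode="full")
est_lag = int(np.argmax(xc)) - (n - 1)
print("estimated lag:", est_lag)

# With sampling frequency fs and pulse repetition time T_prf, the
# along-line velocity would be  v = est_lag * c / (2 * fs * T_prf)
# (values not fixed here, so the conversion is left symbolic).
```

Sub-sample lag accuracy, noise robustness, and the choice of correlation window are where the real estimator's complexity lies; the peak search itself is this simple.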
Updated estimation of energy efficiencies of U.S. petroleum refineries.
Palou-Rivera, I.; Wang, M. Q. (Energy Systems)
2010-12-08
Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels including gasoline and diesel. Energy use in petroleum refineries is a key component of well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
Using Data Envelopment Analysis approach to estimate the health production efficiencies in China
ZHANG Ning; HU Angang; ZHENG Jinghai
2007-01-01
By using the Data Envelopment Analysis approach, we treat the health production system in a given province as a Decision Making Unit (DMU), identify its inputs and outputs, evaluate its technical efficiency in 1982, 1990 and 2000 respectively, and further analyze the relationship between efficiency scores and social-environmental variables. This paper has several interesting findings. First, the provinces on the frontier differ from year to year, but the provinces far from the frontier remain unchanged. The average efficiency of health production made significant progress from 1982 to 2000. Second, all provinces in China can be divided into six categories in terms of health production outcome and efficiency, and each category has a specific approach to improving health production efficiency. Third, significant differences in health production efficiency were found among the eastern, middle and western regions of China, and between the eastern and middle regions. Finally, there is a significant positive relationship between population density and health production efficiency, but a negative (not very significant) relationship between the proportion of public health expenditure in total expense and efficiency; this may be the result of an inappropriate tendency of public expenditure. The relationship between the ability to pay for health care services and efficiency in urban areas is opposite to that in rural areas. One possible reason is the totally different income and public service treatment of rural and urban residents. Therefore, it is necessary to adjust health policies and service provisions specifically designed for different population groups.
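The DEA technical-efficiency scores discussed above come from solving one small linear program per Decision Making Unit. A sketch of the input-oriented CCR multiplier model with invented single-input, single-output data (the study's real inputs and outputs are multidimensional):

```python
import numpy as np
from scipy.optimize import linprog

# Toy health-production data (hypothetical): one input (health spending)
# and one output (outcome index) for four provinces.
X = np.array([[2.0], [4.0], [6.0], [3.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [3.0], [3.0], [2.0]])   # outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o (multiplier form)."""
    n_dmu, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them), input weights v (m).
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u.y_o
    A_ub = np.hstack([Y, -X])                          # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(n_dmu)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

scores = [ccr_efficiency(o) for o in range(len(X))]
print([round(v, 3) for v in scores])
```

A score of 1 marks a frontier DMU; scores below 1 measure the radial input contraction needed to reach the frontier, which is how the "distance from the frontier" comparisons in the abstract are quantified.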
The use of 32P and 15N to estimate fertilizer efficiency in oil palm
Improving efficiency of use of fertilizers has attracted a great deal of interest on oil-palm estates because of increasing input costs. It is assumed that higher efficiency of use of fertilizers for estate crops, including oil palm, would result in significant savings and less environmental pollution. One way to enhance efficiency of use of fertilizers by oil palm is to apply them where the most active roots are located. Previous work has indicated the possibility of determining the most active roots of tea and cinchona by using 32P. In this experiment, 32P was again used, to determine the locations of the most active roots of oil palm trees
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
Li, Ping
2008-01-01
Compressed Counting (CC) was recently proposed for approximating the $\alpha$th frequency moments of data streams, for $0 < \alpha \leq 2$. Previous studies used the standard algorithm based on symmetric stable random projections to approximate the $\alpha$th frequency moments and the entropy. Based on maximally-skewed stable random projections, Compressed Counting (CC) dramatically improves symmetric stable random projections, especially when $\alpha \approx 1$. This study applies CC to estimate the Rényi entropy, the Tsallis entropy, and the Shannon entropy. Our experiments on some Web crawl data demonstrate significant improvements over previous studies. When estimating the frequency moments, the Rényi entropy, and the Tsallis entropy, the improvements of CC, in terms of accuracy, appear to approach "infinity" as $\alpha \to 1$. When estimating Shannon entropy using Rényi entropy or Tsallis entropy, the improvements of CC, in terms of accuracy, are roughly 20- to 50-fold. When estimating the Shannon entropy fro...
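The relationship exploited above, Shannon entropy as the α → 1 limit of both Rényi and Tsallis entropy, can be checked numerically on a toy frequency vector. The data are invented for illustration, and the streaming sketch (CC itself) is not implemented here; only the entropy identities are shown.

```python
import numpy as np

# Empirical distribution from a toy stream of term frequencies.
freqs = np.array([50, 30, 10, 5, 3, 2], dtype=float)
p = freqs / freqs.sum()

def renyi(p, a):
    # Rényi entropy of order a (a != 1), from the a-th frequency moment.
    return np.log(np.sum(p ** a)) / (1.0 - a)

def tsallis(p, a):
    # Tsallis entropy of order a (a != 1), from the same moment.
    return (1.0 - np.sum(p ** a)) / (a - 1.0)

shannon = -np.sum(p * np.log(p))

# Both generalized entropies converge to Shannon entropy as a -> 1,
# which is why accurately estimating frequency moments near a = 1
# suffices for Shannon entropy estimation.
for a in (1.5, 1.1, 1.01):
    print(a, renyi(p, a), tsallis(p, a))
print("Shannon:", shannon)
```

Because both estimators need the α-th frequency moment evaluated ever closer to α = 1, the accuracy of the moment sketch at α ≈ 1 dominates the entropy error, which is exactly the regime where CC's skewed projections help.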
Ryabova, A. V.; Stratonnikov, Aleksandr A.; Loshchenov, V. B.
2006-06-01
A fast and highly informative method is presented for estimating the photodynamic activity of photosensitisers. The method makes it possible to determine the rate of photodegradation in erythrocyte-containing biological media in nearly in vivo conditions, estimate the degree of irreversible binding of oxygen dissolved in the medium during laser irradiation in the presence of photosensitisers, and determine the nature of degradation of photosensitisers exposed to light (photobleaching).
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Robert Aidoo; Fred Nimoh; John-Eudes Andivi Bakang; Kwasi Ohene-Yankyera; Simon Cudjoe Fialor; James Osei Mensah; Robert Clement Abaidoo
2012-01-01
The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency a...
Hutcheson, J P; Johnson, D E; Gerken, C L; Morgan, J B; Tatum, J D
1997-10-01
Six sets of four genetically identical Brangus steers (n = 24; mean BW 409 kg) were used to determine the effect of different anabolic implants on visceral organ mass, chemical body composition, estimated tissue deposition, and energetic efficiency. Steers within a clone set were randomly assigned to one of the following implant treatments: C, no implant; E, estrogenic; A, androgenic; or AE, androgenic + estrogenic. Steers were slaughtered 112 d after implanting; visceral organs were weighed and final body composition determined by mechanical grinding and chemical analysis of the empty body. Mass of the empty gastrointestinal tract (GIT) was reduced approximately 9% (P .10) the efficiency of ME utilization. In general, estrogenic implants decreased GIT, androgenic implants increased liver, and all implants increased hide mass. Steers implanted with an AE combination had additive effects on protein deposition compared with either implant alone. The NEg requirements for body gain are estimated to be reduced 19% by estrogenic or combination implants. PMID:9331863
Dina Miftahutdinova
2015-02-01
Full Text Available Purpose: to estimate the efficiency of the author's training program, used during the preparatory period, for the representatives of the women's Ukrainian rowing team in the process of preparation for the Olympic Games in London. Materials and Methods: 10 sportswomen of higher qualification, members of the Ukrainian rowing team, participated in the research. Standard tests and the Concept-2 rowing ergometer were used to estimate general and special physical preparedness. Results: by the end of the preparatory period a significant improvement in the general and special physical fitness of the surveyed athletes was observed, and their deviation from the model performance dropped to 5–7%. Conclusions: the results testify to the high efficiency of the author's training program for the sportswomen of the Ukrainian rowing team, who became Olympic champions in London.
Chaves, A S; Nascimento, M L; Tullio, R R; Rosa, A N; Alencar, M M; Lanna, D P
2015-10-01
The objective of this study was to examine the relationship of efficiency indices with performance, heart rate, oxygen consumption, blood parameters, and estimated heat production (EHP) in Nellore steers. Eighteen steers were individually lot-fed diets of 2.7 Mcal ME/kg DM for 84 d. Estimated heat production was determined using oxygen pulse (OP) methodology, in which heart rate (HR) was monitored for 4 consecutive days. Oxygen pulse was obtained by simultaneously measuring HR and oxygen consumption during a 10- to 15-min period. Efficiency traits studied were feed efficiency (G:F) and residual feed intake (RFI) obtained by regression of DMI in relation to ADG and midtest metabolic BW (RFI). Alternatively, RFI was also obtained based on equations reported by the NRC to estimate individual requirement and DMI (RFI calculated by the NRC [1996] equation [RFI]). The slope of the regression equation and its significance were used to evaluate the effect of efficiency indices (RFI, RFI, or G:F) on the traits studied. A mixed model was used considering RFI, RFI, or G:F and pen type as fixed effects and initial age as a covariate. For HR and EHP variables, day was included as a random effect. There was no relationship between efficiency indices and back fat depth measured by ultrasound or daily HR and EHP (P > 0.05). Because G:F is obtained in relation to BW, the slope of G:F was positive and significant ( RFI and RFI ( RFI. Oxygen consumption per beat was not related to G:F; however, it was lower for RFI- and RFI-efficient steers, and consequently, oxygen volume (mL·min·kg) and OP (μL O·beat·kg) were also lower ( RFI and RFI (P > 0.05); however, G:F-efficient steers showed lower hematocrit and hemoglobin concentrations (P < 0.05). Differences in EHP between efficient and inefficient animals were not directly detected. Nevertheless, differences in oxygen consumption and OP were detected, indicating that the OP methodology may be useful to predict growth efficiency. PMID
Valid and efficient manual estimates of intracranial volume from magnetic resonance images
Manual segmentations of the whole intracranial vault in high-resolution magnetic resonance images are often regarded as very time-consuming. Therefore it is common to only segment a few linearly spaced intracranial areas to estimate the whole volume. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, orientation of the intracranial areas and the linear spacing between them. Intracranial volumes were manually segmented on 62 participants from the Gothenburg MCI study using 1.5 T, T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm and the validity of the estimates was determined by comparison with the entire intracranial volumes. A progressive decrease in intra-class correlation and an increase in percentage error could be seen with increased linear spacing between intracranial areas. With small linear spacing (≤15 mm), orientation of the intracranial areas and interpolation method had negligible effects on the validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall resulted in the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five
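The effect of slice spacing on such volume estimates can be illustrated on a synthetic phantom. The sketch below uses a sphere standing in for the intracranial vault, and linear (trapezoidal) rather than cubic-spline interpolation purely for brevity; it reproduces the qualitative finding that the error grows with the linear spacing between sampled areas:

```python
import math

R = 80.0  # phantom "intracranial" radius in mm (illustrative value)
TRUE_VOLUME = 4.0 / 3.0 * math.pi * R ** 3

def slice_area(z):
    """Cross-sectional area of the spherical phantom at offset z from its centre."""
    return math.pi * max(R * R - z * z, 0.0)

def estimate_volume(spacing):
    """Trapezoidal volume estimate from areas sampled every `spacing` mm.
    `spacing` is assumed to divide the phantom diameter evenly."""
    n = round(2.0 * R / spacing)
    areas = [slice_area(-R + i * spacing) for i in range(n + 1)]
    return spacing * sum((a + b) / 2.0 for a, b in zip(areas, areas[1:]))

for spacing in (2.0, 10.0, 40.0):
    err = abs(estimate_volume(spacing) - TRUE_VOLUME) / TRUE_VOLUME
    print(f"{spacing:5.0f} mm spacing: {100 * err:.3f}% error")
```

On real T1-weighted data the study found cubic spline interpolation on coronal or sagittal areas to be the most robust choice at large spacings, which a higher-order interpolant would also show on this phantom.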
D-optimal and D-efficient equivalent-estimation second-order split-plot designs
2010-01-01
Industrial experiments often involve factors which are hard to change or costly to manipulate and thus make it impossible to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical agricultural...
Barr, J. G.; Engel, V.; Fuentes, J. D.; Fuller, D. O.; Kwon, H.
2012-11-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increases in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
MACHARIA, Harrison; Goos, Peter
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, model estimation for split-plot designs requires the use of generalized least squares (GLS). However, for some split-plot designs (including not only classical ...
Bulkina N.V.
2011-03-01
Full Text Available The results of an estimation of the efficiency of general eradication therapy and local therapy in patients with inflammatory periodontal diseases against the background of chronic gastritis are presented. The authors note a positive effect of the pathogenetic application of the "Asepta" gum balm, which allows normalisation of the level of oral hygiene and stable remission of periodontal diseases against the background of gastrointestinal pathology to be achieved.
D. O. Fuller
2012-11-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE) and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increases in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
Pin-Chih Wang
2014-09-01
Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, Netherlands, United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.
Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation
M. Agatonović
2012-12-01
Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
Leenheer, Andrew J.; Narang, Prineha; Atwater, Harry A., E-mail: haa@caltech.edu [Thomas J. Watson Laboratories of Applied Physics, California Institute of Technology, Pasadena, California 91125 (United States); Joint Center for Artificial Photosynthesis, Pasadena, California 91125 (United States); Lewis, Nathan S. [Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125 (United States); Joint Center for Artificial Photosynthesis, Pasadena, California 91125 (United States)
2014-04-07
Collection of hot electrons generated by the efficient absorption of light in metallic nanostructures, in contact with semiconductor substrates can provide a basis for the construction of solar energy-conversion devices. Herein, we evaluate theoretically the energy-conversion efficiency of systems that rely on internal photoemission processes at metal-semiconductor Schottky-barrier diodes. In this theory, the current-voltage characteristics are given by the internal photoemission yield as well as by the thermionic dark current over a varied-energy barrier height. The Fowler model, in all cases, predicts solar energy-conversion efficiencies of <1% for such systems. However, relaxation of the assumptions regarding constraints on the escape cone and momentum conservation at the interface yields solar energy-conversion efficiencies as high as 1%–10%, under some assumed (albeit optimistic) operating conditions. Under these conditions, the energy-conversion efficiency is mainly limited by the thermionic dark current, the distribution of hot electron energies, and hot-electron momentum considerations.
Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators
Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.
2010-04-14
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and 12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy
QU Annie; XUE Lan
2009-01-01
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase the efficiency of estimation. However, for multivariate survival data this is extremely challenging, since the martingale residual is involved and often intractable. Fan et al. have made a significant contribution by giving a closed-form formula for the optimal weights of the estimating functions such that the asymptotic variance of the estimator is minimized. Since minimizing the variance matrix is not an easy task, several strategies are proposed, such as minimizing the total variance. The most feasible one is to use the diagonal matrix entries as the weighting scheme. We congratulate them on this important work. In the following we discuss the implementation of their method and relate our work to theirs.
Binary logistic regression to estimate household income efficiency (South Darfur rural areas, Sudan)
Sofian A. A. Saad
2016-03-01
Full Text Available The main objective of this study is to find out the main factors that affect the efficiency of household income in the Darfur region. The statistical technique of binary logistic regression has been used to test whether there is a significant effect of five binary explanatory variables on the response variable (income efficiency); a sample of 136 household heads was gathered from the relevant population. The outcomes of the study showed that there is a significant effect of the level of household expenditure on the efficiency of income; household size also has a significant effect on the response variable. The remaining explanatory variables (the household head's education level, the size of the household head's own agricultural land, and the number of students at school) showed no significant effects.
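The modelling step described in this abstract can be sketched in a few lines. The data below are synthetic, and the two binary predictors (high expenditure, large household) are hypothetical stand-ins for the study's variables; the fit is a plain stochastic-gradient logistic regression rather than the package output the authors would have used:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, epochs=500):
    """Stochastic-gradient fit of a binary logistic model; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p  # gradient of the log-likelihood w.r.t. the linear predictor
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# Synthetic households: (high_expenditure, large_household) -> income_efficient
rows = ([((1, 0), 1)] * 30 + [((1, 1), 1)] * 15 + [((1, 1), 0)] * 15 +
        [((0, 0), 1)] * 5 + [((0, 0), 0)] * 25 + [((0, 1), 0)] * 30)
X = [r[0] for r in rows]
y = [r[1] for r in rows]
w = fit_logistic(X, y)
print("intercept, b_expenditure, b_household_size:", w)
```

With these synthetic frequencies, the fitted coefficient on expenditure comes out positive and the one on household size negative, mirroring the kind of sign-and-significance reading the study reports.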
Case study in higher school: problems of application and efficiency estimation
Ekimova V.I.
2015-03-01
Full Text Available Case study takes a leading position in training specialists at higher schools in the majority of foreign countries and is regarded as the most efficient way of teaching students how to solve typical professional tasks. The article reviews the general principles of and approaches to the organization of case study sessions for students. The most interesting and informative resources, as well as the most promising formats of case studies, are presented in the article. The review compiles the findings concerning the educational efficiency and developmental potential of this method. The article outlines the prospects of extending the areas of case study application in the educational process of higher schools.
Farr, Benjamin; Luijten, Erik
2013-01-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, using it only for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference, with MCMC one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong non-linear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
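For context, a minimal random-walk Metropolis sampler, the baseline that tuned-jump-proposal schemes like the one described here improve upon, looks like this (the target and step size are illustrative, not a gravitational-wave posterior):

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Random-walk Metropolis sampler with Gaussian proposals of width `step`."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# Illustrative target: a standard-normal log-posterior (up to a constant)
samples = metropolis(lambda t: -0.5 * t * t, 0.0, 1.0, 20000)
```

On a multimodal, strongly correlated posterior this fixed isotropic proposal mixes poorly, which is precisely the failure mode that proposal tuning (here, via a short burst of parallel tempering) addresses.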
Northcutt Sally L
2010-04-01
Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms, and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
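Genomic relationship matrices of the kind described here are commonly built with VanRaden's method 1. A minimal sketch under that assumption (tiny toy genotypes, not the 41,028-SNP panel used in the study) is:

```python
def genomic_relationship(M):
    """VanRaden method-1 genomic relationship matrix G = ZZ' / (2 * sum p_j(1 - p_j)),
    where M holds 0/1/2 minor-allele counts (rows = animals, columns = SNPs),
    p_j is the observed allele frequency at SNP j, and Z = M - 2P centres genotypes."""
    n, m = len(M), len(M[0])
    p = [sum(row[j] for row in M) / (2.0 * n) for j in range(m)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    Z = [[M[i][j] - 2.0 * p[j] for j in range(m)] for i in range(n)]
    return [[sum(zi * zk for zi, zk in zip(Z[i], Z[k])) / denom
             for k in range(n)] for i in range(n)]

# Two maximally dissimilar animals at two SNPs: strongly negative off-diagonal
G = genomic_relationship([[0, 2], [2, 0]])
print(G)
```

The matrix G then replaces the pedigree-based numerator relationship matrix in the mixed-model equations, which is what lets the study estimate breeding values without female-ancestor pedigree data.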
Toly Chen
2014-08-01
Full Text Available Cycle time management plays an important role in improving the performance of a wafer fabrication factory. It starts from the estimation of the cycle time of each job in the wafer fabrication factory. Although this topic has been widely investigated, several issues still need to be addressed, such as how to classify jobs suitable for the same estimation mechanism into the same group. In most existing methods, jobs are classified according to their attributes; however, the differences between the attributes of two jobs may not be reflected in their cycle times. The bi-objective nature of classification and regression trees (CART) makes them especially suitable for tackling this problem. However, in CART, the cycle times of the jobs of a branch are estimated with the same value, which is far from accurate. For this reason, this study proposes the joint use of principal component analysis (PCA), CART, and back-propagation networks (BPN), in which PCA is applied to construct a series of linear combinations of the original variables to form new variables that are as unrelated to each other as possible. According to the new variables, jobs are classified using CART before estimating their cycle times with BPNs. A real case was used to evaluate the effectiveness of the proposed methodology. The experimental results supported the superiority of the proposed methodology over some existing methods. In addition, the managerial implications of the proposed methodology are also discussed with an example.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
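The parabola peak-finder evaluated in this summary is just a three-point fit around the integer correlation maximum. A minimal sketch is below, written in 1-D for clarity; Shack-Hartmann practice applies the same fit along each axis of the 2-D correlation surface:

```python
def parabola_subpixel_peak(c):
    """Refine the integer argmax of a 1-D correlation sequence `c` with a
    three-point parabola fit, returning a sub-pixel peak position."""
    i = max(range(len(c)), key=c.__getitem__)
    if i == 0 or i == len(c) - 1:
        return float(i)  # peak on the border: no neighbours to fit
    denom = c[i - 1] - 2.0 * c[i] + c[i + 1]
    if denom == 0.0:
        return float(i)  # flat top: fall back to the integer position
    # Vertex of the parabola through (i-1, c[i-1]), (i, c[i]), (i+1, c[i+1])
    return i + 0.5 * (c[i - 1] - c[i + 1]) / denom

# Samples of an exact parabola peaking at x = 3.3 are recovered exactly
corr = [10.0 - (x - 3.3) ** 2 for x in range(7)]
shift = parabola_subpixel_peak(corr)
```

The pixel-locking bias the authors study arises because real correlation peaks are not parabolic, so this estimate is systematically pulled towards integer positions; the pyramid fit they recommend trades a different peak model for robustness to that effect.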
An estimation of the Ukrainian Polissya efficiency for the radioactive level in plant products
It was shown that adding sapropel to the soil in various doses reduces radiocaesium transfer to the yield of beet root and lupine to a greater extent, by 4 and 5 times respectively, in comparison with the control. The sapropel has a positive aftereffect. The efficiency of the sapropel is higher on soils depleted in mineral nutrient elements and humus
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
Madsen, U.; Aubertin, G.; Breum, N. O.; Fontaine, J. R.; Nielsen, Peter V.
Numerical modelling of direct capture efficiency of a local exhaust is used to compare the tracer gas technique of a proposed CEN standard against a more consistent approach based on an imaginary control box. It is concluded that the tracer gas technique is useful for field applications....
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties of tagging efficiency in pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and compare the resultant CBF values with those calibrated by using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The deltaM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable deltaM value. Taking the deltaM calculated from the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
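A Richardson-type truncation-error estimate of the kind the abstract describes can be illustrated in 1D: applying the same second-order operator on spacings h and 2h and combining the results isolates the leading error term, which then serves as the refinement indicator. This is a sketch under stated assumptions (uniform 1D grid, the operator approximating u''), not the paper's finite-volume CFD implementation; the function name and `frac` threshold are hypothetical.

```python
import numpy as np

def truncation_error_indicator(u, h, frac=0.5):
    """Richardson-type truncation-error estimate for the standard
    second-order central approximation of u'' on a uniform 1D grid:
        tau_i ~= (L_2h u - L_h u) / (2**2 - 1),
    which recovers the leading error term (h**2 / 12) * u''''(x_i).
    Cells where |tau| exceeds `frac` of its maximum are flagged for
    refinement."""
    n = len(u)
    i = np.arange(2, n - 2)  # interior points with both stencils available
    L_h = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2
    L_2h = (u[i - 2] - 2.0 * u[i] + u[i + 2]) / (2.0 * h)**2
    tau = (L_2h - L_h) / 3.0          # leading truncation error of L_h
    flag = np.abs(tau) > frac * np.abs(tau).max()
    return tau, flag
```

As the abstract notes, the subtlety is not computing `tau` but deciding how it should map to local grid density, since truncation and discretisation errors are related in a non-trivial way.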
Park, Timothy A.; Loomis, John B.
1992-01-01
This paper empirically tested the three conditions identified by McConnell for equivalence of the linear utility difference model and the valuation function approach to dichotomous choice contingent valuation. Using a contingent valuation survey for deer hunting in California, two of the three conditions were violated. Even though the models are not simple linear transforms of each other for this survey, estimates of mean willingness to pay and their associated 95% confidence intervals around...
Efficient Quantum State Estimation by Continuous Weak Measurement and Dynamical Control
Smith, Greg A; Silberfarb, Andrew; Deutsch, Ivan H.; Jessen, Poul S.
2006-01-01
We demonstrate a fast, robust and non-destructive protocol for quantum state estimation based on continuous weak measurement in the presence of a controlled dynamical evolution. Our experiment uses optically probed atomic spins as a testbed, and successfully reconstructs a range of trial states with fidelities of ~90%. The procedure holds promise as a practical diagnostic tool for the study of complex quantum dynamics, the testing of quantum hardware, and as a starting point for new types of ...
Kovtun, Yu V; Skibenko, A I; Yuferov, V B
2012-01-01
The processes of injection of a sputtered-and-ionized working material into the pulsed reflex discharge plasma have been considered at the initial stage of dense gas-metal plasma formation. A calculation model has been proposed to estimate the parameters of the sputtering mechanism for the required working material to be injected into the discharge. The data obtained are in good accordance with experimental results.
Popkov V.M.; Fomkin R.N.; Blyumberg B.I.
2013-01-01
Research objective: To study the role of prognostic factors in estimating the risk of recurrent prostate cancer after treatment by high-intensity focused ultrasound (HIFU). Objects and Research Methods: The research included 102 patients with localized prostate cancer morphologically verified by biopsy. They were treated in the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by ...
Egizio, Victoria B.; Eddy, Michael; Robinson, Matthew; Jennings, J. Richard
2010-01-01
Researchers are interested in respiratory sinus arrhythmia (RSA) as an index of cardiac vagal activity. Yet, debate exists about how to account for respiratory influences on quantitative indices of RSA. Ritz and colleagues (2001) developed a within-individual correction procedure by which the effects of respiration on RSA may be estimated using regression models. We replicated their procedure substituting a spectral high-frequency measure of RSA for a time-domain statistic and a respiratory b...
Efficient parameter estimation in 2D transport models based on an adjoint formalism
An adjoint based optimization procedure is elaborated to estimate transport coefficients for plasma edge models based on a limited set of known profiles at different locations. It is shown that a set of adjoint equations can accurately determine all sensitivities towards transport coefficients at once. A proof of principle is provided on a simple geometry. The methodology is subsequently applied to assess whether a simple edge model can be tuned toward full B2-EIRENE profiles for a JET-configuration. (paper)
The efficiency of different estimation methods of hydro-physical limits
Emma María Martínez; Tomas Serafín Cuesta; Javier José Cancela
2012-01-01
The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitabi...
An Efficient Deconvolution Algorithm for Estimating Oxygen Consumption During Muscle Activities
Dash, Ranjan K.; Somersalo, Erkki; Cabrera, Marco E; Calvetti, Daniela
2007-01-01
The reconstruction of an unknown input function from noisy measurements in a biological system is an ill-posed inverse problem. Any computational algorithm for its solution must use some kind of regularization technique to neutralize the disastrous effects of amplified noise components on the computed solution. In this paper, following a hierarchical Bayesian statistical inversion approach, we seek estimates for the input function and regularization parameter (hyperparameter) that maximize th...
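The hierarchical Bayesian scheme in the abstract jointly estimates the input function and the regularization hyperparameter. A minimal fixed-hyperparameter analogue of the same idea is Tikhonov regularization, which stabilizes the ill-posed deconvolution by penalizing the solution norm; the sketch below assumes a discrete linear measurement model y = K u + noise, and the function name and `alpha` choice are illustrative, not the paper's method.

```python
import numpy as np

def tikhonov_deconvolve(y, K, alpha):
    """Recover the input u from noisy measurements y = K u + noise by
    minimizing ||K u - y||^2 + alpha * ||u||^2. The penalty term
    neutralizes the amplified noise components that make the naive
    inverse blow up; here alpha is fixed rather than estimated
    hierarchically as in the paper."""
    n = K.shape[1]
    # Normal equations of the regularized least-squares problem.
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
```

Increasing `alpha` trades fidelity for stability; the hierarchical Bayesian approach effectively lets the data choose this trade-off.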
Tamošiūnas, M.; Jakovels, D.; Lihačovs, A.; Kilikevičius, A.; Baltušnikas, J.; Kadikis, R.; Šatkauskas, S.
2014-10-01
Electroporation and ultrasound-induced sonoporation have been shown to induce plasmid DNA transfection into the mouse tibialis cranialis muscle. This offers new prospects for gene therapy and cancer treatment. However, numerous experimental data are still needed to provide a plausible explanation of the mechanisms governing DNA electro- or sono-transfection, as well as to update transfection protocols for increased transfection efficiency. In this study we aimed to apply non-invasive optical diagnostic methods for real-time evaluation of GFP transfection levels at reduced costs for experimental apparatus and animal consumption. Our experimental set-up allowed monitoring of GFP levels in live mouse tibialis cranialis muscle and provided the parameters for determining DNA transfection efficiency.
McGuire, Kimberly; de Croon, Guido; de Wagter, Christophe; Remes, Bart; Tuyls, Karl; Kappen, Hilbert
2016-01-01
Autonomous flight of pocket drones is challenging due to the severe limitations on on-board energy, sensing, and processing power. However, tiny drones have great potential as their small size allows maneuvering through narrow spaces while their small weight provides significant safety advantages. This paper presents a computationally efficient algorithm for determining optical flow, which can be run on an STM32F4 microprocessor (168 MHz) of a 4 gram stereo-camera. The optical flow algorithm ...
SiC modular multilevel converters: sub-module voltage ripple analysis and efficiency estimations
Perez Basante, Angel; Pou Félix, Josep; Ceballos Recio, Salvador; Gil De Muro, Asier; Pujana, Ainhoa; Ibañez, Pedro
2014-01-01
Two important technical challenges associated with the Modular Multilevel Converter (MMC) are the reduction of the voltage ripple of the Sub-Module (SM) capacitors and the reduction of the converter losses. This paper conducts a study focused on these two topics. Firstly, the effect of a circulating current with a predefined second harmonic on the SM voltage ripple is assessed. Secondly, an efficiency study for two different MMCs, one with silicon (Si) and the other with silicon carbide (SiC)...
A. Rosati; DeJong, T M
2003-01-01
It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, ‘daily’ photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthes...
Efficient estimation in the bivariate normal copula model: normal margins are least favourable
Klaassen, Chris A. J.; Wellner, Jon A.
1997-01-01
Consider semi-parametric bivariate copula models in which the family of copula functions is parametrized by a Euclidean parameter θ of interest and in which the two unknown marginal distributions are the (infinite-dimensional) nuisance parameters. The efficient score for θ can be characterized in terms of the solutions of two coupled Sturm-Liouville equations. Where the family of copula functions corresponds to the normal distributions with mean 0, variance 1 and correlation θ, the solution of ...
Simoncini, David; Zhang, Kam Y. J.
2013-01-01
Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex e...
Massive MIMO Systems With Non-Ideal Hardware: Energy Efficiency, Estimation, and Capacity Limits
Bjornson, Emil; Hoydis, Jakob; Kountouris, Marios; Debbah, Merouane
2014-01-01
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little inter-user interference. Since these results rely on asymptotics, it is...
This paper describes the EPA's voluntary ENERGY STAR program and the results of the automobile manufacturing industry's efforts to advance energy management as measured by the updated ENERGY STAR Energy Performance Indicator (EPI). A stochastic single-factor input frontier estimation using the gamma error distribution is applied to separately estimate the distribution of the electricity and fossil fuel efficiency of assembly plants using data from 2003 to 2005 and then compared to model results from a prior analysis conducted for the 1997–2000 time period. This comparison provides an assessment of how the industry has changed over time. The frontier analysis shows a modest improvement (reduction) in “best practice” for electricity use and a larger one for fossil fuels. This is accompanied by a large reduction in the variance of fossil fuel efficiency distribution. The results provide evidence of a shift in the frontier, in addition to some “catching up” of poor performing plants over time. - Highlights: • A non-public dataset of U.S. auto manufacturing plants is compiled. • A stochastic frontier with a gamma distribution is applied to plant level data. • Electricity and fuel use are modeled separately. • Comparison to prior analysis reveals a shift in the frontier and “catching up”. • Results are used by ENERGY STAR to award energy efficiency plant certifications
Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin R.; Henderson, Stephen M.
2016-09-01
Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m² quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing the 3D structure in greater detail, photogrammetry has potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.
Kato, M.; Hachisu, I.
1999-01-01
We have calculated the mass accumulation efficiency during helium shell flashes to examine whether or not a carbon-oxygen white dwarf (C+O WD) grows up to the Chandrasekhar mass limit to ignite a Type Ia supernova explosion. It has been frequently argued that luminous super-soft X-ray sources and symbiotic stars are progenitors of SNe Ia. In such systems, a C+O WD accretes hydrogen-rich matter from a companion and burns hydrogen steadily on its surface. The WD develops a helium layer undernea...
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin; Noé, Frank
2013-04-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may improve the convergence of expectation values of equilibrium probabilities and expectation values of stationary quantities significantly. Unfortunately the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo sampling method for reversible transition matrices with fixed stationary distribution are given. The sampling approach is applied to a toy example as well as to simulations of the MR121-GSGS-W peptide, and is demonstrated to converge much more rapidly than a previous approach of Noé [J. Chem. Phys. 128, 244103 (2008), 10.1063/1.2916718].
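As a minimal illustration of building a transition matrix with a prescribed stationary distribution (not the maximum-likelihood or Monte Carlo estimators introduced in the paper), the classical Metropolis construction turns any symmetric proposal matrix into a reversible transition matrix whose stationary vector is exactly the given π:

```python
import numpy as np

def metropolis_chain(pi, P=None):
    """Build a transition matrix T with prescribed stationary
    distribution pi via the Metropolis construction:
        T_ij = P_ij * min(1, pi_j / pi_i)   for i != j,
    with the diagonal set so that rows sum to 1. For symmetric P this
    satisfies detailed balance pi_i T_ij = pi_j T_ji, hence pi is
    stationary. Illustrative only; the paper estimates T from observed
    transition counts."""
    pi = np.asarray(pi, dtype=float)
    n = len(pi)
    if P is None:
        P = np.full((n, n), 1.0 / n)  # uniform symmetric proposal
    T = P * np.minimum(1.0, pi[None, :] / pi[:, None])
    np.fill_diagonal(T, 0.0)                     # clear diagonal first
    np.fill_diagonal(T, 1.0 - T.sum(axis=1))     # fold rejection into T_ii
    return T
```

The estimation problem the paper solves is the data-driven version of this: among all reversible matrices with stationary vector π, find the one most compatible with the observed transition counts.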
An efficient Bandwidth Demand Estimation for Delay Reduction in IEEE 802.16j MMR WiMAX Networks
Fath Elrahman Ismael
2010-01-01
Full Text Available IEEE 802.16j MMR WiMAX networks allow the number of hops between the user and the MMR-BS to be more than two. The standard bandwidth request procedure in WiMAX networks introduces much delay to the user data and to the acknowledgement of TCP packets, which affects the performance and throughput of the network. In this paper, we propose a new scheduling scheme to reduce the bandwidth request delay in MMR networks. In this scheme, the MMR-BS allocates bandwidth to its direct subordinate RSs without a bandwidth request, using the Grey prediction algorithm to estimate the required bandwidth of each of its subordinate RSs. Using this architecture, the access RS can allocate its subordinate MSs the required bandwidth without notification to the MMR-BS. Our scheduling architecture with efficient bandwidth demand estimation is able to reduce delay significantly.
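The Grey prediction step mentioned above is typically the GM(1,1) model: accumulate the observed series, fit a first-order grey differential equation by least squares, and difference the exponential solution back to forecast the next demand. The sketch below shows that standard algorithm; since the abstract does not give the paper's exact formulation, the function name and details are assumptions.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """GM(1,1) grey-model forecast of the next `steps` values of the
    positive series x (e.g. recent bandwidth demands). Fits
    x0(k) + a * z1(k) = b by least squares on the accumulated series
    and extrapolates the exponential solution."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                         # accumulated (1-AGO) series
    z = -0.5 * (x1[:-1] + x1[1:])             # negated background values
    B = np.column_stack([z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # grey parameters
    # Accumulated prediction, then difference back to the original series.
    x1_hat = lambda t: (x[0] - b / a) * np.exp(-a * t) + b / a
    k = np.arange(len(x), len(x) + steps)
    return x1_hat(k) - x1_hat(k - 1)
```

GM(1,1) needs only a short history window, which is why it suits per-frame bandwidth estimation at the MMR-BS without waiting for explicit requests.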
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert
Korniyenko S.V.
2011-12-01
Full Text Available One of the priority directions in modern construction is ensuring the energy efficiency of buildings and structures. This problem can be addressed by improving architectural, structural and engineering solutions. Of particular interest is estimating the influence of the temperature and moisture regime of enclosing structures on the thermal performance and energy efficiency of buildings. Analysis of the data available in the literature has shown the absence of effective methods for calculating the temperature and moisture regime in edge zones of enclosing structures, which complicates the solution of this problem. The purpose of this work is to estimate the influence of edge zones on the thermal performance and energy efficiency of buildings. A design procedure for the energy parameters of a building over the heating period has been developed and implemented in a computer program. The procedure allows calculating the energy consumed for heating, hot water supply and electricity supply. Heating consumption includes conduction heat losses through the building envelope taking edge zones into account, ventilation and air leakage (infiltration) heat losses, internal household heat emissions, and heat gains from solar radiation. An example shows that accounting for edge zones raises conduction heat losses through the building envelope by 37%, the thermal energy consumed for heating by 32%, and the combined thermal and electric energy consumption by 13%. Consequently, the temperature and moisture regime in edge zones of enclosing structures has an essential impact on building power consumption. Improving the structural design reduces transmission heat losses through the building envelope by 29%, the thermal energy consumed for heating by 25%, and the thermal and electric energy consumption by 10%. Thus, improvement of edge zones of enclosing structures has a high potential for energy efficiency.
Estimation of spray system efficiency in case of loss in coolant severe accident condition
The results of a pressurizer surge line double-ended break accident analysis in case of failure of the ECCS at the Armenian NPP are presented. Based on the analysis results, the efficiency of the spray system in decreasing confinement pressure and the amount of radioactive material released is assessed. Hydrogen behavior in the confinement is analyzed, and the occurrence of conditions for possible hydrogen burning in the confinement is assessed as well. The likelihood of the accident is in the range of 10⁻⁷; however, such accidents need to be taken into account for accident analysis purposes. The analysis shows that the main contributor to release reduction is the spray system availability. Unavailability of the spray system could lead to an increase of the radioactive release by a factor of 8
Estimation of Power Efficiency of Combined Heat Pumping Stations in Heat Power Supply Systems
I. I. Matsko
2014-07-01
Full Text Available The paper considers realizing the advantages of heat pumping technologies for heat supply needs on the basis of combining electric-drive heat pumping units with water heating boilers as part of a combined heat pumping station. The possibility of saving non-renewable energy resources by using combined heat pumping stations instead of water heating boiler houses is shown in the paper. A calculation methodology for the power efficiency of introducing combined heat pumping stations has been developed. The seasonal heat needs, depending on the heating system temperature schedule, the temperature of a low-potential heat source and regional weather parameters, are taken into account in the calculations.
An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation
Mingwei Shen
2014-09-01
Full Text Available In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results.
Latyshev N.V.
2012-03-01
Full Text Available The purpose of this work was to experimentally verify the efficiency of a method for developing the special endurance of athletes with the use of control-trainer devices. The experiment involved 24 athletes aged 16-17 years. Reliable differences between the groups of athletes were found in the indices of the special physical preparation tests (throws with the arms and penetration to the legs), in the special endurance test (on all test indices except the number of exercises executed in the first period), and during work on the control-trainer device (work on the trainer for 60 seconds and work on the trainer for 3×120 seconds).
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or one of its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort, reducing it to roughly 3% of that of the full KF. © 2012 American Society of Civil Engineers.
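The computational saving of low-rank filters such as the SEKF/SFKF comes from never forming the full n x n covariance. A minimal sketch of the analysis step, assuming the forecast covariance is factored as P = L L^T with L of size n x r (r << n), shows that the Kalman gain computed from the factor alone is algebraically identical to the full-rank gain; sizes and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 200, 8, 10              # state dim, covariance rank, no. of obs
L = rng.standard_normal((n, r))   # low-rank covariance factor, P = L @ L.T
H = rng.standard_normal((m, n))   # linear observation operator
R = 0.1 * np.eye(m)               # observation-error covariance

# Full-rank gain: K = P H^T (H P H^T + R)^-1 -- needs the n x n matrix P
P = L @ L.T
K_full = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Factored gain: K = L (H L)^T (H L (H L)^T + R)^-1 -- only n x r storage,
# since P H^T = L (H L)^T and H P H^T = (H L)(H L)^T
HL = H @ L
K_low = L @ HL.T @ np.linalg.inv(HL @ HL.T + R)

print(np.allclose(K_full, K_low))   # same gain, far cheaper to form
```

The factored form replaces O(n^2) storage and O(n^2 m) work with O(n r) storage and O(n r m) work, which is the source of the roughly 3% cost reported in the abstract.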
F.Y.Wu; Y.H.Zhou; F.Tong; R.Kastner
2013-01-01
Underwater acoustic channels are recognized as one of the most difficult propagation media, owing to considerable impairments such as multipath, ambient noise, and time-frequency selective fading. Exploiting the sparsity of underwater acoustic channels offers a potential way to improve the performance of underwater acoustic channel estimation. Compared with the classic l0- and l1-norm constraint LMS algorithms, the p-norm-like (lp) constraint LMS algorithm proposed in our previous investigation exhibits better sparsity exploitation in the presence of channel variations, as tuning the parameter p enables adaptation to the degree of sparseness. However, the decimal exponential calculation associated with the p-norm-like constraint poses considerable limitations in practical application. In this paper, a simplified variant of the p-norm-like constraint LMS is proposed, employing the Newton iteration method to approximate the decimal exponential calculation. Numerical simulations and experimental results obtained in physical shallow-water channels demonstrate the effectiveness of the proposed method compared to traditional norm-constraint LMS algorithms.
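The general mechanism can be sketched as an LMS identifier of a sparse channel with a p-norm-like zero attractor added to the weight update. The attractor form, step sizes, and p value below are assumptions for illustration, not the authors' exact algorithm or its Newton-iteration approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps = 4000, 64
h_true = np.zeros(taps)
h_true[[3, 17, 40]] = [1.0, -0.5, 0.3]       # sparse channel: 3 active taps

x = rng.standard_normal(N)                    # transmitted sequence
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)  # rx + noise

mu, rho, p, eps = 0.005, 5e-5, 0.5, 0.05      # assumed algorithm parameters
w = np.zeros(taps)
for n in range(taps, N):
    u = x[n - taps + 1:n + 1][::-1]           # regressor, most recent first
    e = d[n] - w @ u                          # a-priori estimation error
    # LMS gradient step plus a p-norm-like zero attractor that pulls
    # small taps toward zero while barely affecting large ones
    attractor = p * np.sign(w) / (eps + np.abs(w)) ** (1 - p)
    w = w + mu * e * u - rho * attractor

print("MSD:", np.mean((w - h_true) ** 2))     # mean-square tap deviation
```

The decimal exponent (1 - p) in the attractor is exactly the operation the paper's simplified variant approximates by Newton iteration to avoid the cost of a fractional power per tap per update.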
A simplified model of natural and mechanical removal to estimate cleanup equipment efficiency
Lehr, W. [National Oceanic and Atmospheric Administration, Seattle, WA (United States)
2001-07-01
Oil spill response organizations rely on modelling to make decisions in offshore response operations. Models are used to test different cleanup strategies and to measure the expected cost of cleanup and the reduction in environmental impact. The oil spill response community has traditionally used the concept of a worst-case scenario in developing contingency plans for spill response. However, there are many drawbacks to this approach. The Hazardous Materials Response Division of the National Oceanic and Atmospheric Administration, in cooperation with the U.S. Navy Supervisor of Salvage and Diving, has developed a Trajectory Analysis Planner (TAP) which gives planners a tool to try out different cleanup strategies and equipment configurations based upon historical wind and current conditions instead of worst-case scenarios. The spill trajectory model is a classic example in oil spill modelling of using advanced non-linear three-dimensional hydrodynamical sub-models to estimate surface currents under conditions where oceanographic initial conditions are not accurately known and forecasts of wind stress are unreliable. In order to get better answers, it is often necessary to refine input values rather than increase the sophistication of the hydrodynamics. This paper describes another spill example where the level of complexity of the algorithms needs to be evaluated with regard to the reliability of the input, the sensitivity of the answers to input and model parameters, and the comparative reliability of other algorithms in the model. 9 refs., 1 fig.
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Jeyarajan Thiyagalingam
2011-01-01
Full Text Available Images are ubiquitous in biomedical applications, from basic research to clinical practice. With the rapid increase in resolution and dimensionality of images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time performance. We also show how architectural peculiarities of these devices can be best exploited to the benefit of algorithms, most specifically by addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.
Efficient architecture for global elimination algorithm for H.264 motion estimation
P Muralidhar; C B Ramarao
2016-01-01
This paper presents a fast block-matching motion estimation algorithm and its architecture. The proposed architecture is based on the Global Elimination (GE) algorithm, which uses pixel averaging to reduce the complexity of motion search while keeping performance close to that of full search. GE uses a preprocessing stage that can skip unnecessary Sum of Absolute Differences (SAD) calculations by comparing the minimum SAD with a sub-sampled SAD (SSAD). In the second stage, SAD is computed at the roughly matched candidate positions. The GE algorithm uses fixed sub-block sizes and shapes to compute SSAD values in the preprocessing stage. Its complexity is further reduced by adaptively changing the sub-block sizes depending on macroblock features. In this paper, an adaptive Global Elimination algorithm has been implemented which reduces the computational complexity of the motion estimation algorithm and thus results in low power dissipation. The proposed architecture requires 60% fewer computations than the existing full-search architecture and achieves 50% higher throughput than the existing fixed Global Elimination architecture.
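The two-stage screening idea can be sketched in software: a cheap SSAD built from sub-block averages ranks all candidate positions, and the exact SAD is computed only for the best-scoring survivors. Block size, search range, and the number of survivors kept are illustrative assumptions, and no claim is made about matching the paper's fixed or adaptive sub-block shapes.

```python
import numpy as np

def sad(a, b):
    """Exact sum of absolute differences between two blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def ssad(a, b, g=4):
    """Cheap SSAD: compare g x g grids of sub-block averages."""
    def means(x):
        s0, s1 = x.shape[0] // g, x.shape[1] // g
        return x.reshape(g, s0, g, s1).mean(axis=(1, 3))
    return float(np.abs(means(a) - means(b)).sum())

def ge_search(ref, cur, by, bx, bsize=16, srange=8, keep=8):
    """Return the motion vector (dy, dx) for the cur block at (by, bx)."""
    block = cur[by:by + bsize, bx:bx + bsize]
    cands = []
    for dy in range(-srange, srange + 1):          # stage 1: rank by SSAD
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cands.append((ssad(block, ref[y:y + bsize, x:x + bsize]), dy, dx))
    cands.sort()
    best = None
    for _, dy, dx in cands[:keep]:                 # stage 2: exact SAD only
        s = sad(block, ref[by + dy:by + dy + bsize, bx + dx:bx + dx + bsize])
        if best is None or s < best[0]:
            best = (s, dy, dx)
    return best[1], best[2]

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))            # frame shifted by (2, 3)
print(ge_search(ref, cur, 24, 24))                 # → (-2, -3)
```

Here only `keep` exact SAD evaluations are performed instead of one per candidate (289 for a ±8 search range), which is where the computation savings come from.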
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to real-time applications because of the variability of eye images; hence, to date, such methods have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, applicable to any infrared-illuminated eye image without a tuning parameter, is proposed for segmenting the pupil from background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned or fixed-parameter methods. Importantly, the method is convenient and robust for accurate and fast estimation of eye activity in the presence of variations due to different users, task types, loads, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future. PMID:24691198
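A parameter-free threshold for segmenting a dark pupil can be illustrated with a histogram-based criterion. The sketch below uses Otsu's between-class variance as a generic stand-in for the self-tuning idea (it is not the authors' method) on a synthetic dark-disc "eye image".

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum = np.cumsum(prob)
    cum_mean = np.cumsum(prob * np.arange(256))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0, w1 = cum[t], 1.0 - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0                     # mean of the dark class
        m1 = (mean_total - cum_mean[t]) / w1      # mean of the bright class
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic eye image: dark pupil disc (level 30) on a brighter background
img = np.full((100, 100), 180, dtype=np.uint8)
yy, xx = np.mgrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2] = 30

t = otsu_threshold(img)
pupil_mask = img <= t          # pixels at or below the threshold = pupil
print(t, pupil_mask.sum())
```

On real infrared images the boundary points of such a mask would then feed the convex hull and dual-ellipse fitting stages the abstract describes, to reject eyelid occlusion and recover pupil size.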