Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, summarizes error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
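The global index at the center of this analysis can be sketched in a few lines. The snippet below is a generic Moran's I computation in Python with invented values and a hypothetical chain-adjacency weight matrix, not the SAS/GIS® pipeline used in the study.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for observations `values` and a spatial weight
    matrix `weights` whose row/column order matches `values`."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    num = (w * np.outer(z, z)).sum()      # weighted cross-products
    den = (z ** 2).sum()
    return (len(x) / w.sum()) * (num / den)
```

Positive values indicate spatial clustering of similar covariate values; values near the negative extreme indicate alternation.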
Bootstrap Estimation for Nonparametric Efficiency Estimates
1995-01-01
This paper develops a consistent bootstrap estimation procedure to obtain confidence intervals for nonparametric measures of productive efficiency. Although the methodology is illustrated in terms of technical efficiency measured by output distance functions, the technique can be easily extended to other consistent nonparametric frontier models. Variation in estimated efficiency scores is assumed to result from variation in empirical approximations to the true boundary of the production set. ...
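The resampling mechanics can be illustrated with a naive percentile bootstrap. Note that the paper's point is that a consistent bootstrap for frontier estimators requires more care than plain resampling of efficiency scores, so the sketch below (with invented scores and a mean statistic) only shows the basic construction of the interval.

```python
import random

def bootstrap_ci(scores, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for a statistic of efficiency
    scores: resample with replacement, recompute the statistic,
    take empirical quantiles."""
    rng = random.Random(seed)
    n = len(scores)
    reps = sorted(
        stat([scores[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```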
Efficient Estimation in Heteroscedastic Varying Coefficient Models
Directory of Open Access Journals (Sweden)
Chuanhua Wei
2015-07-01
Full Text Available This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Management systems efficiency estimation in tourism organizations
Alexandra I. Mikheyeva
2011-01-01
The article is concerned with the estimation of management-system efficiency in tourism organizations; it examines the requirements and characteristics of effective management systems in such organizations and takes into account the principles of management-system formation.
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2017-01-01
estimator has a high computational complexity. In this paper, we propose an algorithm for lowering this complexity significantly by showing that the NLS estimator can be computed efficiently by solving two Toeplitz-plus-Hankel systems of equations and by exploiting the recursive-in-order matrix structures...
Efficient estimation of rare-event kinetics
Trendelkamp-Schroer, Benjamin
2014-01-01
The efficient calculation of rare-event kinetics in complex dynamical systems, such as the rate and pathways of ligand dissociation from a protein, is a generally unsolved problem. Markov state models can systematically integrate ensembles of short simulations and thus effectively parallelize the computational effort, but the rare events of interest still need to be spontaneously sampled in the data. Enhanced sampling approaches, such as parallel tempering or umbrella sampling, can accelerate the computation of equilibrium expectations massively - but sacrifice the ability to compute dynamical expectations. In this work we establish a principle to combine knowledge of the equilibrium distribution with kinetics from fast "downhill" relaxation trajectories using reversible Markov models. This approach is general as it does not invoke any specific dynamical model, and can provide accurate estimates of the rare event kinetics. Large gains in sampling efficiency can be achieved whenever one direction of the proces...
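The role of reversibility can be illustrated with the simplest detailed-balance estimator: symmetrizing the observed transition counts. This is a much cruder estimator than the constrained reversible maximum-likelihood estimation the paper builds on, and the counts below are invented.

```python
import numpy as np

def reversible_msm(counts):
    """Naive reversible Markov state model: symmetrize the observed
    transition counts and row-normalize. The result satisfies detailed
    balance with respect to its stationary distribution."""
    C = np.asarray(counts, dtype=float)
    S = C + C.T                            # enforce reversibility
    return S / S.sum(axis=1, keepdims=True)
```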
Microbiological estimate of parodontitis laser therapy efficiency
Mamedova, F. M.; Akbarova, Ju. A.; Bajenov, L. G.; Arslanbekov, T. U.
1995-04-01
In this work, a microbiological estimate of the efficiency of ultraviolet and He-Ne laser radiation in the treatment of parodontitis was carried out. Ninety persons with parodontitis of moderate severity were investigated. The optimal regimes of ultraviolet radiation influence on various microorganisms isolated from the pathologic tooth pocket (PTP) were determined. On the basis of the study of the species composition of the microflora and the data on microbial dissemination of the PTP, we may conclude that the combined He-Ne and ultraviolet laser radiation shows the most pronounced antimicrobial effect.
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric m...
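The fast autocorrelation-based baseline mentioned above can be sketched as follows; the paper's statistically efficient estimator is considerably more involved, and the sampling rate and search band here are illustrative.

```python
import numpy as np

def acf_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Autocorrelation-based fundamental frequency estimate: pick the
    lag with the largest autocorrelation inside the search band and
    convert it back to Hz."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(r[lo:hi + 1]))
    return fs / lag
```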
Efficient estimation for ergodic diffusions sampled at high frequency
DEFF Research Database (Denmark)
Sørensen, Michael
of estimators including most of the previously proposed estimators for diffusion processes, for instance GMM-estimators and the maximum likelihood estimator. Simple conditions are given that ensure rate optimality, where estimators of parameters in the diffusion coefficient converge faster than estimators...... of parameters in the drift coefficient, and for efficiency. The conditions turn out to be equal to those implying small Δ-optimality in the sense of Jacobsen and thus give an interpretation of this concept in terms of classical statistical concepts. Optimal martingale estimating functions in the sense...... of Godambe and Heyde are shown to give rate optimal and efficient estimators under weak conditions....
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Quantum enhanced estimation of optical detector efficiencies
Directory of Open Access Journals (Sweden)
Barbieri Marco
2016-01-01
Full Text Available Quantum mechanics establishes the ultimate limit to the scaling of the precision on any parameter, by identifying optimal probe states and measurements. While this paradigm is, at least in principle, adequate for the metrology of quantum channels involving the estimation of phase and loss parameters, we show that estimating the loss parameters associated with a quantum channel and a realistic quantum detector are fundamentally different. While Fock states are provably optimal for the former, we identify a crossover in the nature of the optimal probe state for estimating detector imperfections as a function of the loss parameter using Fisher information as a benchmark. We provide theoretical results for on-off and homodyne detectors, the most widely used detectors in quantum photonics technologies, when using Fock states and coherent states as probes.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent...... of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss......
How efficient is estimation with missing data?
DEFF Research Database (Denmark)
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs) where their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data percentages (MDP) using a missing completely at random (MCAR) scheme. We compare three MDTs: pairwise deletion (PW), mean imputation (MI) and a maximum likelihood method that we call complete expectation maximization (CEM). We use a synthetic dataset, the Iris dataset and the Pima Indians Diabetes dataset. We train a Gaussian mixture model (GMM). We test the trained GMM for two cases, in which the test dataset is missing or complete. The results show that CEM is the most efficient method in both cases while MI is the worst performer of the three. PW and CEM prove to be more stable, in particular for higher MDP......
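The MCAR masking and the mean-imputation (MI) technique compared in this entry are simple to sketch; listwise/pairwise deletion and CEM are omitted, and the toy data below are invented.

```python
import random

def mcar_mask(data, p, seed=0):
    """Delete each entry independently with probability p (MCAR)."""
    rng = random.Random(seed)
    return [[None if rng.random() < p else v for v in row] for row in data]

def mean_impute(data):
    """Replace each missing entry by the mean of the observed values
    in its column (columns must have at least one observed value)."""
    means = []
    for col in zip(*data):
        obs = [v for v in col if v is not None]
        means.append(sum(obs) / len(obs))
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in data]
```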
Efficient Quantum State Estimation with Over-complete Tomography
Zhang, Chi; Xiang, Guo-Yong; Zhang, Yong-Sheng; Li, Chuan-Feng; Guo, Guang-Can
2011-01-01
It is widely accepted that the selection of measurement bases can affect the efficiency of quantum state estimation methods; the precision of estimating an unknown state can be improved significantly by simply introducing a set of symmetrical measurement bases. Here we compare the efficiencies of estimation with different numbers of measurement bases by numerical simulation and by experiment in an optical system. The advantages of using a complete set of symmetrical measurement bases are illustrated mor...
Efficient estimation under privacy restrictions in the disclosure problem
Albers, Willem
1984-01-01
In the disclosure problem, already-collected data are disclosed only to such an extent that individual privacy is protected to at least a prescribed level. For this problem, estimators are introduced which are both simple and efficient.
A semiparametric efficient estimator in case-control studies
Ma, Yanyuan
2010-01-01
We construct a semiparametric estimator in case-control studies where the gene and the environment are assumed to be independent. A discrete or continuous parametric distribution of the genes is assumed in the model. A discrete distribution of the genes can be used to model the mutation or presence of certain group of genes. A continuous distribution allows the distribution of the gene effects to be in a finite-dimensional parametric family and can hence be used to model the gene expression levels. We leave the distribution of the environment totally unspecified. The estimator is derived through calculating the efficiency score function in a hypothetical setting where a close approximation to the samples is random. The resulting estimator is proved to be efficient in the hypothetical situation. The efficiency of the estimator is further demonstrated to hold in the case-control setting as well.
A Computationally Efficient Method for Polyphonic Pitch Estimation
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
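The first-stage peak picking can be sketched generically; the spectral-irregularity pruning of the second stage, and the RTFI itself, are beyond this illustration, and the spectrum below is invented.

```python
import numpy as np

def pick_pitches(power, freqs, threshold_ratio=0.1):
    """Keep local maxima of a pitch-energy spectrum that exceed a
    fraction of the global maximum; returns their frequencies."""
    peaks = []
    top = power.max()
    for i in range(1, len(power) - 1):
        if (power[i] > power[i - 1] and power[i] >= power[i + 1]
                and power[i] >= threshold_ratio * top):
            peaks.append(freqs[i])
    return peaks
```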
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...... bound. Experiments on real-life signals indicate the applicability of the methods in practical low local signal-to-noise ratios....
Efficient sensor placement for state estimation in structural dynamics
Hernandez, Eric M.
2017-02-01
This paper derives a computationally efficient algorithm to determine optimal sequential sensor placement for state estimation in linear structural systems subjected to unmeasured excitations and noise contaminated measurements. The proposed algorithm is developed within the context of the Kalman filter and it minimizes the variance of the state estimate among all possible sequential sensor locations. The paper investigates the effects of measurement type, covariance matrix partition selection, spatial correlation of excitation and model selection on optimal sensor placement. The paper shows that the sequential approach reaches the optimal sensor placement as the number of sensors increases.
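A greedy variant of sequential placement under the Kalman filter is easy to sketch: at each step, pick the candidate measurement row whose update most reduces the trace of the state covariance. This is a hypothetical simplification of the paper's algorithm, with an invented two-state example.

```python
import numpy as np

def greedy_placement(P, H_rows, R, k):
    """Pick k sensors from candidate measurement rows H_rows, each time
    choosing the one whose Kalman update most reduces trace(P).
    R is the scalar measurement noise variance."""
    P = P.copy()
    chosen = []
    for _ in range(k):
        best_i, best_tr, best_P = None, np.inf, None
        for i, h in enumerate(H_rows):
            if i in chosen:
                continue
            h = h.reshape(1, -1)
            S = h @ P @ h.T + R           # innovation variance
            K = P @ h.T / S               # Kalman gain
            P_upd = P - K @ h @ P         # posterior covariance
            if np.trace(P_upd) < best_tr:
                best_i, best_tr, best_P = i, np.trace(P_upd), P_upd
        chosen.append(best_i)
        P = best_P
    return chosen, P
```

With one sensor to place and one state far more uncertain than the other, the greedy step measures the uncertain state first.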
ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
ZhuZhongyi; WeiBocheng
1999-01-01
In this paper, the estimation method based on the "generalized profile likelihood" for the conditionally parametric models in the paper by Severini and Wong (1992) is extended to fixed-design semiparametric nonlinear regression models. For these semiparametric nonlinear regression models, the resulting estimator of the parametric component of the model is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example Chen (1988), Gao & Zhao (1993), Rice (1986) et al.) are extended to fixed-design semiparametric nonlinear regression models.
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous-time regression model with dependent disturbances given by a general square-integrable semimartingale with unknown distribution. An example of such a noise is a non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes-type markets with jumps. An adaptive model selection procedure, based on the weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived and the robust efficiency of the model selection procedure is shown.
On a robust and efficient maximum depth estimator
Institute of Scientific and Technical Information of China (English)
ZUO YiJun; LAI ShaoYong
2009-01-01
The best breakdown point robustness is one of the most outstanding features of the univariate median. For this robustness property, the median, however, has to pay the price of a low efficiency at normal and other light-tailed models. Affine equivariant multivariate analogues of the univariate median with high breakdown points were constructed in the past two decades. For the high breakdown robustness, nevertheless, most of them also have to sacrifice their efficiency at normal and other models. The affine equivariant maximum depth estimator proposed and studied in this paper turns out to be an exception. Like the univariate median, it also possesses the highest breakdown point among all its multivariate competitors. Unlike the univariate median, it is also highly efficient relative to the sample mean at normal and various other distributions, overcoming the vital low-efficiency shortcoming of the univariate and other multivariate generalized medians. The paper also studies the asymptotics of the estimator and establishes its limit distribution without symmetry and other strong assumptions that are typically imposed on the underlying distribution.
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n³) operations and O(n²) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-01-01
Full Text Available The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA estimation accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, and as a result its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM can greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it can avoid the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy's Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim's calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory's website (see www.nrel.gov/fastsim).
Simplified procedure for the estimation of Rankine power cycle efficiency
Energy Technology Data Exchange (ETDEWEB)
Patwardhan, V.R.; Devotta, S.; Patwardhan, V.S. (National Chemical Lab., Poona (India))
1989-01-01
A simplified procedure for estimating the Rankine power cycle efficiency η_R is presented. This procedure does not need any detailed thermodynamic data but requires only the liquid specific heat and the latent heat of vaporization at the boiler temperature. The procedure is tested for its application to eight potential Rankine power cycle working fluids for which exact η_R values have been reported based on detailed thermodynamic data. A fairly wide range of condensing and boiling temperatures is covered. The results indicate that the present procedure can predict η_R values within ±1%. (author).
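One plausible reconstruction of such a procedure, not necessarily the authors' exact correlation, estimates the heat rejected as the condenser temperature times the entropy gained during liquid heating and boiling, neglecting pump work and superheat; the numbers in the test are rough water-like values.

```python
from math import log

def rankine_efficiency(cp, latent, t_boil, t_cond):
    """Approximate Rankine cycle efficiency from the liquid specific
    heat cp (kJ/kg.K) and latent heat (kJ/kg), temperatures in kelvin.
    Heat in: sensible heating plus boiling. Heat out: t_cond times the
    entropy gained during heating and boiling."""
    q_in = cp * (t_boil - t_cond) + latent
    ds = cp * log(t_boil / t_cond) + latent / t_boil
    return 1.0 - t_cond * ds / q_in
```

The estimate is always below the Carnot efficiency between the same two temperatures, because part of the heat is added below the boiler temperature.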
Efficient Timing and Frequency Offset Estimation Scheme for OFDM Systems
Institute of Scientific and Technical Information of China (English)
GUO Yi; GE Jianhua; LIU Gang; ZHANG Wujun
2009-01-01
A new training symbol weighted by a pseudo-noise (PN) sequence is designed, and an efficient timing and frequency offset estimation scheme for orthogonal frequency division multiplexing (OFDM) systems is proposed. The timing synchronization is accomplished by using the piecewise symmetric conjugate of the primitive training symbol and the good autocorrelation of the PN weighting factor. The frequency synchronization is performed by utilizing the training symbol whose PN weighting factor is removed after the timing synchronization. Compared with conventional schemes, the proposed scheme can achieve a smaller mean square error and provide a wider frequency acquisition range.
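The general idea of timing metrics built on a repeated training symbol can be sketched with the classical normalized two-half correlation (Schmidl-and-Cox style); the PN weighting and piecewise symmetric conjugate of the proposed scheme are not reproduced here, and the signal below is synthetic.

```python
import numpy as np

def timing_metric(r, L):
    """Normalized correlation of the two length-L halves of a candidate
    training symbol at every trial offset; close to 1 where a repeated
    training symbol starts, small over noise-only regions."""
    M = []
    for d in range(len(r) - 2 * L):
        a = r[d:d + L]
        b = r[d + L:d + 2 * L]
        p = np.vdot(a, b)                       # sum of conj(a) * b
        ea = np.sum(np.abs(a) ** 2)
        eb = np.sum(np.abs(b) ** 2)
        M.append(abs(p) ** 2 / (ea * eb + 1e-12))
    return np.array(M)
```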
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
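The discounting step at the heart of the LCC analysis is a present-value sum; the sketch below uses invented figures and a single constant rate, whereas the report derives distributions of commercial discount rates.

```python
def present_value(annual_saving, rate, years):
    """Present value of a constant stream of annual energy-cost savings
    discounted at a fixed commercial discount rate."""
    return sum(annual_saving / (1.0 + rate) ** t
               for t in range(1, years + 1))
```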
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Directory of Open Access Journals (Sweden)
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, and its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
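The correlator stage of such an estimator can be sketched with a Zadoff-Chu training sequence and a sliding cross-correlation; the FPGA implementation and the PLC channel model are out of scope, and all parameters below are illustrative.

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N and root u: constant
    amplitude with ideal periodic autocorrelation."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def estimate_symbol_start(r, zc):
    """Offset of the training sequence in r, taken as the magnitude
    peak of the sliding cross-correlation (np.correlate conjugates
    its second argument)."""
    c = np.abs(np.correlate(r, zc, mode="valid"))
    return int(np.argmax(c))
```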
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
Efficient Spectral Power Estimation on an Arbitrary Frequency Scale
Directory of Open Access Journals (Sweden)
F. Zaplata
2015-04-01
Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not be suitable for every system. For example, audio and speech analysis calls for a logarithmic frequency scale due to the characteristics of the human ear. Fast Fourier Transform algorithms cannot efficiently give the desired results in this case, and modified techniques have to be used. In the following text, a simple technique using the Goertzel algorithm that allows the evaluation of power spectra on an arbitrary frequency scale will be introduced. Due to its simplicity the algorithm suffers from imperfections, which will be discussed and partially resolved in this paper. Implementation in real systems showed that the impact of quantization errors is critical and has to be dealt with in special cases. A simple method for dealing with the quantization error will also be introduced. Finally, the proposed method is compared to other methods in terms of its computational demands and potential speed.
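The Goertzel recursion evaluates the spectrum at one arbitrary frequency per pass, which is what makes a logarithmic grid possible. A minimal sketch (tone, grid, and normalisation chosen here for illustration):

```python
import numpy as np

def goertzel_power(x, f, fs):
    """Signal power of x at a single (arbitrary) frequency f, sample rate fs."""
    n = len(x)
    w = 2.0 * np.pi * f / fs
    coeff = 2.0 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:                      # second-order IIR recursion
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    # Squared spectral magnitude at f, normalised like a power spectrum bin.
    return (s1 * s1 + s2 * s2 - coeff * s1 * s2) / n**2

fs = 8000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 440.0 * t)         # 440 Hz tone

# Evaluate on a logarithmic frequency grid -- something an FFT cannot do directly.
freqs = np.logspace(np.log10(100), np.log10(2000), 25)
powers = [goertzel_power(x, f, fs) for f in freqs]
print(freqs[int(np.argmax(powers))])      # peak lands near the 440 Hz tone
```

Each frequency costs one O(N) pass, so evaluating M arbitrary frequencies costs O(MN), which beats an FFT whenever only a modest number of non-uniformly spaced points is needed.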
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
Mossel, Elchanan
2010-01-01
We propose a simple and efficient Bayesian model of iterative learning on social networks. This model is efficient in two senses: the process both results in an optimal belief, and can be carried out with modest computational resources for large networks. This result extends Condorcet's Jury Theorem to general social networks, while preserving rationality and computational feasibility. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement of S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard dev...
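A toy DeGroot-style averaging simulation in the spirit of the model: agents on a connected network repeatedly exchange Gaussian estimates of S and average them. This is a simplification of the paper's Bayesian update, and the graph construction (a ring plus random chords, to guarantee connectivity) is invented.

```python
import numpy as np

rng = np.random.default_rng(42)
S = 3.0                                   # unknown state of the world
n = 200

# Connected neighbor graph: a ring plus random chords (illustrative choice).
neighbors = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
for _ in range(n):
    a, b = rng.integers(0, n, size=2)
    if a != b:
        neighbors[a].add(b)
        neighbors[b].add(a)

x = S + rng.standard_normal(n)            # private measurements S_v ~ N(S, 1)
est = x.copy()
for _ in range(1000):                     # repeated local averaging
    est = np.array([np.mean([est[v]] + [est[u] for u in neighbors[v]])
                    for v in range(n)])

print(est.std(), abs(est.mean() - S))     # near-consensus, close to S
```

In the paper's Gaussian setting the fully Bayesian update also reduces to iterated linear combinations of neighbors' estimates, which is why such averaging dynamics are computationally feasible even on large networks.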
ESTIMATION OF THE EFFICIENCY OF PARTNERSHIP BETWEEN LARGE AND SMALL BUSINESS
Directory of Open Access Journals (Sweden)
Олег Васильевич Чабанюк
2014-05-01
Full Text Available In this article, based on the definition of key factors and their components, an algorithm of consistent, logically connected stages is developed for the transition from a traditional enterprise to an innovation-type enterprise built on intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: based on expert determination of the importance of the model parameters that ensure the effectiveness of intrapreneurship, and using methods of qualimetric modeling of expert estimates, an "intrapreneurship efficiency" score is calculated. According to the author's projections, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically achievable only by the second or third year of existence of the intrapreneurial structure. The proposed method was tested in practice and can be used for the formation of intrapreneurship in large and medium-sized enterprises as one of the methods of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
Efficient mental workload estimation using task-independent EEG features
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. Approach. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man’s vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Main results. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposes a methodological approach and methods of evaluating the economic efficiency of PPP projects. The author's technique is based upon the synthesis of approaches to evaluation of projects implemented in the private and public sectors and, in contrast to the existing methods, allows taking into account the indirect (multiplicative) effect arising during the implementation of a project. In the article, to estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed. The matrix is based on the data of the Sverdlovsk region for 2013. In the article, the genesis of the balance models of economic systems is presented. The evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now is traced. It is shown that SAM is widely used in the world for a wide range of applications, primarily to assess the impact of various exogenous factors on the regional economy. In order to refine the estimates of multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to take into account the particular characteristics of the industry of the estimated investment project. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
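Multiplier effects from a SAM are computed in the same way as Leontief input-output multipliers: total effects x = (I - A)^(-1) f for an exogenous injection f, where A is the matrix of expenditure coefficients. A toy sketch with an invented 3-account matrix (not the Sverdlovsk data):

```python
import numpy as np

# Toy 3-account coefficient matrix A (all numbers invented for illustration):
# A[i, j] = payment from account j to account i per unit of j's expenditure.
A = np.array([[0.10, 0.25, 0.05],
              [0.20, 0.05, 0.30],
              [0.15, 0.10, 0.05]])

# Multiplier matrix M = (I - A)^{-1}: column j gives the total (direct plus
# induced) expansion of every account per unit of exogenous injection into j.
M = np.linalg.inv(np.eye(3) - A)

injection = np.array([100.0, 0.0, 0.0])      # e.g. a PPP investment in account 0
total_effect = M @ injection
print(total_effect.sum() / injection.sum())  # aggregate multiplier exceeds 1
```

The aggregate multiplier exceeding 1 is exactly the "indirect effect" the article argues should be counted when ranking PPP project variants.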
Efficient estimation of energy transfer efficiency in light-harvesting complexes.
Shabani, A; Mohseni, M; Rabitz, H; Lloyd, S
2012-07-01
The fundamental physical mechanisms of energy transfer in photosynthetic complexes are not yet fully understood. In particular, the degree of efficiency or sensitivity of these systems for energy transfer is not known given their realistic interactions with surrounding photonic and phononic environments. One major problem in studying light-harvesting complexes has been the lack of an efficient method for simulation of their dynamics in biological environments. To this end, here we revisit the second-order time-convolution (TC2) master equation and examine its reliability beyond extreme Markovian and perturbative limits. In particular, we present a derivation of TC2 without making the usual weak system-bath coupling assumption. Using this equation, we explore the long-time behavior of exciton dynamics of the Fenna-Matthews-Olson (FMO) protein complex. Moreover, we introduce a constructive error analysis to estimate the accuracy of the TC2 equation in calculating energy transfer efficiency, exhibiting reliable performance for system-bath interactions with weak and intermediate memory and strength. Furthermore, we numerically show that energy transfer efficiency is optimal and robust for the FMO protein complex of green sulfur bacteria with respect to variations in reorganization energy and bath correlation time scales.
An approach to estimate radioadaptation from DSB repair efficiency.
Yatagai, Fumio; Sugasawa, Kaoru; Enomoto, Shuichi; Honma, Masamitsu
2009-09-01
In this review, we would like to introduce a unique approach for the estimation of radioadaptation. Recently, we proposed a new methodology for evaluating the repair efficiency of DNA double-strand breaks (DSB) using a model system. The model system can trace the fate of a single DSB, which is introduced within intron 4 of the TK gene on chromosome 17 in human lymphoblastoid TK6 cells by the expression of restriction enzyme I-SceI. This methodology was first applied to examine whether repair of the DSB (at the I-SceI site) can be influenced by low-dose, low-dose-rate gamma-ray irradiation. We found that such low-dose IR exposure could enhance the activity of DSB repair through homologous recombination (HR). HR activity was also enhanced by pre-irradiation under the established conditions for radioadaptation (50 mGy X-ray-6 h-I-SceI treatment). Therefore, radioadaptation might account for the reduced frequency of homozygous loss of heterozygosity (LOH) events observed in our previous experiment (50 mGy X-ray-6 h-2 Gy X-ray). We suggest that the present evaluation of DSB repair using this I-SceI system may contribute to our overall understanding of radioadaptation.
Efficient estimation of analytic density under random censorship
Belitser, E.
2001-01-01
The nonparametric minimax estimation of an analytic density at a given point, under random censorship, is considered. Although the problem of estimating density is known to be irregular in a certain sense, we make some connections relating this problem to the problem of estimating smooth functionals
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
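A minimal sketch of why variance-estimator choice matters for nonreplicated systematic samples. The run curve, noise level, and sampling interval below are invented; the successive-difference estimator is one of the standard candidates for such designs, compared here against the naive simple-random-sampling (SRS) formula. Because a systematic sample with interval k has only k possible realizations, the true design variance can be computed exactly by enumeration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 960, 8                       # hourly counts over a 40-day run, sample every 8 h
t = np.arange(N)
# Smooth seasonal run curve plus noise (synthetic "escapement" counts).
pop = 500 * np.exp(-0.5 * ((t - 480) / 150) ** 2) + rng.normal(0, 20, N)

samples = [pop[s::k] for s in range(k)]           # the k possible systematic samples
totals = np.array([k * y.sum() for y in samples])
design_var = totals.var()                         # exact design variance of T-hat

def v_srs(y):                                     # naive SRS variance estimator
    n = len(y)
    return N**2 * (1 - n / N) * y.var(ddof=1) / n

def v_sd(y):                                      # successive-difference estimator
    n = len(y)
    sd2 = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
    return N**2 * (1 - n / N) * sd2 / n

srs_bias = np.mean([v_srs(y) for y in samples]) - design_var
sd_bias = np.mean([v_sd(y) for y in samples]) - design_var
print(srs_bias, sd_bias)   # SRS grossly overestimates; successive differences come close
```

The SRS formula charges the smooth seasonal trend to sampling error, while successive differences mostly cancel the trend and retain only local variation, which is why estimator choice changes the uncertainty managers see.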
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore...
Estimation of the Asian telecommunication technical efficiencies with panel data
Institute of Scientific and Technical Information of China (English)
YANG Yu-yong; JIA Huai-jing
2007-01-01
This article used panel data and a stochastic frontier analysis (SFA) model to analyze and compare the technical efficiencies of the telecommunication industry in 28 Asian countries from 1994 to 2003. In conclusion, the technical efficiencies of the Asian countries were found to have steadily increased over the past decade. The high-income countries have the highest technical efficiency; however, income is not the only factor that affects technical efficiency.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Institute of Scientific and Technical Information of China (English)
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions of functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaging method. In the second step, based on the initial estimates, efficient estimates of the coefficient functions are obtained by a one-step back-fitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.
Efficient collaborative sparse channel estimation in massive MIMO
Masood, Mudassir
2015-08-12
We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
Control grid motion estimation for efficient application of optical flow
Zwart, Christine M
2012-01-01
Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable compu
Efficient estimation for high similarities using odd sketches
DEFF Research Database (Denmark)
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
Estimating set similarity is a central problem in many computer applications. In this paper we introduce the Odd Sketch, a compact binary sketch for estimating the Jaccard similarity of two sets. The exclusive-or of two sketches equals the sketch of the symmetric difference of the two sets. This ...
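The abstract's key property, that the exclusive-or of two sketches equals the sketch of the symmetric difference, can be illustrated with a stripped-down parity sketch. This is a simplification: the actual Odd Sketch is built on top of minwise hashing, whereas here we sketch the raw sets directly, estimate the symmetric-difference size, and recover Jaccard similarity from the (known) set sizes.

```python
import hashlib
import math

def odd_sketch(items, m):
    """Parity sketch: bit h(x) is flipped once per item, so shared items cancel
    out in the XOR of two sketches."""
    bits = [0] * m
    for x in items:
        h = int(hashlib.blake2b(x.encode(), digest_size=8).hexdigest(), 16) % m
        bits[h] ^= 1
    return bits

def symm_diff_estimate(sa, sb):
    m = len(sa)
    w = sum(a ^ b for a, b in zip(sa, sb))     # XOR of sketches = sketch of A ^ B
    return -0.5 * m * math.log(1 - 2 * w / m)  # invert E[w/m] = (1 - exp(-2d/m)) / 2

A = {f"item{i}" for i in range(300)}
B = {f"item{i}" for i in range(100, 400)}      # true |A ^ B| = 200, Jaccard = 0.5

m = 2048
d = symm_diff_estimate(odd_sketch(A, m), odd_sketch(B, m))
jaccard = (len(A) + len(B) - d) / (len(A) + len(B) + d)
print(round(d), round(jaccard, 2))
```

The inversion formula follows because each of the d items of the symmetric difference independently flips a uniformly random bit, so the expected fraction of set bits is (1 - (1 - 2/m)^d)/2, approximately (1 - e^(-2d/m))/2.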
Efficient and Accurate Robustness Estimation for Large Complex Networks
Wandelt, Sebastian
2016-01-01
Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
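A back-of-envelope version of metric-induced robustness estimation (not the paper's sub-quadratic algorithm): simulate a highest-degree-first attack and average the giant-component fraction over all removals, a quantity often called the R measure (Schneider et al.). The graph and parameters are invented, and the attack order is computed once rather than adaptively.

```python
import random
from collections import defaultdict

def largest_component(nodes, adj):
    """Size of the largest connected component among the surviving nodes."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            comp += 1
            for u in adj[v]:
                if u in nodes and u not in seen:
                    seen.add(u)
                    stack.append(u)
        best = max(best, comp)
    return best

def robustness(n, adj):
    """R = mean giant-component fraction during a highest-degree-first attack."""
    nodes = set(range(n))
    order = sorted(nodes, key=lambda v: -len(adj[v]))   # static degree order
    total = 0.0
    for v in order:
        nodes.discard(v)
        if nodes:
            total += largest_component(nodes, adj) / n
    return total / n

# Erdos-Renyi test graph (an illustrative stand-in for a real-world network).
random.seed(7)
n, p = 120, 0.05
adj = defaultdict(set)
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

print(round(robustness(n, adj), 3))
```

This naive simulation costs O(n·m) per attack; the point of the paper is to approximate such metric-induced estimates much faster.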
Efficient Estimation of Mutual Information for Strongly Dependent Variables
Gao, Shuyang; Galstyan, Aram
2014-01-01
We demonstrate that a popular class of nonparametric mutual information (MI) estimators based on k-nearest-neighbor graphs requires a number of samples that scales exponentially with the true MI. Consequently, accurate estimation of MI between two strongly dependent variables is possible only for prohibitively large sample sizes. This important yet overlooked shortcoming of the existing estimators is due to their implicit reliance on local uniformity of the underlying joint distribution. We introduce a new estimator that is robust to local non-uniformity, works well with limited data, and is able to capture relationship strengths over many orders of magnitude. We demonstrate the superior performance of the proposed estimator on both synthetic and real-world data.
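A brute-force sketch of the kNN-graph estimator family the abstract critiques (the Kraskov-Stögbauer-Grassberger estimator, variant 1), with a hand-rolled digamma so that only NumPy is needed. The sample size, k, and the bivariate Gaussian test case are invented; at moderate dependence the estimator works well, which is consistent with the abstract's claim that it fails only as the true MI grows large relative to the sample budget.

```python
import math
import numpy as np

def digamma(x):
    """Digamma via recurrence plus asymptotic series (adequate accuracy here)."""
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1 / 12 - f * (1 / 120 - f / 252))

def ksg_mi(x, y, k=4):
    """KSG estimator #1, O(N^2) brute force, max-norm in the joint space."""
    n = len(x)
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    dj = np.maximum(dx, dy)
    np.fill_diagonal(dj, np.inf)
    eps = np.sort(dj, axis=1)[:, k - 1]        # distance to k-th nearest neighbour
    nx = (dx < eps[:, None]).sum(axis=1) - 1   # marginal neighbours (minus self)
    ny = (dy < eps[:, None]).sum(axis=1) - 1
    return (digamma(k) + digamma(n)
            - np.mean([digamma(v + 1) for v in nx])
            - np.mean([digamma(v + 1) for v in ny]))

rng = np.random.default_rng(3)
n, rho = 800, 0.9
x = rng.standard_normal(n)
y = rho * x + math.sqrt(1 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * math.log(1 - rho**2)          # analytic MI for bivariate Gaussian
print(ksg_mi(x, y), true_mi)
```

Pushing rho toward 1 drives the true MI up and is where, per the abstract, the required sample size blows up.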
A new approach for estimating the efficiencies of the nucleotide substitution models.
Som, Anup
2007-04-01
In this article, a new approach is presented for estimating the efficiencies of the nucleotide substitution models in a four-taxon case and then this approach is used to estimate the relative efficiencies of six substitution models under a wide variety of conditions. In this approach, efficiencies of the models are estimated by using a simple probability distribution theory. To assess the accuracy of the new approach, efficiencies of the models are also estimated by using the direct estimation method. Simulation results from the direct estimation method confirmed that the new approach is highly accurate. The success of the new approach opens a unique opportunity to develop analytical methods for estimating the relative efficiencies of the substitution models in a straightforward way.
Technical Efficiency of Australian Wool Production: Point and Confidence Interval Estimates
2002-01-01
A balanced panel of data is used to estimate technical efficiency, employing a fixed-effects stochastic frontier specification for wool producers in Australia. Both point estimates and confidence intervals for technical efficiency are reported. The confidence intervals are constructed using the Multiple Comparisons with the Best (MCB) procedure of Horrace and Schmidt (2000). The confidence intervals make explicit the precision of the technical efficiency estimates and underscore the dangers o...
Transverse correlation: An efficient transverse flow estimator - initial results
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Henze, Lasse; Kortbek, Jacob
2008-01-01
of vascular hemodynamics, the flow angle cannot easily be found as the angle is temporally and spatially variant. Additionally the precision of traditional methods is severely lowered for high flow angles, and they breakdown for a purely transverse flow. To overcome these problems we propose a new method...... for estimating the transverse velocity component. The method measures the transverse velocity component by estimating the transit time of the blood between two parallel lines beamformed in receive. The method has been investigated using simulations performed with Field II. Using 15 emissions per estimate...
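The core idea, estimating the transit time of blood between two parallel beamformed lines, can be sketched as a cross-correlation lag estimate on synthetic 1-D signals. All parameters (sampling rate, delay, noise level, speckle model) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 10_000.0                         # sampling rate of the beamformed lines, Hz
true_delay = 37                       # samples; transit time between the two lines

# Speckle-like signal seen at line 1, and a delayed, noisy copy at line 2.
s = np.convolve(rng.standard_normal(4000), np.ones(8) / 8, mode="same")
line1 = s[:3000]
line2 = np.roll(s, true_delay)[:3000] + 0.2 * rng.standard_normal(3000)

xc = np.correlate(line2, line1, mode="full")     # peak index gives the lag
lag = int(np.argmax(xc)) - (len(line1) - 1)
print(lag)
# transverse velocity = line_spacing * fs / lag, given the known line spacing
```

Because the lag is found along the transverse direction, the estimate does not degrade as the beam-to-flow angle approaches 90 degrees, which is the failure mode of conventional axial Doppler estimators.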
Indexes of estimation of efficiency of the use of intellectual resources of industrial enterprises
Directory of Open Access Journals (Sweden)
Audzeichyk Olga
2015-12-01
Full Text Available The article investigates the theoretical and practical aspects of the estimation of the intellectual resources of industrial enterprises and proposes a method for estimating the efficiency of their use.
System of Indicators in Social and Economic Estimation of the Regional Energy Efficiency
Directory of Open Access Journals (Sweden)
Ivan P. Danilov
2012-10-01
Full Text Available The article offers a social and economic interpretation of energy efficiency and models a system of indicators for estimating the regional social and economic efficiency of energy resource use.
Thermodynamics estimation of copper plasma efficiency from secondary raw material
Directory of Open Access Journals (Sweden)
Віктор Сергійович Козьмін
2014-09-01
Full Text Available The results of a thermodynamic evaluation of the efficiency of oxidative plasma refining of copper recycled from secondary raw material, with respect to the impurities present in the feedstock, are presented. It was established that, depending on the type of impurity, the factor by which plasma refining increases efficiency, as estimated from the change of the Gibbs potential, varies from 1.4 to 4.8; for silver and gold there is a transition from an unlikely to a real positive state.
Determination of feed efficiency requires estimates of intake and digestibility of the diet, but they are difficult to measure on pasture. The objective of this research was to determine if plants cuticular alkanes were suitable as markers to estimate intake and diet digestibility of grazing cows wi...
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
It has been suggested that the level corresponding to the knee-point of the basilar membrane (BM) input/output (I/O) function can be used to estimate the amount of inner- and outer hair-cell loss (IHL, OHL) in listeners with a moderate cochlear hearing impairment Plack et al. (2004). According...... to Jepsen and Dau (2011) IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...... estimates of the knee-point level. Further, it is explored whether it is possible to estimate the compression ratio using only on-frequency TMCs. 10 normal-hearing and 10 hearing-impaired listeners (with mild-to-moderate sensorineural hearing loss) were tested at 1, 2 and 4 kHz. The results showed...
Energy-Efficient Channel Estimation in MIMO Systems
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection, as well as to perform optimizations such as water-filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes transmission of training symbols, storage of those symbols at the receiver, and also channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
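A minimal least-squares sketch of the training phase discussed above. The orthogonal (DFT-row) training matrix and noise level are invented; LS estimation with orthogonal training is a standard baseline, not the paper's energy-minimizing scheme.

```python
import numpy as np

rng = np.random.default_rng(11)
nt, nr, L = 4, 4, 16            # transmit antennas, receive antennas, training length

# Random Rayleigh-fading channel matrix (nr x nt).
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Training matrix X (nt x L) with orthogonal rows, so X @ X^H = L * I; this
# choice minimises LS estimation error for a fixed training energy.
X = np.exp(2j * np.pi * np.outer(np.arange(nt), np.arange(L)) / L)
noise = 0.05 * (rng.standard_normal((nr, L)) + 1j * rng.standard_normal((nr, L)))
Y = H @ X + noise               # received training block

# Least-squares channel estimate: H_hat = Y X^H (X X^H)^{-1}
H_hat = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
print(np.linalg.norm(H_hat - H) / np.linalg.norm(H))   # small relative error
```

Every extra training symbol improves the estimate but costs transmit energy and receiver storage, which is exactly the trade-off the paper's energy model captures.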
Efficient estimation of burst-mode LDA power spectra
DEFF Research Database (Denmark)
Velte, Clara Marika; George, William K
2010-01-01
requirements for good statistical convergence due to the random sampling of the data. In the present work, the theory for estimating burst-mode LDA spectra using residence time weighting is discussed and a practical estimator is derived and applied. A brief discussion on the self-noise in spectra...... (axisymmetric turbulent jet). The burst-mode LDA spectra are compared to corresponding spectra from hot-wire data obtained in the same experiments, and to LDA spectra produced by the sample-and-hold methodology. The spectra computed from the residence-time weighted burst-mode algorithm proposed herein compare...
Efficient probabilistic planar robot motion estimation given pairs of images
Booij, O.; Kröse, B.; Zivkovic, Z.
2010-01-01
Estimating the relative pose between two camera positions given image point correspondences is a vital task in most view based SLAM and robot navigation approaches. In order to improve the robustness to noise and false point correspondences it is common to incorporate the constraint that the robot m
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses
Directory of Open Access Journals (Sweden)
David Svec
2015-03-01
Full Text Available We have examined the imprecision in the estimation of PCR efficiency by means of standard curves, based on a strategic experimental design with a large number of technical replicates. In particular, we examined how robust this estimation is with respect to commonly varying factors: the instrument used, the number of technical replicates performed, and the volume transferred throughout the dilution series. We used six different qPCR instruments, we performed 1–16 qPCR replicates per concentration, and we tested 2–10 μl of analyte transferred, respectively. We find that the estimated PCR efficiency varies significantly across different instruments. Using a Monte Carlo approach, we find the uncertainty in the PCR efficiency estimation may be as large as 42.5% (95% CI) if a standard curve with only one qPCR replicate is used in 16 different plates. Based on our investigation we propose recommendations for the precise estimation of PCR efficiency: (1) one robust standard curve with at least 3–4 qPCR replicates at each concentration shall be generated, (2) the efficiency is instrument dependent, but reproducibly stable on one platform, and (3) using a larger volume when constructing the serial dilution series reduces sampling error and enables calibration across a wider dynamic range.
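The PCR efficiency behind a standard curve is obtained from the slope of Cq versus log10 template amount, with E = 10^(-1/slope) - 1 (E = 1 means perfect doubling each cycle). A sketch with invented Cq values:

```python
import numpy as np

# Ten-fold dilution series: log10 of relative template amount vs measured Cq.
# Cq values below are invented for illustration.
log10_amount = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
cq = np.array([15.1, 18.5, 21.8, 25.2, 28.6])

slope, intercept = np.polyfit(log10_amount, cq, 1)
efficiency = 10 ** (-1 / slope) - 1        # E = 1 corresponds to slope = -3.32
print(round(slope, 2), round(efficiency, 3))
```

Replicating each dilution point and pooling the fits, as the paper recommends, narrows the confidence interval on the slope and hence on E.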
Efficient, Non-Iterative Estimator for Imaging Contrast Agents With Spectral X-Ray Detectors.
Alvarez, Robert E
2016-04-01
An estimator to image contrast agents and body materials with x-ray spectral measurements is described. The estimator is usable with the three or more basis functions that are required to represent the attenuation coefficient of high atomic number materials. The estimator variance is equal to the Cramér-Rao lower bound (CRLB) and it is unbiased. Its parameters are computed from measurements of a calibration phantom with the clinical x-ray system and it is non-iterative. The estimator is compared with an iterative maximum likelihood estimator. The estimator first computes a linearized maximum likelihood estimate of the line integrals of the basis set coefficients. Corrections for errors in the initial estimates are computed by interpolation with calibration phantom data. The final estimate is the initial estimate plus the correction. The performance of the estimator is measured using a Monte Carlo simulation. Random photon-counting data with pulse-height analysis are generated. The mean squared errors of the estimates are compared to the CRLB. The random data are also processed with an iterative maximum likelihood estimator. Previous implementations of iterative estimators required advanced physics instruments not usually available in clinical institutions. The estimator mean squared error is essentially equal to the CRLB. The estimator outputs are close to those of the iterative estimator but the computation time is approximately 180 times shorter. The estimator is efficient and has advantages over alternate approaches such as iterative estimators.
Estimation of Nitrogen Fertilizer Use Efficiency in Dryland Agroecosystem
Institute of Scientific and Technical Information of China (English)
LI Shi-qing; LI Sheng-xiu
2001-01-01
A field trial was carried out to study nitrogen fertilizer recovery by four successive crops on a manured loess soil in Yangling. The results showed that the nitrogen fertilizer not only had a significant effect on the first crop, but also had longer residual effects, even on the fourth crop. The average apparent nitrogen fertilizer recovery by the first crop was 31.7%, while the cumulative nitrogen recovery by the four crops was as high as 62.3%, nearly double the former. It is quite clear that the nitrogen fertilizer recovery by the first crop alone is not a reliable estimate of nitrogen fertilizer use efficiency unless the residual effects of the nitrogen fertilizer are included.
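The apparent-recovery arithmetic can be made explicit with invented uptake values (chosen so the first crop matches the reported 31.7% figure; the control uptakes and N rate are hypothetical, not the trial's data):

```python
# Apparent nitrogen recovery: extra N uptake relative to an unfertilized
# control, divided by the N applied. All numbers below are illustrative only.
n_applied = 120.0                              # kg N/ha
uptake_fertilized = [68.0, 45.0, 39.0, 36.0]   # kg N/ha, crops 1-4
uptake_control = [30.0, 28.0, 26.0, 25.0]      # kg N/ha, unfertilized plots

recoveries = [(f - c) / n_applied * 100
              for f, c in zip(uptake_fertilized, uptake_control)]
cumulative = sum(recoveries)
print([round(r, 1) for r in recoveries], round(cumulative, 1))
```

Summing the per-crop recoveries is what roughly doubles the first-crop figure, mirroring the 31.7% versus 62.3% contrast in the abstract.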
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments; in this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Directory of Open Access Journals (Sweden)
Virginie F. Ruiz
2002-10-01
Full Text Available This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints, so the densities created are of exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
Ionization efficiency estimations for the SPES surface ion source
Manzolaro, M.; Andrighetto, A.; Meneghetti, G.; Rossignoli, M.; Corradetti, S.; Biasetto, L.; Scarpa, D.; Monetti, A.; Carturan, S.; Maggioni, G.
2013-12-01
Ion sources play a crucial role in ISOL (Isotope Separation On Line) facilities determining, with the target production system, the ion beam types available for experiments. In the framework of the SPES (Selective Production of Exotic Species) INFN (Istituto Nazionale di Fisica Nucleare) project, a preliminary study of the alkali metal isotopes ionization process was performed, by means of a surface ion source prototype. In particular, taking into consideration the specific SPES in-target isotope production, Cs and Rb ion beams were produced, using a dedicated test bench at LNL (Laboratori Nazionali di Legnaro). In this work the ionization efficiency test results for the SPES Ta surface ion source prototype are presented and discussed.
Efficient Quantile Estimation for Functional-Coefficient Partially Linear Regression Models
Institute of Scientific and Technical Information of China (English)
Zhangong ZHOU; Rong JIANG; Weimin QIAN
2011-01-01
Quantile estimation methods are proposed for the functional-coefficient partially linear regression (FCPLR) model, which combines the nonparametric regression model and the functional-coefficient regression (FCR) model. The local linear scheme and the integrated method are used to obtain local quantile estimators of all unknown functions in the FCPLR model. These resulting estimators are asymptotically normal, but each has a large variance. To reduce the variances of these quantile estimators, the one-step backfitting technique is used to obtain efficient quantile estimators of all unknown functions, and their asymptotic normality is derived. Two simulated examples are carried out to illustrate the proposed estimation methodology.
Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic Systems
Institute of Scientific and Technical Information of China (English)
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. Highly efficient Monte Carlo methods should therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system, and their efficiencies are compared with each other. The results show that weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.
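The gap between analog and weighted (importance-sampled) simulation for rare events can be seen in a minimal sketch. The exponential model, the biasing density, and all numbers are illustrative assumptions, not the paper's Markov state-transition model:

```python
import math, random

# Analog vs. weighted Monte Carlo for a rare event:
# p = P(X > t) with X ~ Exp(1) and t = 12, so p = exp(-12) ~ 6.1e-6.
t, N = 12.0, 20000
random.seed(1)

# Analog simulation: nearly every history misses the rare event.
analog = sum(1 for _ in range(N) if random.expovariate(1.0) > t) / N

# Weighted simulation: sample from a flatter Exp(lam) density and carry the
# likelihood-ratio weight w(x) = e^(-x) / (lam * e^(-lam * x)).
lam = 1.0 / t
acc = 0.0
for _ in range(N):
    x = random.expovariate(lam)
    if x > t:
        acc += math.exp(-x) / (lam * math.exp(-lam * x))
weighted = acc / N

print(analog, weighted, math.exp(-t))
```

Under the biased density roughly e⁻¹ of the histories hit the event, so the weighted estimator is accurate at a sample size where the analog estimator is almost always zero, which is exactly the variance-reduction effect reported for the weighted scheme above.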
The Ubiquitous Role of f’/f in Efficiency Robust Estimation of Location.
1980-08-01
coefficient between the estimators gave the efficiency of T2. Noether (1955), Hajek (1962), and van Eeden (1963) extended this result and showed in... Noether, G. E. (1955). "On a theorem of Pitman", Ann. Math. Statist. 26, 64-68. Rivest, L. P. (1978). On a Class of Estimators of the Location
Kashnikova, S N; Shcherbakov, P L; Kashnikov, V V; Tatarinov, P A; Shcherbakova, M Iu
2008-01-01
In this article, the authors present a pharmacoeconomic analysis of the efficiency of the eradication therapy schemes for disorders associated with H. pylori infection that are most common in Russia, based on their own experience. A comprehensive study of the different economic factors influencing the cost of the schemes used was carried out, and the resulting overall efficiency of the eradication therapy was estimated.
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Different from the existing research that assumes the availability of perfect channel state information (CSI) at the communication cooperative nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are considered, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) in order to let the destination estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and considering the energy consumption during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
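A toy version of such an EE metric, with the pilot energy charged to the denominator and the estimation-error variance added to the noise in the SINR, might look as follows. Every symbol and number here is an illustrative assumption, not the paper's model:

```python
import math

# Toy energy-efficiency (EE) metric for a link with imperfect CSI:
# delivered bits over consumed energy, where the pilot phase costs energy
# and the channel-estimation error acts as extra noise in the SINR.
def energy_efficiency(p_data, p_pilot, t_data, t_pilot,
                      bandwidth, gain, noise, est_error_var):
    sinr = p_data * gain / (noise + p_data * est_error_var)   # error as noise
    bits = t_data * bandwidth * math.log2(1.0 + sinr)         # delivered bits
    energy = p_data * t_data + p_pilot * t_pilot              # data + training
    return bits / energy                                      # bits per joule

ee = energy_efficiency(p_data=0.5, p_pilot=0.2, t_data=0.009, t_pilot=0.001,
                       bandwidth=1e6, gain=1e-3, noise=1e-6, est_error_var=1e-4)
print(round(ee))
```

Note the trade-off the paper's power allocation exploits: spending more pilot energy improves the channel estimate (lower `est_error_var`) but raises the energy denominator, so the EE-optimal allocation balances the two.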
Efficiency assessment of using satellite data for crop area estimation in Ukraine
Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga
2014-06-01
The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, for the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of a study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out in three oblasts of Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments for crop area estimation in Ukraine is assessed. The main criteria used for the efficiency analysis are: (i) relative efficiency, which shows by how many times the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by how many times the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, in order to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at the oblast level.
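The two criteria can be written as simple ratios. The formulas and numbers below are a plausible operationalization for illustration only; the study's exact definitions may differ:

```python
# The two efficiency criteria as ratios (all numbers illustrative).
def relative_efficiency(var_ground_only, var_with_satellite):
    """How many times the variance of the area estimate shrinks when
    satellite images are used as auxiliary information."""
    return var_ground_only / var_with_satellite

def cost_efficiency(re, image_cost, ground_cost_per_segment, n_segments):
    """Relative efficiency discounted by the extra image cost, expressed
    against the ground-survey budget it supplements."""
    total_ground = ground_cost_per_segment * n_segments
    return re * total_ground / (total_ground + image_cost)

re = relative_efficiency(4.0e6, 1.6e6)          # variance shrinks 2.5x
ce = cost_efficiency(re, image_cost=20000.0,
                     ground_cost_per_segment=400.0, n_segments=300)
print(re, round(ce, 2))
```

A sensor is cost-efficient when `ce` stays above 1: the variance gain must more than pay for the imagery, which is why the cheaper MODIS and Landsat-5/TM data passed the threshold while the pricier sensors did not.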
Computationally efficient DOD and DOA estimation for bistatic MIMO radar with propagator method
Zhang, Xiaofei; Wu, Hailang; Li, Jianfeng; Xu, Dazhuan
2012-09-01
In this article, we consider a computationally efficient direction of departure (DOD) and direction of arrival (DOA) estimation problem for a bistatic multiple-input multiple-output (MIMO) radar. The computational load of the propagator method (PM) can be significantly smaller, since the PM does not require any eigenvalue decomposition of the cross-correlation matrix or singular value decomposition of the received data. An improved PM algorithm is proposed to obtain automatically paired transmit and receive angle estimates in the MIMO radar. The proposed algorithm has angle estimation performance very close to that of the conventional PM, which has a much higher complexity than our algorithm, and, for high signal-to-noise ratio, very close to that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The variance of the estimation error and the Cramér-Rao bound of angle estimation are derived. Simulation results verify the usefulness of our algorithm.
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Directory of Open Access Journals (Sweden)
Archana V
2011-04-01
Full Text Available The co-efficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real-life dataset. The ratio estimator using information on the population C.V. of the auxiliary variable emerges as the best estimator.
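A minimal sketch of one ratio-type C.V. estimator, rescaling the sample C.V. of the study variable by the known population C.V. of an auxiliary variable; the paper's six estimators differ in detail, so this is an assumed form for illustration:

```python
import random, statistics as st

# Ratio-type estimator of the population C.V.: the sample C.V. of the study
# variable times (known population C.V. of the auxiliary variable / its
# sample C.V.). All data here are simulated.
random.seed(7)
N = 5000
aux = [random.gauss(50, 10) for _ in range(N)]        # auxiliary variable
y = [2.0 * a + random.gauss(0, 5) for a in aux]       # correlated study variable

cv = lambda v: st.pstdev(v) / st.mean(v)
CV_aux_pop = cv(aux)                                  # known at population level

idx = random.sample(range(N), 100)                    # SRS of n = 100
ys, auxs = [y[i] for i in idx], [aux[i] for i in idx]

naive = cv(ys)                                        # plain sample C.V.
ratio = cv(ys) * CV_aux_pop / cv(auxs)                # ratio-adjusted C.V.
print(round(naive, 4), round(ratio, 4))
```

When the two variables are strongly correlated, the factor `CV_aux_pop / cv(auxs)` partially cancels the sampling fluctuation of the sample C.V., which is the mechanism behind the gain reported for the ratio estimator.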
Institute of Scientific and Technical Information of China (English)
Akira Ogawa
1999-01-01
A cyclone dust collector is applied in many industries. The axial-flow cyclone in particular has the simplest construction and keeps high reliability in maintenance. On the other hand, the collection efficiency of a cyclone depends not only on the inlet gas velocity but also on the feed particle concentration; the collection efficiency increases with increasing feed particle concentration. However, until now the problem of how to estimate the dependence of the collection efficiency on the feed particle concentration has remained open, except for the investigation by Muschelknautz & Brunner [6]. Therefore, in this paper a method for estimating the collection efficiency of axial-flow cyclones is proposed. Its application to geometrically similar cyclones with body diameters D1 = 30, 50, 69 and 99 mm showed good agreement with the experimental collection efficiencies described in detail in the paper by Ogawa & Sugiyama [8].
Estimates of HVAC filtration efficiency for fine and ultrafine particles of outdoor origin
Azimi, Parham; Zhao, Dan; Stephens, Brent
2014-12-01
This work uses 194 outdoor particle size distributions (PSDs) from the literature to estimate single-pass heating, ventilating, and air-conditioning (HVAC) filter removal efficiencies for PM2.5 and ultrafine particles (UFPs: particles smaller than 100 nm) for HVAC filters identified in the literature. Filters included those with a minimum efficiency reporting value (MERV) of 5, 6, 7, 8, 10, 12, 14, and 16, as well as HEPA filters. We demonstrate that although the MERV metric defined in ASHRAE Standard 52.2 does not explicitly account for UFP or PM2.5 removal efficiency, estimates of filtration efficiency for both size fractions increased with increasing MERV. Our results also indicate that outdoor PSD characteristics and assumptions for particle density and typical size-resolved infiltration factors (in the absence of HVAC filtration) do not drastically impact estimates of HVAC filter removal efficiencies for PM2.5. The impact of these factors is greater for UFPs; however, it is also somewhat predictable. Despite these findings, our results suggest that MERV alone cannot always be used to predict UFP or PM2.5 removal efficiency, given the various size-resolved removal efficiencies of different makes and models, particularly for MERV 7 and MERV 12 filters. This information improves knowledge of how the MERV designation relates to PM2.5 and UFP removal efficiency for indoor particles of outdoor origin. Results can be used to simplify indoor air quality modeling efforts and inform standards and guidelines.
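A size-resolved single-pass efficiency can be collapsed to one number by weighting it with an outdoor PSD. The sketch below uses number-weighting and made-up bins; a true PM2.5 efficiency would weight by mass rather than count:

```python
# Collapse a size-resolved single-pass efficiency to one number by weighting
# with an outdoor particle size distribution (all numbers illustrative).
sizes_nm = [20, 50, 100, 300, 1000, 2500]           # bin midpoints
counts = [9000, 6000, 3000, 800, 120, 30]           # outdoor PSD, particles/cm^3
efficiency = [0.35, 0.20, 0.15, 0.25, 0.55, 0.80]   # per-bin removal efficiency

# Number-weighted efficiency for UFPs (< 100 nm) vs. the whole distribution.
ufp = [(c, e) for s, c, e in zip(sizes_nm, counts, efficiency) if s < 100]
ufp_eff = sum(c * e for c, e in ufp) / sum(c for c, _ in ufp)
all_eff = sum(c * e for c, e in zip(counts, efficiency)) / sum(counts)
print(round(ufp_eff, 3), round(all_eff, 3))
```

This also shows why the outdoor PSD matters more for UFPs: the count-heavy smallest bins dominate the UFP average, so shifting the assumed distribution shifts the weighted efficiency.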
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive model for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Kawaguchi, Hiroyuki; Tone, Kaoru; Tsutsui, Miki
2014-06-01
The purpose of this study was to perform an interim evaluation of the policy effect of the current reform of Japan's municipal hospitals. We focused on efficiency improvements both within hospitals as a whole and within two separate internal hospital organizations: the administration division, which carries out business management, and the medical-examination division, which provides medical care services. We employed a dynamic-network data envelopment analysis model (DN model) to perform the evaluation. The model makes it possible to estimate simultaneously both the efficiencies of the separate organizations and the dynamic changes in those efficiencies. This study is the first empirical application of the DN model in the healthcare field. Results showed that the average overall efficiency obtained with the DN model was 0.854 for 2007. The dynamic change in efficiency scores from 2007 to 2009 was slightly negative: the average efficiency score was 0.862 for 2007 and 0.860 for 2009. The average estimated efficiency of the administration division decreased from 0.867 for 2007 to 0.851 for 2009, while the average efficiency of the medical-examination division increased from 0.858 for 2007 to 0.870 for 2009. We were unable to find any significant improvement in efficiency despite the reform policy; thus, no positive policy effects were observed despite the increased financial support from the central government.
Institute of Scientific and Technical Information of China (English)
Tao Hu; Heng-jian Cui; Xing-wei Tong
2009-01-01
This article considers a semiparametric varying-coefficient partially linear regression model with current status data. The model, a generalization of both the partially linear regression model and the varying-coefficient regression model, allows one to explore the possibly nonlinear effect of a certain covariate on the response variable. A sieve maximum likelihood estimation method is proposed and the asymptotic properties of the proposed estimators are discussed. Under some mild conditions, the estimators are shown to be strongly consistent. The convergence rate of the estimator for the unknown smooth function is obtained, and the estimator for the unknown parameter is shown to be asymptotically efficient and normally distributed. Simulation studies are conducted to examine the small-sample properties of the proposed estimates, and a real dataset is used to illustrate our approach.
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Directory of Open Access Journals (Sweden)
Nabeel Hussain
2014-04-01
Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. The maximum likelihood estimation technique was used to estimate a Cobb-Douglas frontier production function. The analysis revealed that the estimated mean technical efficiency was 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was found to be 0.93, which shows that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, age having a positive and education a negative relationship with the output of garlic. This study suggests that, in order to increase the production of garlic by taking advantage of the farmers' high efficiency level, the government should invest in research and development for introducing good-quality seeds to increase garlic productivity, and should organize training programs to educate farmers about garlic production.
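The study fits a stochastic frontier by maximum likelihood; the simpler corrected-OLS (COLS) variant sketched below conveys the same idea of measuring each farm against an estimated frontier. Data, coefficients and the inefficiency term are all simulated assumptions:

```python
import math, random

# Corrected-OLS (COLS) sketch of frontier efficiency: fit a log-linear
# (Cobb-Douglas) production function by OLS, shift the intercept up to the
# largest residual so the frontier envelops the data, and read technical
# efficiency off the shifted frontier.
random.seed(3)
x = [random.uniform(1, 3) for _ in range(60)]             # log input per farm
u = [abs(random.gauss(0, 0.2)) for _ in range(60)]        # inefficiency >= 0
logy = [1.0 + 0.8 * xi - ui for xi, ui in zip(x, u)]      # log output

# OLS slope and intercept via the normal equations.
n = len(x)
sx, sy = sum(x), sum(logy)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, logy))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

resid = [yi - (a + b * xi) for xi, yi in zip(x, logy)]
shift = max(resid)                                        # frontier = a + shift
te = [math.exp(r - shift) for r in resid]                 # efficiency in (0, 1]
print(round(sum(te) / n, 3))                              # mean technical efficiency
```

The stochastic-frontier MLE used in the study additionally separates random noise from inefficiency (the gamma value above is the inefficiency share of total variance), which COLS cannot do.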
Energy Efficient Spectrum Sensing for State Estimation over A Wireless Channel
Cao, Xianghui; Zhou, Xiangwei; Cheng, Yu
2014-01-01
The performance of remote estimation over a wireless channel is strongly affected by sensor data losses due to interference. Although the impact of interference can be alleviated by performing spectrum sensing and then transmitting only when the channel is clear, the introduction of spectrum sensing also incurs extra energy expenditure. In this paper, we investigate the problem of energy-efficient spectrum sensing for state estimation of a general linear dynamic system, and formulate an optimiz...
Chen, Hua Yun
2009-12-01
Theory on semiparametric efficient estimation in missing data problems has been systematically developed by Robins and his coauthors. Except in relatively simple problems, semiparametric efficient scores cannot be expressed in closed form; instead, they are often expressed as solutions to integral equations. A Neumann series was proposed as a successive approximation to the efficient scores in those situations. Statistical properties of the estimator based on the Neumann series approximation are difficult to obtain and, as a result, have not been clearly studied. In this paper, we reformulate the successive approximation in a simple iterative form and study the statistical properties of the estimator based on the reformulation. We show that a doubly robust, locally efficient estimator can be obtained by following the algorithm in robustifying the likelihood score. The results can be applied to, among others, the parametric regression, the marginal regression, and the Cox regression when data are subject to missing values and the missing data are missing at random. A simulation study is conducted to evaluate the performance of the approach and a real data example is analyzed to demonstrate the use of the approach.
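The successive approximation behind a Neumann series can be shown in a finite-dimensional toy problem, solving x = b + Kx by iteration. The actual efficient-score equations are integral equations; this sketch only mirrors their structure:

```python
import numpy as np

# Neumann-series successive approximation for x = b + K x, i.e.
# x = (I - K)^{-1} b = (I + K + K^2 + ...) b, convergent when ||K|| < 1.
rng = np.random.default_rng(0)
K = rng.uniform(-0.1, 0.1, size=(5, 5))   # small entries keep ||K|| < 1
b = rng.standard_normal(5)

x = b.copy()
for _ in range(50):                        # x_{k+1} = b + K x_k
    x = b + K @ x

exact = np.linalg.solve(np.eye(5) - K, b)
print(np.max(np.abs(x - exact)))           # tiny residual
```

Each iteration adds one more term of the operator series, which is exactly the sense in which the reformulated algorithm "successively approximates" the efficient score.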
PROGRAM FOR ESTIMATION OF ECONOMIC EFFICIENCY OF INVESTMENTS IN CAR SERVICE COMPANIES
Directory of Open Access Journals (Sweden)
Nikolaev N. N.
2015-05-01
Full Text Available The article presents a computer program for estimating the economic efficiency of investments in car service companies. Estimating the economic efficiency of investments is a key problem at the development stage of a project for creating, modernizing or expanding production, including car service. Car service is one of the most intensively developing sectors of the Russian economy due to the increasing number of vehicles, and it therefore accounts for a significant share of the works and services that compose the gross domestic product. At the same time, investments in car service companies are risky, because a wrong forecast of economic efficiency can make the company unprofitable or lead to an unacceptable investment payback time. To reduce the risk of incorrect investing, we have designed a special computer program for estimating the economic efficiency of investments in car service companies. It can determine different efficiency indicators and take into consideration all the different payments and required investments. The results of the program are output as an informative table and a plot for the project actually being estimated. The program is implemented in Visual Basic for Applications within MS Excel, because this does not require an additional license.
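Typical efficiency indicators such a program reports, net present value and payback period, can be sketched as follows. The cash flows and discount rate are illustrative assumptions, not figures from the article:

```python
# Two standard investment-efficiency indicators (illustrative numbers).
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay (< 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None                       # never pays back

flows = [-100000, 30000, 35000, 40000, 45000]
print(round(npv(0.12, flows), 2), payback_period(flows))
```

A positive NPV at the chosen discount rate, together with a payback period inside the investor's horizon, is the usual acceptance test the article's risk discussion refers to.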
Directory of Open Access Journals (Sweden)
B. Bayram
2006-01-01
Full Text Available Data concerning body measurements, milk yield and body weights were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relations between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between EFE and body measurements, indicating that the taller, longer, deeper and especially heavier cows were not as efficient as smaller cows.
Directory of Open Access Journals (Sweden)
Sobchak Andrii
2016-02-01
Full Text Available The concept of hyperstability of a cybernetic system is considered as applied to the task of estimating the efficiency of virtual production enterprise functioning. The basic factors influencing the efficiency of functioning of such an enterprise are determined. The article offers a methodology for synthesizing the static structure of a decision-support system for managers of a virtual enterprise, in particular a procedure for determining the quantitative and qualitative composition of the equipment producible at a virtual enterprise.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Institute of Scientific and Technical Information of China (English)
CHUNG Warn-ill; CHOI Jun-ho; BAE Hae-young
2004-01-01
Many commercial database systems maintain histograms to summarize the contents of relations and permit efficient estimation of query result sizes and access plan costs. In spatial database systems, most spatial query predicates consist of topological relationships between spatial objects, and it is very important for the spatial query optimizer to estimate the selectivity of those predicates. In this paper, we propose a selectivity estimation scheme for spatial topological predicates based on a multidimensional histogram and a transformation scheme. The proposed scheme applies a two-partition strategy to the transformed object space to generate a spatial histogram, and estimates the selectivity of topological predicates based on the topological characteristics of the transformed space. The proposed scheme provides a way of estimating selectivity without excessive memory usage or additional I/Os in most spatial query optimizers.
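The basic histogram selectivity idea, before any spatial transformation, can be sketched with a one-dimensional equi-width histogram and the usual uniform-within-bucket assumption. The bucket data are illustrative:

```python
# Histogram selectivity for a range predicate: estimate the fraction of rows
# in [lo, hi) from bucket counts, assuming values are uniformly spread
# within each bucket. Buckets are (lower_bound, upper_bound, row_count).
buckets = [(0, 10, 120), (10, 20, 300), (20, 30, 80), (30, 40, 500)]
total = sum(c for _, _, c in buckets)

def selectivity(lo, hi):
    rows = 0.0
    for blo, bhi, c in buckets:
        overlap = max(0.0, min(hi, bhi) - max(lo, blo))
        rows += c * overlap / (bhi - blo)   # uniform-within-bucket assumption
    return rows / total

print(selectivity(15, 35))                  # estimated fraction in [15, 35)
```

The scheme in the paper generalizes this to a multidimensional histogram over a transformed object space, where bucket counts summarize topological relationships rather than scalar ranges.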
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first developing a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theory. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
Directory of Open Access Journals (Sweden)
Archana V
2014-05-01
Full Text Available The co-efficient of variation is a unitless measure of dispersion and is very frequently used in scientific investigations. This has motivated several researchers to propose estimators and tests concerning the co-efficient of variation of normal distribution(s). While proposing a class of estimators for the co-efficient of variation of a finite population, Tripathi et al. (2002) suggested that an estimator of the co-efficient of variation of a finite population can also be used as an estimator of the C.V. for any distribution when the sampling design is SRSWR. This has motivated us to propose 28 estimators of the finite population co-efficient of variation as estimators of the co-efficient of variation of one component of a bivariate normal distribution when prior information is available regarding the second component. A Cramér-Rao type lower bound is derived for the mean square error of these estimators. Extensive simulation is carried out to compare these estimators. The results indicate that out of these 28 estimators, eight have larger relative efficiency than the sample co-efficient of variation. The asymptotic mean square errors of the best estimators are derived, to the appropriate order, for the benefit of users of the co-efficient of variation.
Directory of Open Access Journals (Sweden)
Robertson Patrick
2010-01-01
Full Text Available Multipath is still one of the most critical problems in satellite navigation today, in particular in urban environments, where the received navigation signals can be affected by blockage, shadowing, and multipath reception. The latest multipath mitigation algorithms are based on the concept of sequential Bayesian estimation and improve receiver performance by exploiting the temporal constraints of the channel dynamics. In this paper, we specifically address the problem of estimating and adjusting the number of multipath replicas that is considered by the receiver algorithm. An efficient implementation via a two-fold marginalized Bayesian filter is presented, in which a particle filter, grid-based filters, and Kalman filters are suitably combined in order to mitigate the multipath channel by efficiently estimating its time-variant parameters in a track-before-detect fashion. Results based on an experimentally derived set of channel data corresponding to a typical urban propagation environment confirm the benefit of our novel approach.
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements, since it only requires routine survey data. We anticipate that the low-tech field requirements will
An efficient anti-occlusion depth estimation using generalized EPI representation in light field
Zhu, Hao; Wang, Qing
2016-10-01
Light field cameras have been developing rapidly and are likely to appear in mobile devices in the near future. It is therefore essential to develop efficient and robust depth estimation algorithms for mobile applications. However, existing methods are either slow or lack adaptability to occlusion, so they are not suitable for mobile computing platforms. In this paper, we present the generalized EPI representation of the light field and formulate it using two linear functions. By combining it with light field occlusion theory, a highly efficient and anti-occlusion depth estimation algorithm is proposed. Our algorithm outperforms previous local methods, especially in occlusion areas. Experimental results on public light field datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, as well as to guarantee estimation accuracy. Numerical examples on smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali
2016-11-08
Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data, which cannot be stored in memory, the high arrival speed, and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods in estimation accuracy and computing time on complex density structures in data streams.
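The incremental grid-plus-interpolation idea can be sketched in a few lines. Below is a hedged toy version (1-D, fixed grid, Gaussian kernel, fixed bandwidth); KDE-Track's adaptive bandwidth selection and its more refined update scheme are not reproduced here, and all parameter values are illustrative:

```python
import math
import random

class GridKDE:
    """Toy sketch: kernel density values kept at fixed grid points,
    updated incrementally per sample; queries use linear interpolation."""

    def __init__(self, lo, hi, n_points=64, bandwidth=0.3):
        self.xs = [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
        self.vals = [0.0] * n_points
        self.h = bandwidth
        self.n = 0

    def update(self, sample):
        # add one Gaussian kernel's contribution at every grid point;
        # the running mean keeps vals equal to the KDE over all samples
        self.n += 1
        for i, x in enumerate(self.xs):
            u = (x - sample) / self.h
            k = math.exp(-0.5 * u * u) / (self.h * math.sqrt(2 * math.pi))
            self.vals[i] += (k - self.vals[i]) / self.n

    def density(self, x):
        # linear interpolation between the two bracketing grid points
        if x <= self.xs[0]:
            return self.vals[0]
        if x >= self.xs[-1]:
            return self.vals[-1]
        step = self.xs[1] - self.xs[0]
        i = int((x - self.xs[0]) / step)
        t = (x - self.xs[i]) / step
        return (1 - t) * self.vals[i] + t * self.vals[i + 1]

kde = GridKDE(-4, 4)
random.seed(1)
for _ in range(2000):
    kde.update(random.gauss(0, 1))
print(round(kde.density(0.0), 2))  # close to the N(0,1) peak height
```

In this sketch each update touches every grid point, so updates are linear in the grid size while queries are constant time; the paper's contribution is making both steps cheap and the bandwidth adaptive.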
On the efficiency and reliability of cluster mass estimates based on member galaxies
Biviano, A; Diaferio, A; Dolag, K; Girardi, M; Murante, G
2006-01-01
We study the efficiency and reliability of cluster mass estimators that are based on the projected phase-space distribution of galaxies in a cluster region. To this aim, we analyse a data-set of 62 clusters extracted from a concordance LCDM cosmological hydrodynamical simulation. Galaxies (or Dark Matter particles) are first selected in cylinders of given radius (from 0.5 to 1.5 Mpc/h) and ~200 Mpc/h length. Cluster members are then identified by applying a suitable interloper removal algorithm. Two cluster mass estimators are considered: the virial mass estimator (Mvir), and a mass estimator (Msigma) based entirely on the cluster velocity dispersion estimate. Mvir overestimates the true mass by ~10%, and Msigma underestimates the true mass by ~15%, on average, for sample sizes of > 60 cluster members. For smaller sample sizes, the bias of the virial mass estimator substantially increases, while the Msigma estimator becomes essentially unbiased. The dispersion of both mass estimates increases by a factor ~2 a...
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used kernel density estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in estimation accuracy on complex density structures in data streams, computing time, and memory usage. KDE-Track is also demonstrated to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Archana V; Aruna Rao K
2014-01-01
Coefficient of variation is a unitless measure of dispersion and is very frequently used in scientific investigations. This has motivated several researchers to propose estimators and tests concerning the coefficient of variation of normal distribution(s). While proposing a class of estimators for the coefficient of variation of a finite population, Tripathi et al. (2002) suggested that the estimator of the coefficient of variation of a finite population can also be used as an estimator of C...
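For reference, the plain sample coefficient of variation, together with the standard small-sample correction for a normal population, can be computed as follows. This is a generic sketch, not Tripathi et al.'s finite-population estimator (which the record truncates); the data values are illustrative:

```python
import math

def sample_cv(xs):
    """Plain sample coefficient of variation: s / x-bar."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return s / mean

def cv_bias_corrected(xs):
    """Approximate bias correction for a normal population:
    multiply the sample CV by (1 + 1/(4n))."""
    n = len(xs)
    return sample_cv(xs) * (1 + 1 / (4 * n))

data = [9.8, 10.1, 10.4, 9.7, 10.0, 10.2]
print(round(sample_cv(data), 4))
```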
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Directory of Open Access Journals (Sweden)
Cristina Mihaela VLAD
2013-12-01
Full Text Available At present there is concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now directed to the development of economic models integrated into software interfaces that can improve technical and financial management. The objective of this paper is therefore to present an estimation and evaluation model designed to increase farmers' ability to measure the costs of production activities by using informatic systems.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field ... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
Program Potential: Estimates of Federal Energy Cost Savings from Energy Efficient Procurement
Energy Technology Data Exchange (ETDEWEB)
Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-09-17
In 2011, energy used by federal buildings cost approximately $7 billion. Reducing federal energy use could help address several important national policy goals, including: (1) increased energy security; (2) lowered emissions of greenhouse gases and other air pollutants; (3) increased return on taxpayer dollars; and (4) increased private sector innovation in energy efficient technologies. This report estimates the impact of efficient product procurement on reducing the amount of wasted energy (and, therefore, wasted money) associated with federal buildings, as well as on reducing the needless greenhouse gas emissions associated with these buildings.
Energy Technology Data Exchange (ETDEWEB)
Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)
2001-03-01
This paper estimates the gasoline price elasticity of demand for automobile fuel efficiency in Korea to examine indirectly whether the government policy of raising fuel prices is effective in inducing lower fuel consumption, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One of the advantages of this technique is that data for a single year, without variation in the price of gasoline, are sufficient for implementing the study. Moreover, this technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of the results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.
Directory of Open Access Journals (Sweden)
Feklistova Inessa
2016-02-01
Full Text Available The article presents a methodical approach to estimating the strategic management efficiency of enterprises in the region using cluster analysis, implemented by means of a specially developed application package. The necessity of its application in the analytical work of the economic services of regional enterprises has been demonstrated. It will make it possible to improve the quality of monitoring and to scientifically substantiate strategic administrative decisions.
Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision
Directory of Open Access Journals (Sweden)
Xiang Wu
2013-01-01
Full Text Available In the pose estimation problem for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, the process is usually time consuming, especially in the outer-space environment with limited hardware performance. This paper proposes a computationally efficient iterative algorithm for vision-based pose estimation. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively for the rotation matrix based on absolute orientation information. Experimental results show that this approach achieves accuracy comparable to SVD-based methods, while the computational time is greatly reduced due to the use of the absolute orientation method.
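The absolute-orientation step this kind of algorithm relies on has a closed form. As a hedged illustration, here is the planar (2D) analogue, where the least-squares rotation between matched point sets reduces to a single atan2 (the full 3D case uses an SVD instead; the point data and angle are illustrative, not from the paper):

```python
import math

def fit_rotation_2d(src, dst):
    """Least-squares rotation angle aligning matched 2D point sets
    (both assumed centered, rotation-only model): the 2D analogue of
    the absolute orientation step, solvable in closed form."""
    sxx = sum(a[0] * b[0] + a[1] * b[1] for a, b in zip(src, dst))
    sxy = sum(a[0] * b[1] - a[1] * b[0] for a, b in zip(src, dst))
    return math.atan2(sxy, sxx)

def rotate(p, th):
    c, s = math.cos(th), math.sin(th)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, -0.5)]
theta_true = 0.7
obs = [rotate(p, theta_true) for p in pts]
theta_est = fit_rotation_2d(pts, obs)
print(round(theta_est, 3))  # recovers 0.7
```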
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies of the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions, and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best-approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Technical and Scale Efficiency in Spanish Urban Transport: Estimating with Data Envelopment Analysis
Directory of Open Access Journals (Sweden)
I. M. García Sánchez
2009-01-01
Full Text Available The paper undertakes a comparative efficiency analysis of public bus transport in Spain using Data Envelopment Analysis. A procedure for efficiency evaluation was established with a view to estimating technical and scale efficiency. Principal components analysis allowed us to reduce a large number of potential supply-side, demand-side and quality output measures to three statistical factors used in the analysis of the service. A statistical analysis (Tobit regression) shows that efficiency levels are negatively related to population density and the peak-to-base ratio. Nevertheless, efficiency levels are not related to the form of ownership (public versus private). The results obtained for Spanish public transport show that the average pure technical and scale efficiencies are 94.91% and 52.02%, respectively. The excess of resources is around 6%, and increasing the accessibility of the service, one of the principal components summarizing the large number of output measures, is extremely important as a quality parameter of its performance.
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model constrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
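At the core of the unscented Kalman filter used in such a system is the unscented transform, which propagates a mean and variance through a nonlinearity via deterministic sigma points. A minimal scalar sketch with standard Van der Merwe weights is below; the parameter values are illustrative and unrelated to the paper's FPGA design:

```python
import math

def unscented_transform(mean, var, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Scalar unscented transform: propagate (mean, var) through a
    nonlinear function f using 3 sigma points (state dimension n = 1)."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigma = [mean, mean + spread, mean - spread]
    wm0 = lam / (n + lam)                      # mean weight, center point
    wc0 = wm0 + (1 - alpha ** 2 + beta)        # covariance weight, center
    wi = 1 / (2 * (n + lam))                   # weight for the two wings
    ys = [f(x) for x in sigma]
    y_mean = wm0 * ys[0] + wi * (ys[1] + ys[2])
    y_var = (wc0 * (ys[0] - y_mean) ** 2
             + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
    return y_mean, y_var

# propagate N(0, 1) through sin(x); small alpha keeps sigma points local
m, v = unscented_transform(0.0, 1.0, math.sin)
print(round(m, 3), round(v, 3))
```

A full UKF alternates this transform (for the process and measurement models) with the usual Kalman gain update; the paper's contribution is doing that in fixed hardware for a conductance-based neuron model.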
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set.
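The distinction between weighted-sum (supported) solutions and the full Pareto efficient set is easy to see on a toy instance. The sketch below enumerates all permutations of a small proximity matrix under two illustrative criteria (simplified stand-ins, not the paper's exact objectives) and filters the non-dominated set by brute force:

```python
from itertools import permutations

def objectives(M, perm):
    """Two illustrative criteria for a permuted proximity matrix:
    (1) count of within-row gradient violations (an anti-Robinson
    style criterion), (2) distance-weighted sum of entries.
    Both are to be minimized."""
    n = len(M)
    P = [[M[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
    violations = sum(1 for i in range(n)
                     for j in range(i + 1, n)
                     for k in range(j + 1, n)
                     if P[i][j] > P[i][k])
    spread = sum(abs(i - j) * P[i][j] for i in range(n) for j in range(n))
    return violations, spread

def pareto_set(M):
    """Enumerate all permutations and keep the non-dominated ones."""
    pts = {p: objectives(M, p) for p in permutations(range(len(M)))}
    return [(p, f) for p, f in pts.items()
            if not any(g[0] <= f[0] and g[1] <= f[1] and g != f
                       for g in pts.values())]

M = [[0, 1, 3, 4],
     [1, 0, 2, 5],
     [3, 2, 0, 1],
     [4, 5, 1, 0]]
front = pareto_set(M)
print(len(front))
```

Full enumeration is only feasible for tiny n, which is exactly why the paper develops a heuristic; a weighted-sum scan over this same front would recover only its supported (convex-hull) members.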
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
and by reduced outputs, we estimate hyperbolic distance functions that account for reduced technical efficiency both in terms of increased inputs and reduced outputs. We estimate these hyperbolic distance functions as “efficiency effect frontiers” with the Translog functional form and a dynamic specification...
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as the permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and well defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and in the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in the efficiency estimation.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and those modelled with the WOFOST model is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
Estimation of light use efficiency for the prediction of forest productivity from remote sensing
Raddi, Sabrina; Magnani, Federico; Pippi, Ivan
2002-06-01
The gross primary productivity of natural ecosystems (GPP) can be expressed as the product of the amount of radiation absorbed by the green canopy (APAR) and the radiation use efficiency (ε). The fraction of incoming radiation that is intercepted by the canopy can be readily determined from remotely sensed data by means of spectral indexes such as NDVI (Normalized Difference Vegetation Index; Waring & Running 1998; Raddi, Magnani & Pippi 1998). Light use efficiency, however, is also known to be highly variable among species and as a result of environmental conditions. A proper determination of ε is therefore a key precondition for the realistic assessment of ecosystem productivity. Several different approaches have been proposed over the years to estimate light use efficiency. Because of the relationship between protein content and leaf photosynthetic potential, remote sensing of foliar nitrogen content has been applied to estimate maximum assimilation rates as an input for ecosystem models. Chlorophyll content, which can be more easily determined in the visible range, is also often used as a proxy for nitrogen concentration. This approach takes into account only the effects of soil fertility on ε. In contrast, the effects of microclimatic factors on ε can be estimated from complex forest ecosystem models, driven by records of local environmental conditions and species-specific parameters. In order to estimate regional productivity from RS data, models can be run for each pixel of interest, or they can be applied over a limited number of representative areas to obtain a robust empirical relationship between ε and key environmental variables. Finally, foliar photosynthesis can be directly estimated from leaf reflectance in the blue-green region, through indexes such as the Photochemical Reflectance Index (PRI; Gamon, Penuelas & Field 1992). The index has a clear functional basis, because of the well-known correlation between nonphotochemical quenching of absorbed
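The production-efficiency logic GPP = ε × APAR can be sketched directly. In the toy version below, fAPAR comes from a commonly used linear NDVI approximation and ε is a maximum efficiency down-regulated by environmental scalars; all coefficient values are illustrative, not values from this study:

```python
def gpp_light_use_efficiency(par, ndvi, eps_max=1.8,
                             t_scalar=0.9, w_scalar=0.8):
    """Sketch of the production-efficiency model GPP = eps * fAPAR * PAR.
    fAPAR from a simple linear NDVI relation, eps as a maximum light use
    efficiency (gC/MJ) scaled down by temperature and water scalars in
    [0, 1]. All coefficients here are illustrative."""
    fapar = max(0.0, min(1.0, 1.24 * ndvi - 0.168))  # common linear approx.
    eps = eps_max * t_scalar * w_scalar
    return eps * fapar * par  # gC m-2 d-1 when PAR is in MJ m-2 d-1

# a dense canopy (NDVI = 0.8) on a day with PAR = 10 MJ m-2
print(round(gpp_light_use_efficiency(par=10.0, ndvi=0.8), 2))
```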
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)
2009-02-23
Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as Barabasi-Albert trees grow.
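The efficiency measure in question is the standard global network efficiency, the average of inverse shortest-path distances over ordered node pairs. A direct BFS-based computation (numerical, not the paper's analytical expressions) is:

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency E = (1 / (N(N-1))) * sum over ordered pairs
    (i, j), i != j, of 1/d(i, j), with hop distances d found by BFS."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# star tree on 4 nodes: hub 0 linked to leaves 1, 2, 3
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(round(global_efficiency(star), 3))  # prints 0.75
```

On a star, 6 ordered pairs are at distance 1 and 6 at distance 2, giving (6 + 3)/12 = 0.75; the paper derives how this quantity behaves analytically as a preferential-attachment tree grows.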
Computationally efficient permutation-based confidence interval estimation for tail-area FDR
Directory of Open Access Journals (Sweden)
Joshua eMillstein
2013-09-01
Full Text Available Challenges in satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of the tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regard to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach relating sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24-hour REM sleep (FDR = 0.15, CI (0.08, 0.26)). Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in Wnt signaling, and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility to choose a significance threshold based on tolerance for false discoveries and the precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to define significance, such as the minimum estimated FDR, an option that is especially useful for weak effects, often observed in studies of complex diseases.
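A tail-area FDR estimate of this kind divides the average count of null-data positives across permutations by the observed positive count. The sketch below adds a simple normal-approximation CI on the permutation counts; this is a deliberate simplification of the paper's binomial-with-overdispersion model, and the counts are made up for illustration:

```python
import math

def fdr_with_ci(null_counts, observed_positives, z=1.96):
    """Tail-area FDR estimated from permutations: mean number of null
    test statistics exceeding the threshold, divided by the observed
    positive count. CI via a normal approximation on the permutation
    counts (simpler than the paper's binomial/overdispersion model)."""
    b = len(null_counts)
    mean_null = sum(null_counts) / b
    fdr = min(1.0, mean_null / observed_positives)
    var = sum((c - mean_null) ** 2 for c in null_counts) / max(1, b - 1)
    se = math.sqrt(var / b) / observed_positives
    lo = max(0.0, fdr - z * se)
    hi = min(1.0, fdr + z * se)
    return fdr, (lo, hi)

# counts of "positives" seen in 10 permutations of null data,
# against 40 positives called in the real data (illustrative numbers)
est, ci = fdr_with_ci([5, 7, 4, 6, 8, 5, 6, 7, 5, 6], 40)
print(round(est, 3))
```

With only 10 permutations the CI hinges on how the count variance is modeled, which is exactly where the paper's overdispersion parameter matters.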
Efficient PU Mode Decision and Motion Estimation for H.264/AVC to HEVC Transcoder
Directory of Open Access Journals (Sweden)
Zong-Yi Chen
2014-04-01
Full Text Available H.264/AVC has been widely applied to various applications. However, a new video compression standard, High Efficiency Video Coding (HEVC), was finalized in 2013. In this work, a fast transcoder from H.264/AVC to HEVC is proposed. The proposed algorithm includes a fast prediction unit (PU) decision and fast motion estimation. Given the strong relation between H.264/AVC and HEVC, the modes, residuals, and variance of the motion vectors (MVs) extracted from H.264/AVC can be reused to predict the current encoding PU of HEVC. Furthermore, the MVs from H.264/AVC are used to decide the search range of the PU during motion estimation. Simulation results show that the proposed algorithm can save up to 53% of the encoding time while maintaining the rate-distortion (R-D) performance of HEVC.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Efficient Transmit Beamspace Design for Search-Free Based DOA Estimation in MIMO Radar
Khabbazibasmenj, Arash; Hassanien, Aboulnasr; Vorobyov, Sergiy A.; Morency, Matthew W.
2014-03-01
In this paper, we address the problem of transmit beamspace design for multiple-input multiple-output (MIMO) radar with colocated antennas in application to direction-of-arrival (DOA) estimation. A new method for designing the transmit beamspace matrix that enables the use of search-free DOA estimation techniques at the receiver is introduced. The essence of the proposed method is to design the transmit beamspace matrix based on minimizing the difference between a desired transmit beampattern and the actual one under the constraint of uniform power distribution across the transmit array elements. The desired transmit beampattern can be of arbitrary shape and is allowed to consist of one or more spatial sectors. The number of transmit waveforms is even but otherwise arbitrary. To allow for simple search-free DOA estimation algorithms at the receive array, the rotational invariance property is established at the transmit array by imposing a specific structure on the beamspace matrix. Semi-definite relaxation is used to transform the proposed formulation into a convex problem that can be solved efficiently. We also propose a spatial-division based design (SDD) by dividing the spatial domain into several subsectors and assigning a subset of the transmit beams to each subsector. The transmit beams associated with each subsector are designed separately. Simulation results demonstrate the improvement in the DOA estimation performance offered by using the proposed joint and SDD transmit beamspace design methods as compared to the traditional MIMO radar technique.
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Directory of Open Access Journals (Sweden)
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and confident estimates of the forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. The precision of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, was assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5–6.0 times as many observations for plot sizes of 700–1900 m2 to achieve the same precision as an inventory supported by ALS data.
Boschetti, Mirco; Mauri, Emanuela; Gadda, Chiara; Busetto, Lorenzo; Confalonieri, Roberto; Bocchi, Stefano; Brivio, Pietro A.
2004-10-01
Rice is one of the most important crops in the world, providing staple food for more than 3000 million people. For this reason FAO declared 2004 The International Year of Rice, promoting initiatives and research on this valuable crop. Assessing the Net Primary Production (NPP) is fundamental to supporting sustainable development and to providing the crop yield forecasts essential to food security policy. Crop growth models can be useful tools for estimating growth, development, and yield, but they require complex, spatially distributed input parameters to produce valuable maps. Light use efficiency (LUE) models, which use satellite-borne data to derive daily surface parameters, represent an alternative approach able to monitor differences in vegetation conditions while providing spatially distributed NPP maps. An experiment aimed at testing the capability of a LUE model using daily MODIS data to estimate rice crop production was conducted in a rice area of Northern Italy. Direct LAI measurements and indirect LAI-2000 estimates were collected in different fields during the growing season to define a relationship with MODIS data. A hyperspectral MIVIS image was acquired in early July over the experimental site to provide high-spatial-resolution information on land cover distribution. LUE-NPP estimates for several fields were compared with CropSyst model outputs and field biomass measurements. A comparison of the performance of the different methods is presented, and the relative advantages and drawbacks in spatialization are discussed.
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina;
2012-01-01
-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method...... by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...... and inside the scanner. (c) 2012 Wiley Periodicals, Inc. Concepts Magn Reson Part B (Magn Reson Engineering) 41B: 139–143, 2012...
Estimation of Power/Energy Losses in Electric Distribution Systems based on an Efficient Method
Directory of Open Access Journals (Sweden)
Gheorghe Grigoras
2013-09-01
Full Text Available Estimation of power/energy losses constitutes an important tool for efficient planning and operation of electric distribution systems, especially in a free energy market environment. For the further development of loss reduction plans, and for determining the implementation priorities of different measures and investment projects, an analysis of the nature and causes of losses in the system and in its different parts is needed. In this paper, an efficient method for the power flow problem of medium-voltage distribution networks under conditions of scarce information about the nodal loads is presented. Using this method, the power/energy losses in power transformers and lines can be obtained. The test results, obtained for a real 20 kV distribution network from Romania, confirm the validity of the proposed method.
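Once branch currents have been reconstructed from the power flow solution, the technical losses follow from a branch-by-branch I²R summation. A minimal sketch with hypothetical feeder data (the resistances and currents below are invented for illustration; the paper's harder step, estimating nodal loads from scarce data, is skipped here):

```python
# Series losses on a radial medium-voltage feeder, branch by branch.
# Each entry is (series resistance in ohms, phase current in amperes);
# the values are hypothetical, for illustration only.
branches = [
    (0.35, 120.0),
    (0.52, 80.0),
    (0.48, 45.0),
]

# Balanced three-phase I^2 R losses summed over all branches.
p_loss_w = sum(3 * i**2 * r for r, i in branches)
print(round(p_loss_w / 1000, 2), "kW")   # → 28.02 kW
```

Energy losses over a period follow by integrating this power loss against the load curve (e.g. via loss factors), which is where the paper's estimated nodal loads enter.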
Using the SEBAL Model to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop
Zeyliger, Anatoly; Ermolaeva, Olga
2013-04-01
The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. One way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). In the case of high irrigation water use efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change in soil moisture storage from the beginning of the growing season to its end, the volume of water may be roughly equal to ET. In the other case of low irrigation water use efficiency, the volume of water applied exceeds PET, and the excess of the volume applied over PET must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season soil moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa with the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing the ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded in Fieldlook web
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization, and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
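To fix ideas, the core computation is a kernel density estimate evaluated over a 3D grid of query points. A minimal fixed-bandwidth Gaussian sketch is below; the paper's movement-based estimator additionally conditions on the GPS track (and is far more expensive), which is omitted here:

```python
import numpy as np

def kde3d(points, queries, bandwidth=1.0):
    """Plain 3D Gaussian kernel density estimate.

    A minimal sketch: fixed bandwidth, no movement model.
    """
    points = np.asarray(points, float)    # (n, 3) GPS fixes
    queries = np.asarray(queries, float)  # (m, 3) evaluation points
    n = len(points)
    norm = (2 * np.pi * bandwidth**2) ** 1.5
    # Pairwise squared distances between query and data points.
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / bandwidth**2).sum(axis=1) / (n * norm)

rng = np.random.default_rng(0)
fixes = rng.normal(0.0, 1.0, size=(500, 3))   # synthetic GPS fixes
dens = kde3d(fixes, np.array([[0, 0, 0], [5, 5, 5]]))
print(dens[0] > dens[1])   # → True: denser at the cluster centre
```

The O(n·m) pairwise evaluation above is exactly the cost that motivates the parallelization described in the abstract.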
Energy efficiency estimation of a steam powered LNG tanker using normal operating data
Directory of Open Access Journals (Sweden)
Sinha Rajendra Prasad
2016-01-01
Full Text Available A ship's energy efficiency performance is generally estimated by conducting special sea trials of a few hours under closely controlled environmental conditions of calm sea, standard draft, and optimum trim. This indicator is then used as the benchmark for future reference of the ship's Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, so comparing energy performance with the benchmark indicator is not truly valid. In such situations a higher fuel consumption reading from the ship's fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low-load operations, or incorrect trim [1]. Thus a better and more accurate approach to determining the energy efficiency of the ship attributable only to the main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship's fuel consumption [2]. In this paper the author identifies parameters of a suitable filter to be applied to the daily report data of a typical LNG tanker of 33000 kW shaft power to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship, which is compared with the sea-trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is
An Efficient Estimation of Distribution Algorithm for Job Shop Scheduling Problem
He, Xiao-Juan; Zeng, Jian-Chao; Xue, Song-Dong; Wang, Li-Fang
An estimation of distribution algorithm for the job shop scheduling problem was proposed, with a probability model based on permutation information of neighboring operations. The probability model was built using frequency information of neighboring pair-wise operations. The structure of the optimal individual was then marked, and the operations of the optimal individual were partitioned into independent sub-blocks. To avoid repeated search in the same area and to improve search speed, each sub-block was adjusted as a whole. Stochastic adjustment of the operations within each sub-block was also introduced to enhance the local search ability. The experimental results show that the proposed algorithm is more robust and efficient.
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
on the system memory. Consequently, high-dimensional problems can be handled, and nonlinearities in the model neither bring any difficulty in applying it nor lead to considerable reduction of its efficiency. These characteristics suggest that the method is a powerful candidate for complicated problems. First......, the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and is exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
Directory of Open Access Journals (Sweden)
José A. Adell
2009-01-01
Full Text Available We give efficient algorithms, as well as sharp estimates, to compute the Kolmogorov distance between the binomial and Poisson laws with the same mean λ. Such a distance is eventually attained at the integer part of λ+1/2−√(λ+1/4). The exact Kolmogorov distance for λ≤2−√2 is also provided. The preceding results are obtained as a concrete application of a general method involving a differential calculus for linear operators represented by stochastic processes.
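For moderate parameters the distance can be checked by direct enumeration of both distribution functions. A stdlib-only sketch, which also prints the attainment point λ+1/2−√(λ+1/4) quoted in the abstract for comparison (the formula holds asymptotically, so exact agreement at small λ is not guaranteed):

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def pois_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def kolmogorov_distance(n, p):
    """max_k |F_Bin(n,p)(k) - F_Poi(np)(k)| by accumulating both CDFs."""
    lam = n * p
    fb = fp = 0.0
    best_d, best_k = 0.0, 0
    for k in range(n + 1):
        fb += binom_pmf(n, p, k)
        fp += pois_pmf(lam, k)
        if abs(fb - fp) > best_d:
            best_d, best_k = abs(fb - fp), k
    return best_d, best_k

n, p = 100, 0.05                      # lambda = 5
d, k = kolmogorov_distance(n, p)
lam = n * p
k_pred = math.floor(lam + 0.5 - math.sqrt(lam + 0.25))
print(k, k_pred, round(d, 5))         # argmax, predicted point, distance
```

By Le Cam's inequality the total variation distance (and hence the Kolmogorov distance) is bounded by np², i.e. 0.25 here, which the enumeration respects.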
A geostatistical approach to estimate mining efficiency indicators with flexible meshes
Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2014-05-01
Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations, by considering the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable ore grades to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells), and hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, the brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest. Lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from assembling traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, strategic efficiency indicator maps have been developed under the specific criteria imposed by exploitation standards, to incorporate new extraction wells in new areas that would allow production to be maintained or improved. Results show that the uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in the formation thickness, which calls for flexible meshes and non-stationary stochastic processes.
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Directory of Open Access Journals (Sweden)
Robert Aidoo
2012-06-01
Full Text Available The main objective of the paper was to examine the costs, returns, and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers, and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi, and Nkwanta) through a multi-stage sampling approach. In addition to descriptive statistics, gross margin, net margin, and marketing efficiency analyses were performed on the field data. Yams moved from the producer to the final consumer through a long chain of more than three channels. Yam marketing was found to be a profitable venture for all the key players in the chain. A net marketing margin of about GH¢15.52 (US$9.13) was obtained when the farmer himself sold 100 tubers of yams in the market rather than at the farm gate. The net marketing margin obtained by wholesalers was estimated at GH¢27.39 per 100 tubers of yam sold, equivalent to about 61% of the gross margin obtained. The net marketing margin for retailers was estimated at GH¢15.37, representing 61% of the gross margin. A net marketing margin of GH¢33.91 was obtained for every 100 tubers of yam transported across Ghana's borders by cross-border traders. Generally, the study found that the net marketing margin was highest for cross-border yam traders, followed by wholesalers. Yam marketing activities among retailers, wholesalers, and cross-border traders were found to be highly efficient, with efficiency ratios in excess of 100%. However, yam marketing among producer-sellers was found to be inefficient, with an efficiency ratio of about 86%. The study recommended policies and strategies to be adopted by central and local government authorities to address key constraints such as poor road networks, limited financial resources, poor storage facilities, and the high cost of transportation that serve as
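The margin arithmetic behind such analyses can be sketched in a few lines. The figures below are hypothetical (not the paper's data), and the efficiency ratio shown, net return per unit of marketing cost, is one common definition; the paper's exact formula may differ:

```python
def marketing_margins(selling_price, purchase_price, marketing_cost):
    """Gross margin, net margin, and a simple efficiency ratio (%).

    Efficiency here is net return per unit of marketing cost, times 100;
    definitions vary between marketing studies.
    """
    gross = selling_price - purchase_price
    net = gross - marketing_cost
    efficiency = 100.0 * net / marketing_cost
    return gross, net, efficiency

# Hypothetical wholesaler figures per 100 tubers (GH¢), for illustration only.
gross, net, eff = marketing_margins(selling_price=120.0,
                                    purchase_price=75.0,
                                    marketing_cost=17.6)
print(round(gross, 2), round(net, 2), round(eff, 1))   # → 45.0 27.4 155.7
```

An efficiency ratio above 100% means the net margin more than covers the marketing cost, matching the paper's reading of ratios "in excess of 100%" as highly efficient.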
Directory of Open Access Journals (Sweden)
Angela Shirley
2014-01-01
Full Text Available To achieve a more efficient use of auxiliary information, we propose single-parameter ratio/product-cum-mean-per-unit estimators for a finite population mean in simple random sampling without replacement when the magnitude of the correlation coefficient is not very high (less than or equal to 0.7). The first-order large-sample approximations to the bias and mean square error of the proposed estimators are obtained. We use simulation to compare our estimators with the well-known sample mean, ratio, and product estimators, as well as the classical linear regression estimator, for efficient use of auxiliary information. The results conform to the motivating aim behind our proposition.
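As a quick illustration of why auxiliary information pays off, the simulation below compares the classical ratio estimator against the plain sample mean under SRSWOR on a synthetic population (this is the baseline comparison, not the paper's proposed single-parameter estimators):

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population where y is positively correlated with auxiliary x.
N, n = 2000, 100
x = rng.uniform(10, 50, N)
y = 2.0 * x + rng.normal(0, 15, N)      # strong positive correlation
X_bar, Y_bar = x.mean(), y.mean()       # X_bar assumed known, Y_bar is the target

def one_draw():
    idx = rng.choice(N, n, replace=False)       # SRSWOR
    xs, ys = x[idx], y[idx]
    mean_est = ys.mean()                        # mean-per-unit estimator
    ratio_est = ys.mean() / xs.mean() * X_bar   # classical ratio estimator
    return mean_est, ratio_est

draws = np.array([one_draw() for _ in range(2000)])
mse = ((draws - Y_bar) ** 2).mean(axis=0)
print(mse[1] < mse[0])   # → True: ratio estimator wins under positive correlation
```

With weaker correlation (the ≤ 0.7 regime targeted by the abstract) the advantage of the plain ratio estimator shrinks, which is precisely the gap the proposed estimators address.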
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-07
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Energy Technology Data Exchange (ETDEWEB)
Brabec, Jiri; Lin, Lin; Shao, Meiyue; Govind, Niranjan; Yang, Chao; Saad, Yousef; Ng, Esmond
2015-10-06
We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear-response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix; they are designed to approximate the absorption spectrum as a function directly, taking advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix, and that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
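The matrix-free principle, extracting spectral information from matrix-vector products alone, can be illustrated with a plain Lanczos iteration (this is a generic sketch of the idea, not the paper's specific algorithms, and the diagonal "response matrix" below is a stand-in):

```python
import numpy as np

def lanczos_extremes(matvec, dim, m, rng):
    """Approximate extreme eigenvalues from matrix-vector products only."""
    Q = np.zeros((dim, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = rng.normal(size=dim)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = matvec(Q[:, j])
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)          # Ritz values of the small matrix T

# Matrix known only through its action on a vector, as in the matrix-free setting.
evals = np.concatenate([np.linspace(0.0, 1.0, 199), [1.5]])
matvec = lambda v: evals * v              # hypothetical stand-in operator
ritz = lanczos_extremes(matvec, 200, 30, np.random.default_rng(2))
print(round(float(ritz[-1]), 6))          # → 1.5, found without forming the matrix
```

The Ritz values of the 30×30 tridiagonal matrix already pin down the well-separated top eigenvalue; broadening the Ritz spectrum with Lorentzians is one simple way to turn such data into an approximate absorption profile.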
Directory of Open Access Journals (Sweden)
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Providing the necessary efficiency indexes of a radioelectronic complex system at the stage of behavior algorithm design is a topical task. Several methods are used for solving it, and their intercomparison is required. Main part. For the behavior algorithm of a radioelectronic complex system, four mathematical models were built by two known methods (the space-of-states method and the algorithmic algebras method) and the new scheme-of-paths method. A scheme of paths is a compact representation of the radioelectronic complex system's behavior and is formed easily and directly from the flowchart of the behavior algorithm. Efficiency indexes of the tested behavior algorithm, namely the probability and mean time of successful performance, were obtained, and the estimated results were compared. Conclusion. The model of the behavior algorithm constructed with the scheme-of-paths method gives efficiency-index values commensurate with those of the mathematical models of the same behavior algorithm obtained by the space-of-states and algorithmic algebras methods.
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredictable ways. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curvature regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection, and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead, and provides insightful analysis of the pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
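The grid-plus-interpolation idea can be sketched in a few lines: maintain kernel sums at fixed resampling points as observations stream in, and interpolate for arbitrary queries. This is a minimal sketch only; KDE-Track additionally adapts the grid to PDF curvature and ages out old data, both omitted here:

```python
import numpy as np

class OnlineGridKDE:
    """KDE values maintained at fixed resampling points; interpolate elsewhere.

    Minimal sketch of the grid-plus-interpolation idea (fixed grid,
    no forgetting factor, fixed bandwidth).
    """
    def __init__(self, grid, bandwidth=0.3):
        self.grid = np.asarray(grid, float)
        self.h = bandwidth
        self.sums = np.zeros_like(self.grid)
        self.n = 0

    def update(self, x):
        """Fold one streaming observation into the grid sums: O(grid size)."""
        z = (self.grid - x) / self.h
        self.sums += np.exp(-0.5 * z * z) / (self.h * np.sqrt(2 * np.pi))
        self.n += 1

    def pdf(self, x):
        """Linear interpolation between resampling points."""
        return np.interp(x, self.grid, self.sums / self.n)

rng = np.random.default_rng(3)
est = OnlineGridKDE(np.linspace(-4, 4, 81))
for x in rng.normal(0, 1, 5000):
    est.update(x)
print(float(est.pdf(0.0)))   # near the bandwidth-smoothed N(0,1) peak (~0.38)
```

Per-item cost is linear in the number of resampling points and independent of the stream length, which is what makes the estimate available at any time.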
2011-01-01
Verification results for the simulation of the combustion process, and an estimate of the calculated combustion efficiency given by the simulation, are presented. The mathematical model and its assumptions are described, and the calculation method is shown. Simulation results are presented, and their comparative analysis against experimental results was performed. The accuracy of mathematical modeling of the combustion process, in terms of combustion efficiency, was estimated for models with one- and two-stage combustion reactions. The infere...
Selva, J
2011-01-01
This paper presents an efficient method to compute the maximum likelihood (ML) estimate of the parameters of a complex 2-D sinusoid, with the complexity order of the FFT. The method is based on an accurate barycentric formula for interpolating band-limited signals, and on the fact that the ML cost function can be viewed as a signal of this type if the time and frequency variables are switched. The method consists in first computing the DFT of the data samples, and then locating the maximum of the cost function by means of Newton's algorithm. The complexity of the latter step is small and independent of the data size, since it makes use of the barycentric formula to obtain the values of the cost function and its derivatives. Thus, the total complexity order is that of the FFT. The method is validated in a numerical example.
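The coarse stage of such methods is easy to reproduce: take the 2-D FFT and locate the peak bin, which then seeds the fine search. The sketch below stops at the coarse stage (the paper's Newton refinement on the barycentric-interpolated cost is omitted), with an on-grid frequency chosen so the peak bin is exact:

```python
import numpy as np

# Synthetic noisy complex 2-D sinusoid with on-grid frequencies (cycles/sample).
N1, N2 = 64, 64
f1, f2 = 0.203125, -0.109375          # 13/64 and -7/64
n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
rng = np.random.default_rng(4)
x = np.exp(2j * np.pi * (f1 * n1 + f2 * n2))
x += 0.1 * (rng.normal(size=(N1, N2)) + 1j * rng.normal(size=(N1, N2)))

# Coarse ML stage: the periodogram maximizer is the strongest 2-D DFT bin.
X = np.fft.fft2(x)
k1, k2 = np.unravel_index(np.argmax(np.abs(X)), X.shape)
est1 = k1 / N1 if k1 < N1 // 2 else k1 / N1 - 1   # map bin to [-1/2, 1/2)
est2 = k2 / N2 if k2 < N2 // 2 else k2 / N2 - 1
print(est1, est2)   # → 0.203125 -0.109375
```

For off-grid frequencies, the peak bin is only accurate to half a bin width; that residual is exactly what the Newton step on the interpolated cost function removes at O(1) extra cost.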
Directory of Open Access Journals (Sweden)
Makram Krit
2012-06-01
Full Text Available Purpose: Estimate the maintenance efficiency in the Brown-Proschan model with the bathtub failure intensity. Design/methodology/approach: Empirical research through which we propose a framework to establish the characteristics of the failure process and its influence on the maintenance process. Findings: The main contribution of the present study is the reformulation of the Brown and Proschan model using the bathtub failure intensity. Practical implications: Our model is a reformulation of the Brown-Proschan (BP) model using the bathtub failure intensity; this form of intensity is represented as a superposition of two non-homogeneous Poisson processes and a homogeneous one. Originality/value: This is follow-on research on a study that employed the power-law-process type of failure intensity.
Improved barometric and loading efficiency estimates using packers in monitoring wells
Cook, Scott B.; Timms, Wendy A.; Kelly, Bryce F. J.; Barbour, S. Lee
2017-02-01
Measurement of barometric efficiency (BE) from open monitoring wells or loading efficiency (LE) from formation pore pressures provides valuable information about the hydraulic properties and confinement of a formation. Drained compressibility (α) can be calculated from LE (or BE) in confined and semi-confined formations and used to calculate specific storage (Ss). Ss and α are important for predicting the effects of groundwater extraction and therefore for sustainable extraction management. However, in low hydraulic conductivity (K) formations or large diameter monitoring wells, time lags caused by well storage may be so long that BE cannot be properly assessed in open monitoring wells in confined or unconfined settings. This study demonstrates the use of packers to reduce monitoring-well time lags and enable reliable assessments of LE. In one example from a confined, high-K formation, estimates of BE in the open monitoring well were in good agreement with shut-in LE estimates. In a second example, from a low-K confining clay layer, BE could not be adequately assessed in the open monitoring well due to time lag. Sealing the monitoring well with a packer reduced the time lag sufficiently that a reliable assessment of LE could be made from a 24-day monitoring period. The shut-in response confirmed confined conditions at the well screen and provided confidence in the assessment of hydraulic parameters. A short (time-lag-dependent) period of high-frequency shut-in monitoring can therefore enhance understanding of hydrogeological systems and potentially provide hydraulic parameters to improve conceptual/numerical groundwater models.
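One standard way to estimate BE, once time-lag effects are removed, is to regress well water level on barometric pressure (both expressed as heads of water) and take minus the slope. A synthetic sketch follows; all numbers are invented, and sign conventions for BE vary between texts:

```python
import numpy as np

# Synthetic hourly heads (m of water): the well level responds to a
# fraction BE_true of the barometric fluctuation, plus measurement noise.
rng = np.random.default_rng(6)
BE_true = 0.65                                         # dimensionless
baro = 10.0 + 0.3 * np.sin(np.linspace(0.0, 40.0, 500))     # barometric head
level = 25.0 - BE_true * (baro - baro.mean()) \
        + rng.normal(0.0, 0.005, 500)                       # well water level

# BE estimated as minus the slope of level regressed on barometric head.
slope = np.polyfit(baro, level, 1)[0]
BE_est = -slope
print(round(BE_est, 2))   # → 0.65
```

In real records, Earth-tide components and residual time lags must be filtered out first; the packer shut-in approach in the abstract is what makes the level response clean enough for this kind of regression in low-K settings.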
Indian Academy of Sciences (India)
Nicolle V Sydney; Emygdio La Monteiro-Filho
2011-03-01
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving manual wear of the teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for reading growth layers. Growth layers were evidenced in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth there was a significant decrease in growth layer group thickness, hindering the counting of layers. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear-and-decalcification technique, and analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the use of the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of the calf, young, and adult classes, which is important for general population studies.
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data is used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
Directory of Open Access Journals (Sweden)
David Simoncini
Full Text Available Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low-energy all-atom models. This iteratively refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enabled EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. [Formula: see text] uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the [Formula: see text] AbInitioRelax protocol.
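The iterative re-estimation of the fragment distribution can be sketched as a generic estimation-of-distribution update. The fragment-index representation of a model, the elite fraction, and the Laplace smoothing below are illustrative assumptions, not EdaFold(AA)'s actual data structures.

```python
from collections import Counter

def refine_fragment_distribution(models, energies, n_fragments,
                                 keep_frac=0.3, smooth=1.0):
    """Re-estimate a per-position fragment distribution from low-energy models.

    `models` is a list of fragment-index assignments (one index per position);
    this simplified representation is an assumption of the sketch.
    """
    n_keep = max(1, int(len(models) * keep_frac))
    elite = [m for _, m in sorted(zip(energies, models))[:n_keep]]
    n_pos = len(elite[0])
    dist = []
    for pos in range(n_pos):
        counts = Counter(m[pos] for m in elite)
        # Laplace smoothing keeps every fragment selectable in later rounds
        probs = [(counts[f] + smooth) / (n_keep + smooth * n_fragments)
                 for f in range(n_fragments)]
        dist.append(probs)
    return dist
```

Sampling fragments from the returned distribution when building the next round of models is what biases the search towards the low-energy regions already found.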
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
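The global smoothing-and-thresholding update described above can be sketched for the piecewise-constant (Chan-Vese-style) case. The paper gives Matlab code; the Python sketch below is an independent illustration in which a Gaussian blur stands in for the actual smoothing operator (an assumption), followed by a per-pixel argmin and re-estimation of the region parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multilabel_update(image, means, sigma=2.0):
    """One global update for a piecewise-constant multi-label model:
    smooth each label's data-fidelity map, then threshold by taking the
    per-pixel argmin. The Gaussian width `sigma` plays the role of the
    regularizer (an assumption of this sketch)."""
    fidelity = np.stack([(image - m) ** 2 for m in means])      # one map per label
    smoothed = np.stack([gaussian_filter(f, sigma) for f in fidelity])
    labels = np.argmin(smoothed, axis=0)                         # thresholding step
    # re-estimate region parameters (the joint estimation part of Region Competition)
    new_means = [image[labels == k].mean() if np.any(labels == k) else means[k]
                 for k in range(len(means))]
    return labels, new_means
```

Iterating the update alternates between a fast global labeling and a closed-form parameter refit, which is the structure of the joint problem described above.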
Analytical estimates of efficiency of attractor neural networks with inborn connections
Directory of Open Access Journals (Sweden)
Solovyeva Ksenia
2016-01-01
Full Text Available The analysis is restricted to features endowed to neural networks by their inborn (not learned) connections. We study attractor neural networks in which, for almost all operation time, the activity resides in close vicinity of a relatively small number of attractor states. The number of the latter, M, is proportional to the number of neurons in the neural network, N, while the total number of states is 2^N. The unified procedure of growth/fabrication of neural networks with sets of all attractor states of dimensionality d=0 and d=1, based on model molecular markers, is studied in detail. The specificity of the networks (d=0 or d=1) depends on the topology (i.e., the set of distances between elements) which can be provided to the set of molecular markers by their physical nature. Parameter estimates for such attractor neural networks, and the trade-offs between them, are calculated analytically. The proposed mechanisms reveal simple and efficient ways of implementing, in artificial as well as natural neural networks, multiplexity, i.e., the use of the activity of single neurons to represent multiple values of the variables operated on by the neural systems. It is discussed how neuronal multiplexity provides efficient and reliable ways of performing functional operations in neural systems.
Kato, M
1999-01-01
We have calculated the mass accumulation efficiency during helium shell flashes to examine whether or not a carbon-oxygen white dwarf (C+O WD) grows up to the Chandrasekhar mass limit to ignite a Type Ia supernova explosion. It has been frequently argued that luminous super-soft X-ray sources and symbiotic stars are progenitors of SNe Ia. In such systems, a C+O WD accretes hydrogen-rich matter from a companion and burns hydrogen steadily on its surface. The WD develops a helium layer underneath the hydrogen-rich envelope and undergoes periodic helium shell flashes. Using OPAL opacity, we have reanalyzed a full cycle of helium shell flashes on a 1.3 M_⊙ C+O WD and confirmed that the helium envelope of the WD expands to blow a strong wind. A part of the accumulated matter is lost by the wind. The mass accumulation efficiency is estimated as η_He = -0.175 (log Ṁ …), where the accretion rate Ṁ is in units of M_⊙ yr^{-1}. In relatively high mass accretion rates as expected in recent SN Ia progenitor models, the...
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
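The bootstrap of list-mode data can be illustrated without any GPU machinery: resample the event stream with replacement and histogram each realisation. This CPU-only NumPy sketch (an assumption; the software described above runs on the GPU) shows how variance estimates of binned data arise from multiple realisations.

```python
import numpy as np

def bootstrap_variance(event_bins, n_bins, n_realisations=100, seed=1):
    """Mean and variance of a binned statistic from bootstrap realisations
    of list-mode events.

    `event_bins` gives the bin index of each detected event; resampling events
    with replacement mimics the count statistics of a repeated acquisition.
    """
    rng = np.random.default_rng(seed)
    n_events = len(event_bins)
    hists = np.empty((n_realisations, n_bins))
    for r in range(n_realisations):
        sample = rng.choice(event_bins, size=n_events, replace=True)
        hists[r] = np.bincount(sample, minlength=n_bins)
    return hists.mean(axis=0), hists.var(axis=0)
```

Any image statistic computed per realisation (e.g. a regional SUVR after reconstruction) can be summarised the same way, which is the point of generating the realisations quickly.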
Directory of Open Access Journals (Sweden)
Hossein Jafari Mansoorian
2017-01-01
Full Text Available Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in a FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% were used for model testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as a goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, a FFANN model with a three-layer structure of 4-3-1 and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were obtained to be 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with respect to shaking speed, contact time, surfactant concentration and pH can improve the TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN to the assessment of soil washing processes and the control of petroleum hydrocarbon emission into the environment.
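A minimal sketch of the 4-3-1 feed-forward network and the RMSE goodness-of-fit measure used above. The tanh hidden activation is an assumption of this sketch; the abstract does not state the transfer function.

```python
import numpy as np

def ffann_forward(X, W1, b1, W2, b2):
    """Forward pass of a 4-3-1 feed-forward network (tanh hidden layer assumed).

    X: (n_samples, 4) inputs (e.g. pH, shaking speed, surfactant conc., time).
    """
    h = np.tanh(X @ W1 + b1)    # hidden layer, shape (n_samples, 3)
    return h @ W2 + b2          # linear output, shape (n_samples, 1)

def rmse(y_true, y_pred):
    """Root mean square error, the goodness-of-fit measure used in the study."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

Training the weights (by backpropagation with the stated learning rate of 0.01) is omitted; the sketch only fixes the architecture and the evaluation metric.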
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus data set and a Rhizoctonia root disease data set. For the former, which is known to contain outlying observations, the robit model is shown to do better for predicting the spatial distribution of an invasive species. For the latter, our approach does as well as the classical models for predicting the severity of a root disease, as the probit link is shown to be appropriate. Though this article is written for binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs, such as Poisson and gamma SGLMMs, are provided.
Directory of Open Access Journals (Sweden)
Kazuki Maruta
2016-07-01
Full Text Available Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of the rigorous first eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door for practical millimeter wave communication with enhanced system capacity.
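First-eigenmode transmission can be sketched with an SVD of the channel matrix. The `los_approx_weights` heuristic below (phase conjugation of the strongest row) is an illustrative stand-in for the paper's LoS-extraction scheme, not its actual algorithm.

```python
import numpy as np

def first_eigenmode_weights(H):
    """Receive/transmit weights for first-eigenmode transmission over channel H
    (receive antennas x transmit antennas): the leading singular vectors."""
    U, s, Vh = np.linalg.svd(H)
    return U[:, 0], Vh[0, :].conj(), s[0]   # w_rx, w_tx, eigenmode gain

def los_approx_weights(H):
    """Simplified beamforming (an assumption of this sketch): use the conjugate
    of the strongest row of H as a proxy for the LoS steering vector."""
    i = np.argmax(np.linalg.norm(H, axis=1))
    v = H[i, :].conj()
    return v / np.linalg.norm(v)
```

For an LoS-dominated (near rank-1) channel the two weight choices nearly coincide, which is why the approximate scheme can approach the rigorous eigenmode throughput.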
Maruta, Kazuki; Iwakuni, Tatsuhiko; Ohta, Atsushi; Arai, Takuto; Shirato, Yushi; Kurosaki, Satoshi; Iizuka, Masataka
2016-07-08
A Family of Computationally Efficient and Simple Estimators for Unnormalized Statistical Models
Pihlaja, Miika; Hyvarinen, Aapo
2012-01-01
We introduce a new family of estimators for unnormalized statistical models. Our family of estimators is parameterized by two nonlinear functions and uses a single sample from an auxiliary distribution, generalizing Maximum Likelihood Monte Carlo estimation of Geyer and Thompson (1992). The family is such that we can estimate the partition function like any other parameter in the model. The estimation is done by optimizing an algebraically simple, well defined objective function, which allows for the use of dedicated optimization methods. We establish consistency of the estimator family and give an expression for the asymptotic covariance matrix, which enables us to further analyze the influence of the nonlinearities and the auxiliary density on estimation performance. Some estimators in our family are particularly stable for a wide range of auxiliary densities. Interestingly, a specific choice of the nonlinearity establishes a connection between density estimation and classification by nonlinear logistic reg...
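One member of such an estimator family can be sketched as logistic classification of data against a single auxiliary sample, with the log partition function treated as an ordinary parameter. The standard-normal auxiliary density and the particular logistic objective below are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nce_fit(data, noise, log_phi, theta0):
    """Fit an unnormalized model log_phi(x, theta) by logistic classification
    against samples from a known auxiliary (noise) density. The log partition
    function c is estimated like any other parameter, as described above."""
    def negloss(params):
        theta, c = params[:-1], params[-1]
        # G(x) = log model density - log auxiliary density,
        # with model log-density log_phi(x, theta) - c
        g_data = log_phi(data, theta) - c - norm.logpdf(data)
        g_noise = log_phi(noise, theta) - c - norm.logpdf(noise)
        # logistic log-likelihood for "is this a data sample?"
        return -(np.mean(-np.logaddexp(0.0, -g_data))
                 + np.mean(-np.logaddexp(0.0, g_noise)))
    return minimize(negloss, theta0, method="Nelder-Mead").x
```

With a Gaussian model shifted by theta, the fit should recover both the shift and the Gaussian log normalizer log(sqrt(2*pi)) without ever normalizing the model explicitly.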
Ma, Shao-Qiang; Zhu, Han-Jie; Zhang, Guo-Feng
2017-04-01
The effects of different quantum feedback types on the estimation precision of the detection efficiency are studied. By comparing these feedbacks, it is found that the precision can be enhanced more effectively by a certain feedback type, and that the precision has a positive relation with the detection efficiency for the optimal feedback when the system reaches the state of dynamic balance. In addition, the larger the proportion of |1> is, the higher the precision is, and no information about the parameter to be estimated will be obtained if |0> is chosen as the initial state for the feedback type λσz.
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
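The idea can be sketched with a standard least-squares engine by defining residuals whose sum of squares equals the Poisson deviance, so that Levenberg-Marquardt minimization performs the MLE fit. (The paper modifies the L-M update directly; routing the deviance through `scipy.optimize.least_squares` is a simplification of this sketch.)

```python
import numpy as np
from scipy.optimize import least_squares

def poisson_mle_fit(model, p0, counts, xdata):
    """Fit counted (Poisson) data by minimizing the Poisson deviance
    D = 2 * sum(m - d + d*ln(d/m)) with an L-M least-squares engine:
    residuals r_i are chosen so that sum(r_i**2) == D."""
    def residuals(p):
        m = np.clip(model(xdata, *p), 1e-12, None)   # keep model counts positive
        with np.errstate(divide="ignore", invalid="ignore"):
            term = np.where(counts > 0, counts * np.log(counts / m), 0.0)
        dev = 2.0 * (m - counts + term)
        return np.sqrt(np.maximum(dev, 0.0))
    return least_squares(residuals, p0, method="lm").x
```

For bins with many events the deviance residuals approach the usual Gaussian ones, so this reduces to ordinary nonlinear least squares exactly in the regime where that approximation is valid.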
The efficiency of modified jackknife and ridge type regression estimators: a comparison
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2008-09-01
Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well-known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. The GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
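The GRR and JRR estimators can be sketched from their closed forms. The jackknifed expression used below, beta_JRR = (I - (A^-1 K)^2) beta_OLS with A = X'X + K, is the commonly cited form and should be treated as an assumption of this sketch rather than the paper's exact definition.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def generalized_ridge(X, y, K):
    """GRR estimator: (X'X + K)^{-1} X'y with a shrinkage matrix K;
    K = k*I gives ordinary ridge regression."""
    return np.linalg.solve(X.T @ X + K, X.T @ y)

def jackknifed_ridge(X, y, K):
    """JRR estimator, beta_JRR = (I - (A^{-1}K)^2) beta_OLS, A = X'X + K:
    a bias-reduced version of GRR (form assumed, see lead-in)."""
    A = X.T @ X + K
    B = np.linalg.solve(A, K)          # A^{-1} K
    p = X.shape[1]
    return (np.eye(p) - B @ B) @ ols(X, y)
```

Since GRR shrinks by (I - A^-1 K) while JRR shrinks by the squared factor, JRR's bias is smaller whenever the shrinkage operator has norm below one, which is the trade-off the abstract describes.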
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...
Efficient estimation of time-mean states of ocean models using 4D-Var and implicit time-stepping
Terwisscha van Scheltinga, A.D.; Dijkstra, H.A.
2007-01-01
We propose an efficient method for estimating a time-mean state of an ocean model subject to given observations using implicit time-stepping. The new method uses (i) an implicit implementation of the 4D-Var method to fit the model trajectory to the observations, and (ii) a preprocessor which applies
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...
Chaves, A S; Nascimento, M L; Tullio, R R; Rosa, A N; Alencar, M M; Lanna, D P
2015-10-01
The objective of this study was to examine the relationship of efficiency indices with performance, heart rate, oxygen consumption, blood parameters, and estimated heat production (EHP) in Nellore steers. Eighteen steers were individually lot-fed diets of 2.7 Mcal ME/kg DM for 84 d. Estimated heat production was determined using the oxygen pulse (OP) methodology, in which heart rate (HR) was monitored for 4 consecutive days. Oxygen pulse was obtained by simultaneously measuring HR and oxygen consumption during a 10- to 15-min period. Efficiency traits studied were feed efficiency (G:F) and residual feed intake obtained by regression of DMI in relation to ADG and midtest metabolic BW (RFI); alternatively, residual feed intake was also obtained based on equations reported by the NRC (1996) to estimate individual requirements and DMI (RFI_NRC). The slope of the regression equation and its significance were used to evaluate the effect of the efficiency indices (RFI, RFI_NRC, or G:F) on the traits studied. A mixed model was used considering RFI, RFI_NRC, or G:F and pen type as fixed effects and initial age as a covariate. For the HR and EHP variables, day was included as a random effect. There was no relationship between the efficiency indices and back fat depth measured by ultrasound or daily HR and EHP (P > 0.05). Because G:F is obtained in relation to BW, the slope of G:F was positive and significant. Oxygen consumption per beat was not related to G:F; however, it was lower for RFI- and RFI_NRC-efficient steers, and consequently, oxygen volume (mL·min^-1·kg^-1) and OP (μL O2·beat^-1·kg^-1) were also lower. G:F-efficient steers showed lower hematocrit and hemoglobin concentrations. Differences in oxygen consumption and OP were detected, indicating that the OP methodology may be useful to predict growth efficiency.
Directory of Open Access Journals (Sweden)
Roliana Ibrahim
2012-09-01
Full Text Available Development effort is an undeniable part of project management which considerably influences the success of a project. Inaccurate and unreliable estimation of effort can easily lead to the failure of a project. Due to their special characteristics, accurate estimation of effort in software projects is a vital management activity that must be carefully done to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is not satisfying, and attempts continue to improve the performance of estimation methods. Prior research conducted in this area has focused on numerical and quantitative approaches, and there are few research works that investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate the situation of an organization in terms of effort estimation. The proposed framework includes various indicators which cover the critical issues in the field of software development effort estimation. Since the capabilities and shortages of organizations for effort estimation are not the same, the proposed indicators can lead to a systematic approach in which the strengths and weaknesses of organizations in the field of effort estimation are discovered.
Horton, G.E.; Dubreuil, T.L.; Letcher, B.H.
2007-01-01
Our goal was to understand movement and its interaction with survival for populations of stream salmonids at long-term study sites in the northeastern United States by employing passive integrated transponder (PIT) tags and associated technology. Although our PIT tag antenna arrays spanned the stream channel (at most flows) and were continuously operated, we are aware that aspects of fish behavior, environmental characteristics, and electronic limitations influenced our ability to detect 100% of the emigration from our stream site. Therefore, we required antenna efficiency estimates to adjust observed emigration rates. We obtained such estimates by testing a full-scale physical model of our PIT tag antenna array in a laboratory setting. From the physical model, we developed a statistical model that we used to predict efficiency in the field. The factors most important for predicting efficiency were external radio frequency signal and tag type. For most sampling intervals, there was concordance between the predicted and observed efficiencies, which allowed us to estimate the true emigration rate for our field populations of tagged salmonids. One caveat is that the model's utility may depend on its ability to characterize external radio frequency signals accurately. Another important consideration is the trade-off between the volume of data necessary to model efficiency accurately and the difficulty of storing and manipulating large amounts of data.
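The efficiency adjustment of observed emigration can be sketched in two steps: predict antenna efficiency from covariates, then divide observed counts by it. The logistic coefficients below are illustrative placeholders, not the study's fitted values.

```python
import math

def predicted_efficiency(rf_noise, tag_is_full_duplex, coef=(2.0, -0.8, 0.6)):
    """Logistic-regression-style efficiency prediction from external RF noise
    and tag type, the two factors the study found most important.
    The coefficients are hypothetical placeholders."""
    b0, b_noise, b_tag = coef
    z = b0 + b_noise * rf_noise + b_tag * (1.0 if tag_is_full_duplex else 0.0)
    return 1.0 / (1.0 + math.exp(-z))

def adjusted_emigration(detected, efficiency):
    """Correct observed emigrant counts for imperfect antenna detection:
    true count ~ detected / efficiency (a Horvitz-Thompson-style correction)."""
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return detected / efficiency
```

Per-interval efficiencies predicted from the interval's RF-noise record would feed directly into the count correction, interval by interval.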
Gibiansky, Leonid; Gibiansky, Ekaterina; Bauer, Robert
2012-02-01
The paper compares performance of Nonmem estimation methods--first order conditional estimation with interaction (FOCEI), iterative two stage (ITS), Monte Carlo importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES), on the simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares standard errors of Nonmem parameter estimates with those predicted by PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new Nonmem 7.2.0 ITS, IMP, IMPMAP, SAEM and BAYES estimation methods were similar to the FOCEI method, although larger deviation from the true parameter values (those used to simulate the data) was observed using the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time by about ten times relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time by 3-5 times without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors with the efficiency (for 12 processor run) in the range of 85-95% for all methods except BAYES, which had parallelization efficiency of about 70%.
Cerasoli, S.; Silva, J. M.; Carvalhais, N.; Correia, A.; Costa e Silva, F.; Pereira, J. S.
2013-12-01
The Light Use Efficiency (LUE) concept is usually applied to retrieve Gross Primary Productivity (GPP) estimates in models integrating spectral indices, namely the Normalized Difference Vegetation Index (NDVI) and Photochemical Reflectance Index (PRI), considered proxies of biophysical properties of vegetation. The integration of spectral measurements into LUE models can increase the robustness of GPP estimates by optimizing particular parameters of the model. NDVI and PRI are frequently obtained by broadband sensors on remote platforms at low spatial resolution (e.g. MODIS). In highly heterogeneous ecosystems such spectral information may not be representative of the dynamic response of the ecosystem to climate variables. In Mediterranean oak woodlands, different plant functional types (PFTs), namely tree canopy, shrubs and the herbaceous layer, contribute to the overall GPP. In situ spectral measurements can provide useful information on each PFT and its temporal variability. The objectives of this study were: i) to analyze the temporal variability of NDVI, PRI and other spectral indices for the three PFTs, their response to climate variables and their relationship with biophysical properties of vegetation; ii) to optimize a LUE model integrating selected spectral indices in which the contribution of each PFT to the overall GPP is estimated individually; iii) to compare the performance of disaggregated GPP estimates and lumped GPP estimates, evaluated against eddy covariance measurements. Ground measurements of vegetation reflectance were performed in a cork oak woodland located in Coruche, Portugal (39°8'N, 8°19'W) where carbon and water fluxes are continuously measured by eddy covariance. Between April 2011 and June 2013 reflectance measurements of the herbaceous layer, shrubs and tree canopy were acquired with a FieldSpec3 spectroradiometer (ASD Inc.) which provided data in the range of 350-2500 nm. Measurements were repeated approximately on
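The LUE approach the abstract builds on can be reduced to GPP = ε × fAPAR × PAR. A minimal sketch follows, with fAPAR approximated by a linear NDVI ramp; every coefficient here is an illustrative assumption, not a fitted parameter of the study.

```python
# Minimal light-use-efficiency GPP model: GPP = eps_max * fAPAR * PAR, with fAPAR
# approximated by a linear NDVI ramp. All coefficient values are illustrative
# assumptions, not the fitted parameters of the study.
def gpp_lue(par, ndvi, eps_max=1.8, ndvi_min=0.1, ndvi_max=0.9):
    fapar = max(0.0, min(1.0, (ndvi - ndvi_min) / (ndvi_max - ndvi_min)))
    return eps_max * fapar * par   # gC MJ^-1 * MJ m^-2 d^-1 -> gC m^-2 d^-1

print(round(gpp_lue(par=8.0, ndvi=0.7), 2))   # 10.8
```

A per-PFT disaggregation, as in objective ii), would simply sum such terms with PFT-specific ε and fAPAR.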
SIMPLE AND EFFICIENT SPACE-TIME CHANNEL AND DOA ESTIMATION TECHNIQUES IN TD-SCDMA SYSTEMS
Institute of Scientific and Technical Information of China (English)
Li Ping'an; Ma Ning
2006-01-01
In this paper, a simple method is presented for multi-user space-time channel estimation in Time Division-Synchronized Code Division Multiple Access (TD-SCDMA) systems. The method is based on a specific midamble assignment strategy, which results in a cyclic Toeplitz midamble-matrix in the linear equation of the received data vectors. A Fast Fourier Transform (FFT)-based algorithm is used to obtain the estimate of the uplink multi-user space-time channels. Furthermore, the estimated space-time channel is applied to the identification of multi-paths for each user, and Direction Of Arrival (DOA) estimation for each path is carried out by using the extracted spatial signature vector. Aside from the simplicity in computation, the proposed direction of arrival estimation method can effectively resolve multi-paths regardless of the correlation and angle separations of the multi-paths.
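The key computational point above is that a cyclic Toeplitz midamble matrix turns channel estimation into circular deconvolution, solvable with FFTs. The sketch below illustrates this, assuming a hypothetical CAZAC training sequence and toy sizes rather than the actual TD-SCDMA midamble codes.

```python
import numpy as np

P, W = 16, 4   # midamble period and channel length (illustrative, not TD-SCDMA's actual sizes)
n = np.arange(P)
m = np.exp(-1j * np.pi * n * n / P)          # CAZAC training sequence: flat-magnitude spectrum
h = np.array([0.9, 0.4 - 0.2j, 0.1, 0.05])   # unknown channel to be recovered

# The cyclic midamble structure makes the received training block a circular
# convolution of the known sequence with the channel.
r = np.fft.ifft(np.fft.fft(m) * np.fft.fft(h, P))

# FFT-based solution of the cyclic Toeplitz system: O(P log P) instead of O(P^3).
h_hat = np.fft.ifft(np.fft.fft(r) / np.fft.fft(m))[:W]

print(np.max(np.abs(h_hat - h)))   # ~0 in this noiseless sketch
```

With noise added to `r`, the same division yields a least-squares-style estimate whose quality depends on the training sequence's spectral flatness, which is why constant-magnitude sequences are attractive.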
Institute of Scientific and Technical Information of China (English)
FAN JianQing; ZHOU Yong; CAI JianWen; CHEN Min
2009-01-01
Multivariate failure time data arise frequently in survival analysis. A commonly used technique is the working independence estimator for marginal hazard models. Two natural questions are how to improve the efficiency of the working independence estimator and how to identify the situations under which such an estimator has high statistical efficiency. In this paper, three weighted estimators are proposed based on three different optimal criteria in terms of the asymptotic covariance of weighted estimators. Simplified closed-form solutions are found, which always outperform the working independence estimator. We also prove that the working independence estimator has high statistical efficiency when the asymptotic covariance of the derivatives of the partial log-likelihood functions is nearly exchangeable or diagonal. Simulations are conducted to compare the performance of the weighted estimators and the working independence estimator. A data set from the Busselton population health surveys is analyzed using the proposed estimators.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
I.P. van Staveren (Irene)
2009-01-01
textabstractThe dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources
Directory of Open Access Journals (Sweden)
Dayan A. Guimarães
2014-10-01
Full Text Available We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results show that the so-called norm-based improved MDL (iMDL) algorithm can achieve performances better than those achieved by the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently-proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimators in some situations.
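The baseline MDL estimator that the iMDL algorithm takes as input can be sketched as follows, using the classic Wax-Kailath-style criterion on the eigenvalues of the sample covariance matrix; array geometry, SNR and sizes here are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
p, d, N = 8, 2, 1000   # sensors, true number of sources, snapshots (all hypothetical)

# Two uncorrelated narrowband sources on a half-wavelength uniform linear array.
angles = np.deg2rad([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(angles)))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
X = A @ S + noise

lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]   # descending eigenvalues

def mdl(k):
    """Classic MDL criterion for k candidate sources: likelihood term + code length."""
    tail = lam[k:]
    log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))   # log(GM/AM) of noise eigenvalues
    return -N * (p - k) * log_ratio + 0.5 * k * (2 * p - k) * np.log(N)

d_hat = int(np.argmin([mdl(k) for k in range(p)]))
print(d_hat)   # 2 with these settings
```

The paper's refinement then post-processes this estimate using norms of scaled-eigenvalue vectors; the sketch above only reproduces the standard MDL input stage.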
Using Data Envelopment Analysis approach to estimate the health production efficiencies in China
Institute of Scientific and Technical Information of China (English)
ZHANG Ning; HU Angang; ZHENG Jinghai
2007-01-01
By using the Data Envelopment Analysis approach, we treat the health production system in a given province as a Decision Making Unit (DMU), identify its inputs and outputs, evaluate its technical efficiency in 1982, 1990 and 2000 respectively, and further analyze the relationship between efficiency scores and social-environmental variables. Several interesting findings emerge. Firstly, the provinces on the frontier differ from year to year, but the provinces far from the frontier remain unchanged; the average efficiency of health production made significant progress from 1982 to 2000. Secondly, all provinces in China can be divided into six categories in terms of health production outcome and efficiency, and each category has a specific approach to improving health production efficiency. Thirdly, significant differences in health production efficiency are found among the eastern, middle and western regions of China, and between the eastern and middle regions. Finally, there is a significant positive relationship between population density and health production efficiency, but a negative (though not very significant) relationship between the proportion of public health expenditure in total expense and efficiency; this may result from an inappropriate allocation of public expenditure. The relationship between the ability to pay for health care services and efficiency in urban areas is opposite to that in rural areas; one possible reason is the very different income and public-service treatment of rural and urban residents. Therefore, it is necessary to adjust health policies and service provisions so that they are specifically designed for different population groups.
An Efficient Algorithm for Contact Angle Estimation in Molecular Dynamics Simulations
Sumith YD
2015-01-01
It is important to determine the contact angle of a liquid in order to understand its wetting properties, capillarity, and surface interaction energy with a surface. Estimating the contact angle from Non-Equilibrium Molecular Dynamics (NEMD), where changes in the contact angle must be tracked over a period of time, is challenging compared to estimation from a single image in an experimental measurement. Often such molecular simulations involve a finite number of molecules above some metallic or non-meta...
Ogawa, Akira; Iwanami, Tetzuya; Shono, Hideki
1997-03-01
In order to estimate the cut-size Xc and the mechanically balanced particles in the axial flow cyclone with the slit-separation method, the tangential velocity distributions were calculated by the finite difference method. Comparison of the calculated total collection efficiency with the experimental results showed that the calculated values were slightly higher, due to re-entrainment of the collected particles by turbulence. No effect of the slit in promoting the collection efficiency was observed.
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
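The core computation behind such a toolbox is maximum-likelihood fitting of a psychometric function to trial-by-trial responses. The sketch below illustrates this with a brute-force grid search in Python rather than MATLAB, and with a hypothetical 2AFC logistic parameterization; it is not the UML procedure itself, which updates stimulus placement adaptively.

```python
import numpy as np

rng = np.random.default_rng(2)

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """2AFC logistic: guess rate gamma, lapse rate lam (hypothetical parameterization)."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-beta * (x - alpha)))

# Simulate trials from a "true" observer.
true_alpha, true_beta = 0.0, 2.0
x = rng.uniform(-3, 3, size=2000)                 # stimulus levels
resp = rng.random(2000) < psychometric(x, true_alpha, true_beta)

# Grid-search maximum likelihood over threshold (alpha) and slope (beta).
alphas = np.linspace(-1, 1, 81)
betas = np.linspace(0.5, 4, 71)
best, best_ll = None, -np.inf
for a in alphas:
    for b in betas:
        p = psychometric(x, a, b)
        ll = np.sum(np.log(np.where(resp, p, 1 - p)))  # Bernoulli log likelihood
        if ll > best_ll:
            best, best_ll = (a, b), ll

print(best)   # close to (0.0, 2.0)
```

An adaptive procedure such as UML replaces the random stimulus levels with placements chosen to maximize the expected information about (alpha, beta, lam) after each trial.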
Estimation of Margins and Efficiency in the Ghanaian Yam Marketing Chain
Robert Aidoo; Fred Nimoh; John-Eudes Andivi Bakang; Kwasi Ohene-Yankyera; Simon Cudjoe Fialor; James Osei Mensah; Robert Clement Abaidoo
2012-01-01
The main objective of the paper was to examine the costs, returns and efficiency levels obtained by key players in the Ghanaian yam marketing chain. A total of 320 players/actors (farmers, wholesalers, retailers and cross-border traders) in the Ghanaian yam industry were selected from four districts (Techiman, Atebubu, Ejura-Sekyedumasi and Nkwanta) through a multi-stage sampling approach for the study. In addition to descriptive statistics, gross margin, net margin and marketing efficiency a...
Tetrastyryl-BODIPY-based dendritic light harvester and estimation of energy transfer efficiency.
Kostereli, Ziya; Ozdemir, Tugba; Buyukcakir, Onur; Akkaya, Engin U
2012-07-20
Versatile BODIPY dyes can be transformed into bright near-IR-emitting fluorophores by quadruple styryl substitutions. When clickable functionalities on the styryl moieties are inserted, an efficient synthesis of a light harvester is possible. In addition, clear spectral evidence is presented showing that, in dendritic light harvesters, calculations commonly based on quantum yield or emission lifetime changes of the donor are bound to yield large overestimations of energy transfer efficiency.
Directory of Open Access Journals (Sweden)
Pin-Chih Wang
2014-09-01
Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, the Netherlands, the United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift, or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.
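The IPAT decomposition used above is a multiplicative identity, which the following sketch makes concrete; all numbers are assumed for illustration and are not the study's Taiwanese data.

```python
# The IPAT identity I = P * A * T: environmental impact (here, domestic material
# consumption) = population * affluence (GDP per capita) * technology (material
# intensity, DMC per unit GDP). All numbers below are assumed for illustration.
P0, A0, T0 = 20.4e6, 10_000.0, 0.80   # base year
P1, A1, T1 = 23.0e6, 18_000.0, 0.55   # later year

I0, I1 = P0 * A0 * T0, P1 * A1 * T1

# Multiplicative decomposition: the overall change factors exactly into the
# population, affluence and technology contributions.
growth = I1 / I0
print(round(growth, 3), round((P1 / P0) * (A1 / A0) * (T1 / T0), 3))
```

Decoupling in the abstract's sense corresponds to the technology factor T1/T0 falling fast enough that impact grows more slowly than GDP (P × A).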
Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation
Directory of Open Access Journals (Sweden)
M. Agatonović
2012-12-01
Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector, while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
Directory of Open Access Journals (Sweden)
D. O. Fuller
2012-11-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based carbon dioxide eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 yr of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger-scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first-ever tower-based estimates of mangrove forest RE derived from night-time CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines by 5% for each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function that exhibits spatial aliasing is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC algorithm.
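For reference, the full-spectral-search MUSIC baseline whose cost the paper compresses can be sketched as follows; array size, SNR and the search grid are assumed for illustration, and the sector-compression step itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
p, d, N = 10, 2, 500                      # sensors, sources, snapshots (illustrative)
true_deg = np.array([-20.0, 30.0])

# Half-wavelength ULA snapshots with additive complex noise.
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))
X = A @ S + 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))

R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, : p - d]                        # noise subspace (smallest p-d eigenvectors)

grid = np.linspace(-90.0, 90.0, 3601)     # the full spectral search the paper compresses
a = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(np.deg2rad(grid))))
spec = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)   # MUSIC pseudospectrum

# Pick the two strongest local maxima as DOA estimates.
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
cand = grid[1:-1][is_peak]
doas = np.sort(cand[np.argsort(spec[1:-1][is_peak])[-2:]])
print(doas)   # close to [-20, 30]
```

The proposed method's saving comes from evaluating such a pseudospectrum only over a limited angular sector and resolving the induced mirror-angle ambiguity afterwards.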
Efficient focusing scheme for transverse velocity estimation using cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
The blood velocity can be estimated by cross-correlation of received RF signals, but only the velocity component along the beam direction is found. A previous paper showed that the complete velocity vector can be estimated, if received signals are focused along lines parallel to the direction...... simulations with Field II. A 64-element, 5 MHz linear array was used. A parabolic velocity profile with a peak velocity of 0.5 m/s was considered for different angles between the flow and the ultrasound beam and for different emission foci. At 60 degrees the relative standard deviation was 0.58 % for a transmit...
Institute of Scientific and Technical Information of China (English)
QU Annie; XUE Lan
2009-01-01
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase the efficiency of estimation. However, for multivariate survival data this is extremely challenging, since the martingale residual is involved and often intractable. Fan et al. have made a significant contribution by giving a closed-form formula for the optimal weights of the estimating functions such that the asymptotic variance of the estimator is minimized. Since minimizing the variance matrix is not an easy task, several strategies are proposed, such as minimizing the total variance. The most feasible one is to use the diagonal matrix entries as the weighting scheme. We congratulate them on this important work. In the following we discuss the implementation of their method and relate our work to theirs.
Directory of Open Access Journals (Sweden)
Toly Chen
2014-08-01
Full Text Available Cycle time management plays an important role in improving the performance of a wafer fabrication factory. It starts from the estimation of the cycle time of each job in the factory. Although this topic has been widely investigated, several issues still need to be addressed, such as how to classify jobs suitable for the same estimation mechanism into the same group. In most existing methods, jobs are classified according to their attributes; however, the differences between the attributes of two jobs may not be reflected in their cycle times. The bi-objective nature of the classification and regression tree (CART) makes it especially suitable for tackling this problem. However, in CART, the cycle times of the jobs of a branch are all estimated with the same value, which is far from accurate. For this reason, this study proposes a joint use of principal component analysis (PCA), CART, and back propagation networks (BPNs), in which PCA is applied to construct a series of linear combinations of the original variables to form new variables that are as unrelated to each other as possible. According to the new variables, jobs are classified using CART before estimating their cycle times with BPNs. A real case was used to evaluate the effectiveness of the proposed methodology. The experimental results supported the superiority of the proposed methodology over some existing methods. In addition, the managerial implications of the proposed methodology are discussed with an example.
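The pipeline above (decorrelate with PCA, split jobs into groups, then fit a separate predictor per group) can be sketched with numpy alone; a single threshold split and per-branch linear fits stand in for CART and the BPNs, and all data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
# Hypothetical job data: four correlated attributes driven by one latent factor z,
# and a cycle time that is piecewise-linear in z (two "job groups").
z = rng.standard_normal(n)
X = np.column_stack([z + 0.1 * rng.standard_normal(n) for _ in range(4)])
y = np.where(z > 0, 5 + 2 * z, 5 - z) + 0.05 * rng.standard_normal(n)

# PCA via SVD of the centered data: new variables that are mutually uncorrelated.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]   # first principal-component score

# One CART-style split on PC1, then a separate linear fit per branch --
# a deliberately simplified stand-in for the per-branch BPNs of the paper.
pred = np.empty(n)
for mask in (pc1 > 0, pc1 <= 0):
    F = np.column_stack([np.ones(mask.sum()), pc1[mask]])
    coef, *_ = np.linalg.lstsq(F, y[mask], rcond=None)
    pred[mask] = F @ coef

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)   # small: the grouped model tracks both regimes
```

A single global linear fit on the same data would miss the regime change, which is the weakness of branch-constant CART estimates that the paper addresses by attaching a BPN to each branch.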
Jiang, George J.; Sluis, Pieter J. van der
1999-01-01
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using th
Budka, Marcin; Gabrys, Bogdan
2013-01-01
Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.
Direct and efficient stereological estimation of total cell quantities using electron microscopy
DEFF Research Database (Denmark)
Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2006-01-01
and local stereological probes through arbitrarily fixed points for estimation of total quantities inside cells are presented. The quantities comprise (total) number, length, surface area, volume or 3D spatial distribution for organelles as well as total amount of gold particles, various compounds...
Directory of Open Access Journals (Sweden)
Northcutt Sally L
2010-04-01
Full Text Available Abstract Background Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. Conclusions This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
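A genomic relationship matrix of the kind used above is commonly built VanRaden-style from centered genotype counts. The sketch below shows the construction on simulated genotypes; the sizes and allele frequencies are illustrative assumptions, far smaller than the paper's 41,028-SNP panel.

```python
import numpy as np

rng = np.random.default_rng(5)
n_animals, n_snps = 50, 5000   # illustrative sizes, far below the paper's panel

# Genotypes coded as 0/1/2 copies of the minor allele, drawn under Hardy-Weinberg.
p = rng.uniform(0.05, 0.5, size=n_snps)            # allele frequencies
M = rng.binomial(2, p, size=(n_animals, n_snps)).astype(float)

# VanRaden (2008)-style genomic relationship matrix: G = Z Z' / (2 * sum p(1-p)),
# with Z the genotype matrix centered by twice the allele frequency.
Z = M - 2 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1 - p)))

print(round(float(np.mean(np.diag(G))), 3))   # diagonal averages near 1 for unrelated animals
```

G then replaces the pedigree-based numerator relationship matrix in the mixed-model equations, which is what lets breeding values be estimated without pedigree links between populations.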
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are known as systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images were constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms was investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although the systematic errors in the measurement are large. No algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed, in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
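The simplest of the peak-finding algorithms compared above, the three-point parabola fit, can be sketched as follows on a synthetic correlation curve; the Gaussian peak shape and its width are assumptions for illustration.

```python
import numpy as np

# Parabolic three-point interpolation around the integer correlation maximum --
# the simplest of the peak-finding algorithms discussed (parabola fit).
def subpixel_peak(c):
    k = int(np.argmax(c))
    cm, c0, cp = c[k - 1], c[k], c[k + 1]
    return k + 0.5 * (cm - cp) / (cm - 2 * c0 + cp)

# Synthetic correlation function: a Gaussian peak at a known fractional position.
true_pos = 10.3
x = np.arange(21)
corr = np.exp(-((x - true_pos) ** 2) / (2 * 1.5 ** 2))

est = subpixel_peak(corr)
print(est)   # ~10.3, with a small residual (pixel-locking) bias
```

Because the parabola only approximates the true peak shape, the recovered position is pulled slightly toward the nearest integer pixel, which is exactly the systematic bias the abstract analyses.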
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
DEFF Research Database (Denmark)
Madsen, U.; Aubertin, G.; Breum, N. O.;
Numerical modelling of direct capture efficiency of a local exhaust is used to compare the tracer gas technique of a proposed CEN standard against a more consistent approach based on an imaginary control box. It is concluded that the tracer gas technique is useful for field applications....
Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels
Wei, Shuangqing; Iyengar, Sitharama; Rao, Nageswara S
2007-01-01
The optimality of uncoded transmission in estimating Gaussian sources over homogeneous/symmetric Gaussian multiple access channels (MAC) using multiple sensors has recently been shown. It remains unclear, however, whether it still holds for arbitrary networks and/or at high channel signal-to-noise ratio (SNR) and high signal-to-measurement-noise ratio (SMNR). In this paper, we first provide a joint source and channel coding approach to estimating Gaussian sources over Gaussian MAC channels, as well as its sufficient and necessary condition for restoring Gaussian sources with a prescribed distortion value. An interesting relationship between our proposed joint approach and a more straightforward separate source and channel coding scheme is then established. We then formulate constrained power minimization problems and transform them into relaxed convex geometric programming problems, whose numerical results exhibit that either the separate or the uncoded scheme becomes dominant over a linear topology network. In ...
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin
2013-01-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may significantly improve the convergence of expectation values of equilibrium probabilities and of other stationary quantities. Unfortunately, the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo...
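The basic objects above, a transition matrix estimated from counts and a detailed-balance (reversibility) constraint, can be illustrated as follows. The count-symmetrization step is a crude stand-in for the paper's constrained maximum likelihood and Monte Carlo estimators, and the count matrix is an assumed toy example.

```python
import numpy as np

# Hypothetical transition counts from a discretized trajectory (3 metastable states).
C = np.array([[900., 50., 5.],
              [45., 800., 30.],
              [4., 35., 600.]])

# Non-reversible maximum-likelihood estimate: row-normalized counts.
T_mle = C / C.sum(axis=1, keepdims=True)

# Simple reversible estimate: symmetrize the counts, then row-normalize.
# This enforces detailed balance exactly, but is only a crude stand-in for the
# constrained estimators (fixed stationary distribution) developed in the paper.
Cs = 0.5 * (C + C.T)
T_rev = Cs / Cs.sum(axis=1, keepdims=True)

# Detailed balance check: pi_i * T_ij == pi_j * T_ji with pi from symmetrized counts.
pi = Cs.sum(axis=1) / Cs.sum()
flux = pi[:, None] * T_rev
print(np.allclose(flux, flux.T))   # True: the estimate is reversible
```

The paper's contribution is to keep the likelihood interpretation while constraining the stationary distribution to a priori known equilibrium probabilities, which the symmetrization shortcut does not do.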
Kovtun, Yu V; Skibenko, A I; Yuferov, V B
2012-01-01
The processes of injection of a sputtered-and-ionized working material into the pulsed reflex discharge plasma have been considered at the initial stage of dense gas-metal plasma formation. A calculation model has been proposed to estimate the parameters of the sputtering mechanism for the required working material to be injected into the discharge. The data obtained are in good agreement with experimental results.
New FPSoC-based architecture for efficient FSBM motion estimation processing in video standards
Canals, J. A.; Martínez, M. A.; Ballester, F. J.; Mora, A.
2007-05-01
Due to the timing constraints of real-time video encoding, hardware accelerator cores are used for video compression. System-on-Chip (SoC) design tools offer complex microprocessor system design methodologies with easy Intellectual Property (IP) core integration. This paper presents a PowerPC-based SoC with a motion-estimation accelerator core attached to the system bus. Motion-estimation (ME) algorithms are the most critical part of video compression due to the huge amount of data transfers and processing time. The main goal of the proposed architecture is to minimize the number of memory accesses by exploiting the bandwidth of a direct memory connection. The architecture has been developed using Xilinx XPS, a SoC platform design tool. The results show that the system is able to process the integer-pixel full search block matching (FSBM) motion estimation and interframe mode decision of a QCIF frame (176×144 pixels), using a 48×48 pixel search window, with an embedded PowerPC in a Xilinx Virtex-4 FPGA running at 100 MHz, in 1.5 ms, i.e. 4.5% of the total processing time available at 30 fps.
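The FSBM computation that such an accelerator parallelizes is, at its core, an exhaustive SAD minimization over all candidate motion vectors. A plain-software sketch on synthetic frames (illustrative only, not the paper's hardware):

```python
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n-by-n block of the
    current frame at (bx, by) and the reference-frame block displaced
    by the candidate motion vector (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def full_search(cur, ref, bx, by, n, r):
    """Exhaustively test every motion vector in a +/-r window and
    return (best_vector, best_sad)."""
    best, best_sad = (0, 0), float('inf')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            s = sad(cur, ref, bx, by, dx, dy, n)
            if s < best_sad:
                best_sad, best = s, (dx, dy)
    return best, best_sad

# Synthetic frames: the current frame is the reference shifted by (2, 1),
# so the true motion vector of any interior block is (2, 1) with SAD 0.
ref = [[7 * x + 13 * y for x in range(16)] for y in range(16)]
cur = [[7 * (x + 2) + 13 * (y + 1) for x in range(16)] for y in range(16)]
vec, best_sad = full_search(cur, ref, 4, 4, 4, 2)
```

The hardware's job is to stream the reference window once and reuse it across all candidate vectors, which is what the direct memory connection mentioned above enables.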
Thermal efficiency and particulate pollution estimation of four biomass fuels grown on wasteland
Energy Technology Data Exchange (ETDEWEB)
Kandpal, J.B.; Madan, M. [Indian Inst. of Tech., New Delhi (India). Centre for Rural Development and Technology
1996-10-01
The thermal performance and concentration of suspended particulate matter were studied for 1-hour combustion of four biomass fuels, namely Acacia nilotica, Leucaena leucocephala, Jatropha curcas, and Morus alba, grown on wasteland. Among the four biomass fuels, the highest thermal efficiency was achieved with Acacia nilotica. The suspended particulate matter concentration for 1-hour combustion of the four biomass fuels ranged between 850 and 2,360 µg/m³.
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
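The constant-property model referred to above has a simple closed form: the maximum efficiency is the Carnot factor times a reduction factor depending on ZT at the mean operating temperature. A small sketch with illustrative values (not taken from the paper):

```python
from math import sqrt

def constant_property_efficiency(zt_avg, t_hot, t_cold):
    """Maximum thermoelectric conversion efficiency in the constant-
    property model: Carnot factor times a ZT-dependent reduction
    factor, with ZT evaluated at the mean operating temperature."""
    carnot = (t_hot - t_cold) / t_hot
    m = sqrt(1.0 + zt_avg)          # optimal load-resistance ratio
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Example: ZT = 1 between a 600 K hot side and a 300 K cold side.
eta = constant_property_efficiency(1.0, 600.0, 300.0)
```

Here eta is roughly 0.108, a little over a fifth of the 0.5 Carnot limit, which is the kind of figure the constant-property model produces before the cumulative/average-property corrections discussed above are applied.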
Thornburg, Jonathan
2010-01-01
If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...
Liénard, Jean; Lynn, Kendra; Strigul, Nikolay; Norris, Benjamin K.; Gatziolis, Demetrios; Mullarney, Julia C.; Bryan, Karin R.; Henderson, Stephen M.
2016-09-01
Aquatic vegetation can shelter coastlines from energetic waves and tidal currents, sometimes enabling accretion of fine sediments. Simulation of flow and sediment transport within submerged canopies requires quantification of vegetation geometry. However, field surveys used to determine vegetation geometry can be limited by the time required to obtain conventional caliper and ruler measurements. Building on recent progress in photogrammetry and computer vision, we present a method for reconstructing three-dimensional canopy geometry. The method was used to survey a dense canopy of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Photogrammetric estimation of geometry required 1) taking numerous photographs at low tide from multiple viewpoints around 1 m² quadrats, 2) computing relative camera locations and orientations by triangulation of key features present in multiple images and reconstructing a dense 3D point cloud, and 3) extracting pneumatophore locations and diameters from the point cloud data. Step 3) was accomplished by a new 'sector-slice' algorithm, yielding geometric parameters every 5 mm along a vertical profile. Photogrammetric analysis was compared with manual caliper measurements. In all 5 quadrats considered, agreement was found between manual and photogrammetric estimates of stem number, and of number × mean diameter, which is a key parameter appearing in hydrodynamic models. In two quadrats, pneumatophores were encrusted with numerous barnacles, generating a complex geometry not resolved by hand measurements. In the remaining cases, moderate agreement between manual and photogrammetric estimates of stem diameter and solid volume fraction was found. By substantially reducing measurement time in the field while capturing the 3D structure in greater detail, photogrammetry has the potential to improve input to hydrodynamic models, particularly for simulations of flow through large-scale, heterogeneous canopies.
Directory of Open Access Journals (Sweden)
Pioz Maryline
2011-04-01
Full Text Available Abstract Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SARerr model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SARerr model, which led to a trend SARerr model. Overall, BT spread from north-eastern to south-western France. The average trend SARerr-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SARerr-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as the trend SARerr models are powerful tools to provide information on direction and speed of disease diffusion when the only data available are date and location of cases.
Pioz, Maryline; Guis, Hélène; Calavas, Didier; Durand, Benoît; Abrial, David; Ducrot, Christian
2011-04-20
Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SAR(err) model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SAR(err) model, which led to a trend SAR(err) model. Overall, BT spread from north-eastern to south-western France. The average trend SAR(err)-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SAR(err)-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as the trend SAR(err) models are powerful tools to provide information on direction and speed of disease diffusion when the only data available are date and location of cases.
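A minimal illustration of the TSA step on synthetic data (first-order surface only, without the SAR(err) correction for spatial autocorrelation): regress the date of first infection on location, then read the front speed off the gradient of the fitted surface, since a front advancing at speed v accumulates 1/v days per unit distance along its direction of travel.

```python
def plane_fit_velocity(xs, ys, ts):
    """First-order trend-surface fit t = a + b*x + c*y by least squares,
    returning the implied front speed 1 / ||(b, c)||.
    Assumes the x and y sampling locations are uncorrelated (e.g. a
    regular grid), so the two slopes decouple into simple regressions."""
    n = len(ts)
    mx, my, mt = sum(xs) / n, sum(ys) / n, sum(ts) / n
    b = sum((x - mx) * (t - mt) for x, t in zip(xs, ts)) \
        / sum((x - mx) ** 2 for x in xs)
    c = sum((y - my) * (t - mt) for y, t in zip(ys, ts)) \
        / sum((y - my) ** 2 for y in ys)
    return (b * b + c * c) ** -0.5   # km/day if x, y in km and t in days

# Synthetic outbreak advancing along the (1, 1) diagonal at 5 km/day:
# the infection date grows by 1 day per 5 km travelled along the front.
grid = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
xs = [p[0] for p in grid]
ys = [p[1] for p in grid]
ts = [(x + y) / (5.0 * 2 ** 0.5) for x, y in zip(xs, ys)]
speed = plane_fit_velocity(xs, ys, ts)
```

On real case data the residuals are spatially autocorrelated, which is exactly why the authors combine the TSA surface with a SAR(err) error model rather than using ordinary least squares as above.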
Light-efficient, quantum-limited interferometric wavefront estimation by virtual mode sensing.
Lauterbach, Marcel A; Ruckel, Markus; Denk, Winfried
2006-05-01
We describe and analyze an interferometer-based virtual modal wavefront sensor (VMWS) that can be configured to measure, for example, Zernike coefficients directly. This sensor is particularly light efficient because the determination of each modal coefficient benefits from all the available photons. Numerical simulations show that the VMWS outperforms state-of-the-art phase unwrapping at low light levels. Including up to Zernike mode 21, aberrations can be determined with a precision of about 0.17 rad (lambda/37) using low resolution (65 x 65 pixels) images and only about 400 photons total.
Absolute efficiency estimation of photon-number-resolving detectors using twin beams
Worsley, A P; Lundeen, J S; Mosley, P J; Smith, B J; Puentes, G; Thomas-Peter, N; Walmsley, I A; 10.1364/OE.17.004397
2009-01-01
A nonclassical light source is used to demonstrate experimentally the absolute efficiency calibration of a photon-number-resolving detector. The photon-pair detector calibration method developed by Klyshko for single-photon detectors is generalized to take advantage of the higher dynamic range and additional information provided by photon-number-resolving detectors. This enables the use of brighter twin-beam sources including amplified pulse pumped sources, which increases the relevant signal and provides measurement redundancy, making the calibration more robust.
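The underlying Klyshko method for single-photon detectors can be sketched in a few lines: because twin-beam photons come in pairs, every idler click heralds a signal photon, so the coincidence-to-herald ratio estimates the signal-arm efficiency with no pre-calibrated reference. A toy simulation (the photon-number-resolving generalization of the paper is not reproduced, and losses other than detector efficiency are ignored):

```python
import random

def klyshko_calibration(n_pairs, eta_signal, eta_idler, seed=1):
    """Simulate photon-pair calibration: each generated pair is
    detected independently in each arm; the ratio of coincidences to
    idler counts estimates the signal detector efficiency."""
    rng = random.Random(seed)
    idler_counts = coincidences = 0
    for _ in range(n_pairs):
        idler_click = rng.random() < eta_idler
        signal_click = rng.random() < eta_signal
        if idler_click:
            idler_counts += 1
            if signal_click:
                coincidences += 1
    return coincidences / idler_counts

eta_hat = klyshko_calibration(200000, eta_signal=0.35, eta_idler=0.6)
```

Note that the estimate of the signal efficiency does not depend on the idler efficiency, which is the key property that makes the calibration absolute.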
Institute of Scientific and Technical Information of China (English)
史旭华; 钱锋
2012-01-01
Based on immune mechanisms and multi-agent technology, a multi-agent artificial immune network (Maopt-aiNet) algorithm is introduced. Maopt-aiNet makes use of the agents' ability to sense and act in order to overcome the premature-convergence problem, and combines global and local search in the searching process. The performance of the proposed method is examined on 6 benchmark problems and compared with other well-known intelligent algorithms. The experiments show that Maopt-aiNet outperforms the other algorithms on these benchmark functions. Furthermore, Maopt-aiNet is applied to determine the Murphree efficiency of a distillation column, and satisfactory results are obtained.
Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong
2015-12-01
A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB.
Tian, Guo-Liang; Tang, Man-Lai; Fang, Hong-Bin; Tan, Ming
2008-03-15
Fitting logistic regression models is challenging when their parameters are restricted. In this article, we first develop a quadratic lower-bound (QLB) algorithm for optimization with box or linear inequality constraints and derive the fastest QLB algorithm corresponding to the smallest global majorization matrix. The proposed QLB algorithm is particularly suited to problems to which EM-type algorithms are not applicable (e.g., logistic, multinomial logistic, and Cox's proportional hazards models) while it retains the same EM ascent property and thus assures the monotonic convergence. Secondly, we generalize the QLB algorithm to penalized problems in which the penalty functions may not be totally differentiable. The proposed method thus provides an alternative algorithm for estimation in lasso logistic regression, where the convergence of the existing lasso algorithm is not generally ensured. Finally, by relaxing the ascent requirement, convergence speed can be further accelerated. We introduce a pseudo-Newton method that retains the simplicity of the QLB algorithm and the fast convergence of the Newton method. Theoretical justification and numerical examples show that the pseudo-Newton method is up to 71 (in terms of CPU time) or 107 (in terms of number of iterations) times faster than the fastest QLB algorithm and thus makes bootstrap variance estimation feasible. Simulations and comparisons are performed and three real examples (Down syndrome data, kyphosis data, and colon microarray data) are analyzed to illustrate the proposed methods.
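The flavor of the QLB idea can be seen in the simplest unconstrained case: for logistic regression the negative log-likelihood Hessian X'diag(p(1-p))X is majorized by the fixed matrix B = X'X/4, giving the monotone MM update beta <- beta + 4(X'X)^-1 X'(y - p). A one-parameter sketch with illustrative data (not from the article, and without the constraints or penalties it treats):

```python
from math import exp, log

def qlb_logistic(x, y, iters=200):
    """One-parameter logistic regression P(y=1) = sigmoid(beta * x),
    fitted with the quadratic lower-bound (QLB/MM) update
        beta <- beta + 4 * sum(x * (y - p)) / sum(x * x).
    Since p(1-p) <= 1/4, B = sum(x*x)/4 majorizes the Hessian, so each
    step is guaranteed not to decrease the log-likelihood."""
    beta = 0.0
    sxx = sum(v * v for v in x)
    ll_trace = []
    for _ in range(iters):
        p = [1.0 / (1.0 + exp(-beta * v)) for v in x]
        ll_trace.append(sum(yi * log(pi) + (1 - yi) * log(1 - pi)
                            for yi, pi in zip(y, p)))
        beta += 4.0 * sum(v * (yi - pi)
                          for v, yi, pi in zip(x, y, p)) / sxx
    return beta, ll_trace

x = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 1, 0, 1, 1]
beta, ll = qlb_logistic(x, y)
```

The monotone ascent visible in `ll` is the EM-like property the article refers to; its pseudo-Newton acceleration trades this fixed majorizer for faster steps.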
Directory of Open Access Journals (Sweden)
Ian G Handel
Full Text Available Current post-epidemic sero-surveillance uses random selection of animal holdings. A better strategy may be to estimate the benefit gained by sampling each farm and use this to target selection. In this study we estimate the probability of undiscovered infection for sheep farms in Devon after the 2001 foot-and-mouth disease outbreak using the combination of a previously published model of daily infection risk and a simple model of the probability of discovery of infection during the outbreak. This allows comparison of the system sensitivity (the ability to detect infection in the area) of arbitrary random sampling with that of risk-targeted selection across a full range of sampling budgets. We show that it is possible to achieve 95% system sensitivity by sampling, on average, 945 farms with random sampling and 184 farms with risk-targeted sampling. We also examine the effect of ordering samples by risk to expedite the return to a disease-free status. Risk-ordering the sampling process results in detection of positive farms, if present, 15.6 days sooner than with randomly ordered sampling, assuming 50 farms are tested per day.
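A toy version of the budget comparison, with hypothetical risk values and a deliberately simplified definition of system sensitivity (the published infection-risk model is not reproduced):

```python
import random

def farms_needed(risks, target=0.95, rng=None):
    """Number of farms that must be sampled before the cumulative risk
    covered reaches `target` of the total. 'System sensitivity' is
    taken here, as a simplifying assumption, to be the share of total
    infection risk covered by the sampled farms with a perfect test."""
    if rng is None:
        # Risk-targeted: visit farms in decreasing order of risk.
        order = sorted(range(len(risks)), key=lambda i: -risks[i])
    else:
        order = list(range(len(risks)))
        rng.shuffle(order)               # arbitrary random selection
    total, acc = sum(risks), 0.0
    for n, i in enumerate(order, start=1):
        acc += risks[i]
        if acc >= target * total:
            return n
    return len(risks)

rng = random.Random(7)
# Hypothetical skewed risk profile: few high-risk farms, many low-risk.
risks = [rng.betavariate(0.3, 6.0) for _ in range(2000)]
n_targeted = farms_needed(risks)
n_random = farms_needed(risks, rng=rng)
```

With a skewed risk distribution, the risk-targeted order reaches the sensitivity target with far fewer farms, which is the qualitative effect (945 vs 184 farms) reported above.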
Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra
Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.
Unsupervised Learning for Efficient Texture Estimation From Limited Discrete Orientation Data
Niezgoda, Stephen R.; Glover, Jared
2013-11-01
The estimation of orientation distribution functions (ODFs) from discrete orientation data, as produced by electron backscatter diffraction or crystal plasticity micromechanical simulations, is typically achieved via techniques such as the Williams-Imhof-Matthies-Vinel (WIMV) algorithm or generalized spherical harmonic expansions, which were originally developed for computing an ODF from pole figures measured by X-ray or neutron diffraction. These techniques rely on ad-hoc methods for choosing parameters, such as smoothing half-width and bandwidth, and for enforcing positivity constraints and appropriate normalization. In general, such approaches provide little or no information-theoretic guarantees as to their optimality in describing the given dataset. In the current study, an unsupervised learning algorithm is proposed which uses a finite mixture of Bingham distributions for the estimation of ODFs from discrete orientation data. The Bingham distribution is an antipodally-symmetric, max-entropy distribution on the unit quaternion hypersphere. The proposed algorithm also introduces a minimum message length criterion, a common tool in information theory for balancing data likelihood with model complexity, to determine the number of components in the Bingham mixture. This criterion leads to ODFs which are less likely to overfit (or underfit) the data, eliminating the need for a priori parameter choices.
Physical indicators as a basis for estimating energy efficiency developments in the Dutch industry
Energy Technology Data Exchange (ETDEWEB)
Neelis, M.; Ramirez, A.; Patel, M.
2004-08-15
This study aims to develop an approach for calculating energy efficiency developments in the industrial sector in the Netherlands using activity indicators based on physical production. The approach should fit into the calculation routine of the Protocol Monitoring Energy Savings (PME) and should be based on data available from Statistics Netherlands or from the open literature. More specifically, the scope of the study is: to develop a spreadsheet tool for the calculation of energy efficiency developments in the Dutch industrial sector; to apply this tool for the period 1993-2001 with 1995 as the base year of analysis; to compare the tool with the method applied until now for the industrial sector in the PME, which was mainly based on the Long Term Agreements (LTAs); and to assess the uncertainties involved. The methodology of the PME and of this study is discussed in detail in chapter 2, followed by chapter 3 on data availability and the use of the spreadsheet tool in future studies. The analyses and results for the sectors distinguished in chapter 2 are discussed in chapters 4-10, followed by a chapter giving the main conclusions of this study and recommendations.
Energy Technology Data Exchange (ETDEWEB)
Roes, L.; Neelis, M.; Ramirez, A.
2007-04-15
In 2004, a method was developed for calculating energy efficiency developments in the Dutch manufacturing industry using physical indicators of production. The method and its application to calculate energy efficiency developments in the Dutch manufacturing industry for the period 1993-2001 are described in an earlier report. The method is used as part of the yearly calculation of energy savings in the Netherlands according to the Protocol Monitoring Energy Savings, performed by the Platform Monitoring Energy Savings. At the request of this platform, the calculations carried out in 2004 were updated in 2005 for the years 2002 and 2003 and published in the 2005 update. In this report, an update is made for the years 2004 and 2005. The authors present the results of the extended calculations for the years 1995-2005. In Chapter 2 of this report, an overview is given of the data sources used in this study. In Chapter 3, changes compared to the 2005 analysis are discussed and the results are presented. It should be emphasised that this report does not contain background information on the method applied; for information on the method, see the 2004 report. Furthermore, the focus of this report is on presenting the results of the calculations; less attention is paid to analysing, explaining and interpreting the results.
Energy Technology Data Exchange (ETDEWEB)
Neelis, M.; Ramirez, A.; Patel, M.
2005-07-15
In 2004, a method was developed for calculating energy efficiency developments in the Dutch manufacturing industry using physical indicators of production. The method and its application to calculate energy efficiency developments in the Dutch manufacturing industry for the period 1993-2001 are described elsewhere. The method is used as part of the yearly calculation of energy savings in the Netherlands according to the Protocol Monitoring Energy Savings, performed by the Platform Monitoring Energy Savings. At the request of this platform, the calculations done in 2004 are updated and extended by two additional years (2002 and 2003), for which production statistics have in the meantime become available. In this report, we give the results of the extended calculations for the years 1993-2003. In Chapter 2 the additional data sources used are summarized and compared to the analysis done in 2004. The results are given in Chapter 3. It should be emphasised that the authors do not give any background on the method applied, for which they refer to the 2004 report. Furthermore, they focus in this report on presenting the results of the calculations and give only minor attention to analysing, explaining and interpreting the results.
Vanícek, Jirí; Miller, William H
2007-09-21
The quantum instanton approximation is used to compute kinetic isotope effects for intramolecular hydrogen transfer in cis-1,3-pentadiene. Due to the importance of skeleton motions, this system with 13 atoms is a simple prototype for hydrogen transfer in enzymatic reactions. The calculation is carried out using thermodynamic integration with respect to the mass of the isotopes and a path integral Monte Carlo evaluation of relevant thermodynamic quantities. Efficient "virial" estimators are derived for the logarithmic derivatives of the partition function and the delta-delta correlation functions. These estimators require significantly fewer Monte Carlo samples since their statistical error does not increase with the number of discrete time slices in the path integral. The calculation treats all 39 degrees of freedom quantum mechanically and uses an empirical valence bond potential based on a molecular mechanics force field.
Karwowski, Damian; Domański, Marek
2016-01-01
An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
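The probability-estimation idea can be illustrated apart from the arithmetic coder itself. The sketch below uses a per-context count-based estimator, the Krichevsky-Trofimov rule, which is a standard building block of context-tree weighting but not the authors' exact mechanism, and reports the ideal code length an arithmetic coder driven by it would achieve:

```python
import random
from math import log2

def kt_code_length(bits, order=1):
    """Ideal code length (in bits) for a binary stream using a
    Krichevsky-Trofimov estimator per context, where the context of a
    bit is the previous `order` bits, as in context-based coding."""
    counts = {}
    total = 0.0
    ctx = (0,) * order
    for b in bits:
        c0, c1 = counts.get(ctx, (0, 0))
        p1 = (c1 + 0.5) / (c0 + c1 + 1.0)   # KT estimate of P(bit=1)
        p = p1 if b else 1.0 - p1
        total += -log2(p)                   # ideal arithmetic-coding cost
        counts[ctx] = (c0 + (b == 0), c1 + (b == 1))
        ctx = ctx[1:] + (b,)
    return total

# A strongly predictable stream: each bit repeats the last one 90% of
# the time, so an order-1 context model should get well under 1 bit/bit.
rng = random.Random(3)
bits = [0]
for _ in range(4999):
    bits.append(bits[-1] if rng.random() < 0.9 else 1 - bits[-1])
cost = kt_code_length(bits)
```

The better these per-context estimates track the true conditional probabilities, the shorter the code, which is the lever the improved CABAC pulls for its 0.7% to 4.5% bitrate saving.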
Efficient estimation algorithms for a satellite-aided search and rescue mission
Argentiero, P.; Garza-Robles, R.
1977-01-01
It has been suggested that a search-and-rescue orbiting satellite system be established as a means for locating distress signals from downed aircraft, small boats, and overland expeditions. Emissions from Emergency Locator Transmitters (ELTs), now available in most U.S. aircraft, are to be utilized in the positioning procedure. A description is presented of a set of Doppler navigation algorithms for extracting ELT position coordinates from Doppler data. The algorithms have been programmed for a small computing machine, and the resulting system has successfully processed both real and simulated Doppler data. A software system for solving the Doppler navigation problem must include an orbit propagator, a first-guess algorithm, and an algorithm for estimating longitude and latitude from Doppler data. Each of these components is considered.
Institute of Scientific and Technical Information of China (English)
Akira OGAWA; Tetzuya IWANAMI; et al.
1997-01-01
In order to estimate the cut-size Xc and the mechanically balanced particles in the axial-flow cyclone with the slit-separation method, the tangential velocity distributions were calculated by the finite difference method. Comparing the calculated total collection efficiency with the experimental results, the calculated values were slightly higher than the experimental ones due to re-entrainment of the collected particles by turbulence. No effect of the slit in promoting the collection efficiency was observed.
Directory of Open Access Journals (Sweden)
Latyshev N.V.
2012-03-01
Full Text Available The purpose of this work was to experimentally verify the efficiency of a method for developing the special endurance of athletes using control-trainer devices. 24 athletes aged 16-17 years took part in the experiment. Reliable differences were found between the groups of athletes in the tests of special physical preparation (heat round hands and passage-way in feet), in the test of special endurance (on all test indices except the number of exercises performed in the first period), and during work on the control-trainer device (work on the trainer for 60 seconds and 3×120 seconds of work on the trainer).
An Approach to the Estimation of the Packing Efficiency by Considering Gas and Liquid Axial Mixings
Institute of Scientific and Technical Information of China (English)
唐忠利; 刘春江; 袁希钢; 余国琮
2004-01-01
To evaluate the influence of gas and liquid axial mixing on the separation efficiency of a packed column, an approximate mathematical solution for HETP (height equivalent to a theoretical plate) under continuous operation has been proposed based on the mixing-pool model. The mass transfer and hydrodynamic data of the structured packing Mellapak 350Y, obtained in a high-pressure tower, have been used to test the validity of the proposed model. Compared with the SRP model and the Gualito model, it is found that for the high-pressure distillation process the present mathematical prediction shows a mean relative error of about 10% against the experimental data; its accuracy is the same as that of the Gualito model and better than that of the SRP model.
Efficient architecture for global elimination algorithm for H.264 motion estimation
Indian Academy of Sciences (India)
P Muralidhar; C B Ramarao
2016-01-01
This paper presents a fast block-matching motion estimation algorithm and its architecture. The proposed architecture is based on the Global Elimination (GE) algorithm, which uses pixel averaging to reduce the complexity of the motion search while keeping performance close to that of full search. GE uses a preprocessing stage which can skip unnecessary Sum of Absolute Differences (SAD) calculations by comparing the minimum SAD with a sub-sampled SAD (SSAD). In the second stage, the SAD is computed at the roughly matched candidate positions. The GE algorithm uses fixed sub-block sizes and shapes to compute SSAD values in the preprocessing stage. The complexity of the GE algorithm is further reduced by adaptively changing the sub-block sizes depending on the macroblock features. In this paper an adaptive Global Elimination algorithm has been implemented which reduces the computational complexity of the motion estimation algorithm and thus results in low power dissipation. The proposed architecture achieves 60% fewer computations than the existing full-search architecture and 50% higher throughput than the existing fixed Global Elimination architecture.
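The two GE stages can be sketched on synthetic data (fixed 4×4 sub-blocks and illustrative frame values; the paper's adaptive sub-block selection is not reproduced): rank all candidate vectors by the cheap SSAD over sub-block means, then compute the exact SAD only for the best few survivors.

```python
def block_means(frame, bx, by, n, sub):
    """Means of the (n/sub)^2 sub-blocks of the n-by-n block at (bx, by)."""
    m = []
    for sy in range(0, n, sub):
        for sx in range(0, n, sub):
            m.append(sum(frame[by + sy + y][bx + sx + x]
                         for y in range(sub) for x in range(sub)) / (sub * sub))
    return m

def ge_search(cur, ref, bx, by, n, r, sub=4, keep=8):
    """Global-elimination search: stage 1 ranks all candidates by the
    sub-sampled SAD (SSAD) of sub-block means; stage 2 evaluates the
    exact SAD only for the `keep` best candidates."""
    target = block_means(cur, bx, by, n, sub)
    cands = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            m = block_means(ref, bx + dx, by + dy, n, sub)
            cands.append((sum(abs(a - b) for a, b in zip(target, m)), dx, dy))
    cands.sort()
    best, best_sad = (0, 0), float('inf')
    for _, dx, dy in cands[:keep]:           # stage 2: exact SAD
        s = sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                for y in range(n) for x in range(n))
        if s < best_sad:
            best_sad, best = s, (dx, dy)
    return best, best_sad

# Synthetic frames: the current frame is the reference shifted by (3, 2).
ref = [[7 * x + 13 * y for x in range(32)] for y in range(32)]
cur = [[7 * (x + 3) + 13 * (y + 2) for x in range(32)] for y in range(32)]
best, best_sad = ge_search(cur, ref, 8, 8, 8, 4)
```

Only `keep` full SAD evaluations are performed instead of (2r+1)² of them, which is where the computation savings quoted above come from.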
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or one of its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. The low-rank filters are demonstrated to reduce the computational effort of the KF to about 3%. © 2012 American Society of Civil Engineers.
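As a reference point for what the low-rank SEKF/SFKF approximate, here is a minimal sketch of the full KF measurement update for a scalar observation (generic textbook form, unrelated to the contaminant model's specifics; the low-rank filters replace the dense covariance P with a low-rank factorization):

```python
def kalman_update(x, P, z, H, R):
    """Standard Kalman filter measurement update for a scalar
    observation z = H . x + noise with noise variance R.
    x: state mean (list), P: state covariance (list of lists)."""
    n = len(x)
    # Innovation and its variance  S = H P H^T + R
    y = z - sum(H[i] * x[i] for i in range(n))
    PHt = [sum(P[i][j] * H[j] for j in range(n)) for i in range(n)]
    S = sum(H[i] * PHt[i] for i in range(n)) + R
    K = [PHt[i] / S for i in range(n)]              # Kalman gain
    x_new = [x[i] + K[i] * y for i in range(n)]
    P_new = [[P[i][j] - K[i] * PHt[j] for j in range(n)] for i in range(n)]
    return x_new, P_new

# Observe the first component of a 2-state system.
x, P = [0.0, 0.0], [[1.0, 0.2], [0.2, 1.0]]
x1, P1 = kalman_update(x, P, z=1.0, H=[1.0, 0.0], R=0.5)
```

The cost of the full update is dominated by manipulating the n-by-n matrix P, which is exactly what becomes prohibitive for large grids and what the low-rank variants avoid.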
Kang, Seungha; Denman, Stuart E; Morrison, Mark; Yu, Zhongtang; McSweeney, Chris S
2009-05-01
An extraction method was developed to recover high-quality RNA from rumen digesta and mouse feces for phylogenetic analysis of metabolically active members of the gut microbial community. Four extraction methods were tested on different amounts of the same samples and compared for efficiency of recovery and purity of RNA. Trizol extraction after bead beating produced a higher quantity and quality of RNA than a similar method using phenol/chloroform. Dissociation solution produced a 1.5- to 2-fold increase in RNA recovery compared with phosphate-buffered saline during the dissociation of microorganisms from rumen digesta or fecal particles. The identity of metabolically active bacteria in the samples was analyzed by sequencing 87 amplicons produced using bacteria-specific 16S rDNA primers, with cDNA synthesized from the extracted RNA as the template. Amplicons representing the major phyla encountered in the rumen (Firmicutes, 43.7%; Proteobacteria, 28.7%; Bacteroidetes, 25.3%; Spirochaetes, 1.1%; and Synergistes, 1.1%) were recovered, showing that this RNA extraction method enables RNA-based analysis of metabolically active bacterial groups from the rumen and other environments. Interestingly, in rumen samples, about 30% of the sequenced random 16S rRNA amplicons were related to the Proteobacteria, providing the first evidence that this group may have greater importance in rumen metabolism than previously attributed by DNA-based analysis.
Ly-alpha forest: efficient unbiased estimation of second-order properties with missing data
Vio, R; Stoyan, H; Stoyan, D
2007-01-01
Context. One important step in the statistical analysis of Ly-alpha forest data is the study of their second-order properties. Usually, this is accomplished by means of the two-point correlation function or, alternatively, the K-function. In the computation of these functions it is necessary to take into account the presence of strong metal line complexes and strong Ly-alpha lines, which can hide part of the Ly-alpha forest and represent a non-negligible source of bias. Aims. In this work, we show quantitatively what the effects of the gaps introduced into the spectrum by the strong lines are if they are not properly accounted for in the computation of the correlation properties. We propose a geometric method that is able to solve this problem and is computationally more efficient than the Monte Carlo (MC) technique typically adopted in cosmology studies. The method is implemented in two different algorithms. The first yields exact results, whereas the second provides approximate...
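The geometric idea of correcting pair counts for masked gaps can be illustrated with a toy 1-D estimator: normalize the observed pair counts at each separation by the mask's self-overlap at that lag, i.e. by how much room pairs of that separation had to occur. This is only a discretized sketch under the assumption of a regular grid mask; the paper's estimators are more refined.

```python
import numpy as np

def gap_corrected_pair_counts(pos, mask, bins):
    """Pair counts normalized by the mask's self-overlap at each separation.
    `mask` is a boolean array over a unit-spaced grid of the observed
    interval; its autocorrelation gives the measure of positions a pair
    of a given separation could have occupied (the geometric correction).
    `bins` are separation bin edges in grid units."""
    d = np.abs(pos[:, None] - pos[None, :])
    d = d[np.triu_indices(len(pos), k=1)]
    counts, _ = np.histogram(d, bins=bins)
    m = mask.astype(float)
    # available measure at each integer lag = mask autocorrelation
    avail = np.array([(m[:len(m) - lag] * m[lag:]).sum()
                      for lag in range(len(m))])
    # average availability within each separation bin
    corr = np.array([avail[int(lo):int(hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])
    return counts / np.maximum(corr, 1e-12)
```

With no gaps the correction reduces to the usual boundary (edge) correction; cutting holes in the mask lowers `avail` at the affected lags and raises the normalized counts accordingly.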
Efficient synthesis of tension modulation in strings and membranes based on energy estimation.
Avanzini, Federico; Marogna, Riccardo; Bank, Balázs
2012-01-01
String and membrane vibrations cannot be considered linear above a certain amplitude due to the variation in string or membrane tension. A relevant special case is when the tension is spatially constant and varies in time only in dependence on the overall string length or membrane surface. The most apparent perceptual effect of this tension modulation phenomenon is the exponential decay of pitch in time. Pitch glides due to tension modulation are an important timbral characteristic of several musical instruments, including the electric guitar and tom-tom drum, and many ethnic instruments. This paper presents a unified formulation of the tension modulation problem for the one-dimensional (1-D, string) and two-dimensional (2-D, membrane) cases. In addition, it shows that the short-time average of the tension variation, which is responsible for pitch glides, is approximately proportional to the system energy. This proportionality allows the efficient physics-based sound synthesis of pitch glides. The proposed models require only slightly more computational resources than linear models, as opposed to earlier tension-modulated models of higher complexity.
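The energy-based approximation lends itself to a very cheap pitch-trajectory sketch: if the short-time tension offset is proportional to the system energy and the energy decays roughly exponentially, the fundamental follows f(t) = f0·sqrt(1 + α·E(t)), since pitch scales with the square root of tension. The constants below (α, E0, τ) are illustrative placeholders, not values from the paper.

```python
import numpy as np

def pitch_glide(f0, alpha, E0, tau, t):
    """Pitch trajectory under the energy approximation of tension
    modulation: tension offset ~ alpha * E(t), pitch ~ sqrt(tension),
    with a simple exponential energy decay E(t) = E0 * exp(-t / tau)."""
    energy = E0 * np.exp(-t / tau)
    return f0 * np.sqrt(1.0 + alpha * energy)
```

The trajectory starts sharp by a factor sqrt(1 + α·E0) and relaxes exponentially toward the nominal pitch f0, which is exactly the perceptual pitch-glide behaviour described above.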
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in parameter estimation: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual- and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small (typically fewer than 10), meaningful and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov Chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
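The surrogate-inside-MCMC pattern described above can be sketched with a deliberately tiny stand-in for the expensive model. Here a 1-D quadratic plays the role of the P2D PDE solve, a polynomial metamodel is fitted to a handful of evaluations, and a Metropolis sampler then queries only the cheap surrogate. Parameter bounds, noise level and the toy model are all illustrative assumptions.

```python
import numpy as np

def fit_surrogate(expensive_model, thetas, deg=3):
    """Polynomial metamodel of the expensive model over sampled thetas."""
    ys = np.array([expensive_model(t) for t in thetas])
    return np.poly1d(np.polyfit(thetas, ys, deg))

def metropolis(log_post, theta0, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler; log_post may return -inf
    outside the prior support."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```

Each MCMC step costs a polynomial evaluation instead of a PDE solve, which is the source of the hours-to-seconds speedup reported above (the real method must also verify the metamodel's accuracy over the sampled region).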
Qiu, Bingwen; Feng, Min; Tang, Zhenghong
2016-05-01
This study proposed a simple smoother based on the Continuous Wavelet Transform (SCWT) that requires no local adjustments, and evaluated its performance in phenological estimation together with other commonly applied techniques. These noise-reduction methods included the Savitzky-Golay filter (SG), the Double Logistic function (DL), the Asymmetric Gaussian function (AG), the Whittaker Smoother (WS) and Harmonic Analysis of Time-Series (HANTS). They were evaluated based on fidelity and smoothness, and on their efficiency in deriving phenological parameters through the inflexion point-based method, using the 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) over China in 2013. The following conclusions were drawn: (1) The SG method exhibited strong fidelity, but weak smoothness and spatial continuity. (2) The HANTS method had very robust smoothness but weak fidelity. (3) The AG and DL methods performed weakly for vegetation with more than one growth cycle (i.e., multiple crops). (4) The WS and SCWT smoothers outperformed the others when fidelity and smoothness were considered together, and produced consistent phenological patterns (correlation coefficients greater than 0.8, except for evergreen broadleaf forests (0.68)). (5) Compared with the WS method, the SCWT smoother was capable of preserving real local minima and maxima with fewer inflexions. (6) Large discrepancies were found in the phenological dates estimated with the SG and HANTS methods, particularly in evergreen forests and multiple-cropping regions (absolute mean deviation rates of 6.2-17.5 days and correlation coefficients less than 0.34 for estimated start dates).
Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente
2003-11-01
Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected by shrinkage or expansion of the tissue. The optical fractionator technique is very efficient but requires relatively thick sections (e.g., ≥20 µm after coverslipping) and the unequivocal identification of labeled cells throughout the section thickness. We have adapted our protocol for Mac-1 immunohistochemical visualization of microglial cells in thick (70 µm) vibratome sections for stereological counting within the murine hippocampus, and we have compared the staining results with other selective microglial markers: the histochemical demonstration of nucleotide diphosphatase (NDPase) activity and tomato lectin histochemistry. The protocol gives sections of high quality with a final mean section thickness of >20 µm (h = 22.3 ± 0.64 µm), and with excellent rendition of Mac-1+ microglia through the entire height of the section. The NDPase staining gives an excellent visualization of microglia, although at this thickness the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV) = 0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics of the microglial cell population in the developing and in the injured, diseased and normal adult CNS.
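The optical fractionator estimate itself is a one-line calculation: the raw count of cells hit by the optical disectors is scaled up by the inverses of the three sampling fractions. The sampling fractions in the usage example are hypothetical, not the ones used in the study above.

```python
def optical_fractionator(q_counted, ssf, asf, hsf):
    """Optical fractionator estimate of total cell number:
    N = sum of counted cells (Q-) divided by the product of the
    section sampling fraction (ssf), area sampling fraction (asf)
    and height sampling fraction (hsf)."""
    return q_counted / (ssf * asf * hsf)
```

For example, counting 123 cells with every 6th section sampled (ssf = 1/6), 5% of each section's area (asf = 0.05) and half the section height (hsf = 0.5) yields an estimate of 29,520 cells; because the estimate is built from fractions of the sectioned tissue, it is unaffected by uniform tissue shrinkage, as the abstract notes.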
Directory of Open Access Journals (Sweden)
Earp Madalene A
2011-11-01
Background Until recently, genome-wide association studies (GWAS) have been restricted to research groups with the budget necessary to genotype hundreds, if not thousands, of samples. Replacing individual genotyping with genotyping of DNA pools in Phase I of a GWAS has proven successful, and dramatically altered the financial feasibility of this approach. When conducting a pool-based GWAS, how well SNP allele frequency is estimated from a DNA pool will influence a study's power to detect associations. Here we address how to control the variance in allele frequency estimation when DNAs are pooled, and how to plan and conduct the most efficient well-powered pool-based GWAS. Methods By examining the variation in allele frequency estimation on SNP arrays between and within DNA pools, we determine how array variance [var(e_array)] and pool-construction variance [var(e_construction)] contribute to the total variance of allele frequency estimation. This information is useful in deciding whether replicate arrays or replicate pools are most useful in reducing variance. Our analysis is based on 27 DNA pools ranging in size from 74 to 446 individual samples, genotyped on a collective total of 128 Illumina beadarrays: 24 1M-Single, 32 1M-Duo, and 72 660-Quad. Results For all three Illumina SNP array types our estimates of var(e_array) were similar, between 3 and 4 × 10^-4 for normalized data. Var(e_construction) accounted for between 20% and 40% of pooling variance across 27 pools in normalized data. Conclusions We conclude that relative to var(e_array), var(e_construction) is of less importance in reducing the variance in allele frequency estimation from DNA pools; however, our data suggest that on average it may be more important than previously thought. We have prepared a simple online tool, PoolingPlanner (available at http://www.kchew.ca/PoolingPlanner/), which calculates the effective sample size (ESS) of a DNA pool given a range of replicate array values. ESS can
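The effective-sample-size idea can be sketched with the standard variance-matching argument: a pool of n individuals measured without error has binomial allele-frequency variance p(1−p)/(2n); pooling adds var_e = var(e_construction) + var(e_array)/k for k replicate arrays, and the ESS is the n′ whose pure binomial variance equals the inflated total. This is a generic sketch; PoolingPlanner's exact formula may differ in detail.

```python
def pool_ess(n, p, var_array, var_construction, k_arrays=1):
    """Effective sample size of a DNA pool of n individuals.
    Pooling error var_e = var_construction + var_array / k_arrays is
    added to the binomial sampling variance p(1-p)/(2n); ESS is the
    pool size whose error-free binomial variance matches the total."""
    var_e = var_construction + var_array / k_arrays
    var_binom = p * (1 - p) / (2 * n)
    return p * (1 - p) / (2 * (var_binom + var_e))
```

With var_e = 0 the ESS equals the pool size; adding replicate arrays shrinks the array term and pushes the ESS back toward n, which is exactly the replicate-array trade-off the paper quantifies.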
El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham
2014-07-01
Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements that use indigenous materials and skills. Gravel Bed Hydroponics (GBH), a constructed wetland system, has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of ciliate species (23 species), and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided an excellent means of estimating the efficiency of the system for sewage purification. Results affirmed the ability of this system to produce high-quality effluent with sufficient microbial reduction to enable the production of irrigation-quality water.
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
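A single-pass threshold-and-count estimator of the kind discussed above is easy to sketch for the common case of exponentially distributed power samples (the magnitude-squared of complex Gaussian noise): the fraction of samples below a threshold T satisfies P(x < T) = 1 − exp(−T/s), which inverts directly for the noise power s. Threshold choice and sample model are illustrative assumptions, not the HRMS design values.

```python
import numpy as np

def noise_power_from_count(samples, threshold):
    """Single-pass threshold-and-count noise power estimate.
    For exponentially distributed power samples with mean s (the noise
    power), P(x < T) = 1 - exp(-T / s), so s = -T / ln(1 - frac_below)."""
    frac = np.count_nonzero(samples < threshold) / len(samples)
    if frac <= 0.0 or frac >= 1.0:
        raise ValueError("threshold lies outside the sample range")
    return -threshold / np.log1p(-frac)
```

Running several such counters in parallel at staggered thresholds widens the usable dynamic range, mirroring the parallel threshold-and-count banks described in the abstract.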
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI), in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme. © 2014 Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
System identification, in practice, is carried out by perturbing processes or plants under operation. That is why, in many industrial applications, a plant-friendly input signal is preferred for system identification. The goal of this study is to design the optimal input signal that is then employed in the identification experiment, and to examine the relationship between the friendliness index of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximization of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. Since under such experimental conditions one can only speak of D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to obtain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
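The D-efficiency measure mentioned above is easiest to see for a linear-in-parameters model, where the Fisher information matrix is simply XᵀX for design matrix X, and D-efficiency is (det M / det M_opt)^(1/p) with p parameters. The sketch below uses the textbook straight-line case (for which the D-optimal design on [−1, 1] puts all points at the endpoints); the paper's dynamic-model setting is more involved.

```python
import numpy as np

def fisher_information(X):
    """FIM for a linear-in-parameters model y = X @ b + noise
    with unit noise variance: M = X^T X."""
    return X.T @ X

def d_efficiency(X, X_opt):
    """D-efficiency of design X relative to the D-optimal design X_opt:
    (det M / det M_opt)^(1/p), where p is the number of parameters."""
    p = X.shape[1]
    return (np.linalg.det(fisher_information(X)) /
            np.linalg.det(fisher_information(X_opt))) ** (1.0 / p)
```

A D-efficiency of, say, 0.75 means the friendlier design needs roughly 1/0.75 times as many samples to match the information content of the D-optimal experiment, which is exactly the trade-off the constraint in the paper controls.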
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is distinct from estimating that of a column being designed.
Sinclair, Michael; Dufour, Pascal; Drew, Kristine; Myrskog, Stefan; Morgan, John Paul
2014-10-01
An electroluminescence test for a concentrated PV system is presented, with the objective of capturing high-resolution pseudo-efficiency maps that highlight optical defects in the concentrator system. Key parameters of the experimental setup and imaging system are presented. Image processing is discussed, including comparison of experimental to nominal results and the quantitative estimation of optical efficiency. Efficiency estimates are validated using measurements under a collimated solar simulator and ray-tracing software. Further validation is performed by comparing the electroluminescence technique to direct mapping of the optical efficiency. Initial results indicate a mean estimation error for Isc of -2.4%, with a standard deviation of 6.9%, and a combined measurement and analysis time of less than 5 seconds per optic. An extension of this approach to in-line quality control is discussed.
Directory of Open Access Journals (Sweden)
Mohammad Manir Hossain Mollah
Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, overcoming problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥0) to outlying expressions and larger weights (≤1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large
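For the Gaussian case, the β-weight function described above takes a simple closed form: w(x) = exp(−(β/2)·(x − μ)²/σ²), which is 1 at the mean and falls smoothly toward 0 for outlying values. The sketch below assumes this Gaussian form with known location and scale; the paper estimates these via the minimum β-divergence method.

```python
import numpy as np

def beta_weights(x, mu, sigma2, beta=0.2):
    """Beta-weight function used in minimum beta-divergence estimation
    (Gaussian case): w(x) = exp(-(beta / 2) * (x - mu)**2 / sigma2).
    Typical observations get weights near 1, outliers weights near 0."""
    return np.exp(-0.5 * beta * (x - mu) ** 2 / sigma2)
```

Down-weighting by w(x) instead of hard-rejecting observations is what lets the hybrid ANOVA keep near-full efficiency on clean data while remaining robust when outlying expressions are present.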
Directory of Open Access Journals (Sweden)
Bazhenov Viktor Ivanovich
2015-09-01
The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economical methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluation with Present Value (PV) determination. This allows an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). For the economic analysis, the air-blower station of a WWTP was selected as a significant energy consumer. The technical variants compared are three blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single-stage with double-vane control. The LCC estimation shows the last variant to be the most attractive and cost-effective for investment, with savings of 17.2% (versus variant 1) and 21.0% (versus variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) together with the related annual expenditure (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which is the reason for the adopted energy consumption figures, in kW·h: 2158 (variant 1), 1743-2201 (variant 2) and 1058-1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. An example of evaluations connected with energy-efficient solutions is given, but the approach also covers cases with resource savings, such as all types of fuel. In conclusion, the use of the LCC indicator jointly with the method of discounted cash flows is endorsed, as it satisfies the investor's need for returns in technical and economic comparisons.
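The LCC-with-present-value comparison reduces to discounting a uniform stream of annual costs and adding the initial capital cost. The sketch below uses the standard uniform-series PV factor with the 25-year horizon and 5% real discount rate quoted above; the cost figures in the usage example are hypothetical, not the article's.

```python
def life_cycle_cost(capital, annual, years=25, discount=0.05):
    """Life Cycle Cost as a present value: initial capital (Cic + Cin)
    plus the discounted stream of annual costs (Ce + Co + Cm).
    Uniform-series PV factor: (1 - (1 + r)**-n) / r."""
    pv_factor = (1 - (1 + discount) ** -years) / discount
    return capital + annual * pv_factor
```

For example, a hypothetical variant costing 150,000 up front with 22,000/year in energy beats a 100,000 variant burning 30,000/year over 25 years at 5%, which is the kind of reversal relative to a capital-cost-only comparison that motivates LCC analysis.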
Scartazza, Andrea; Vaccari, Francesco Primo; Bertolini, Teresa; Di Tommasi, Paul; Lauteri, Marco; Miglietta, Franco; Brugnoli, Enrico
2014-10-01
Water-use efficiency (WUE), thought to be a relevant trait for productivity and adaptation to water-limited environments, was estimated for three different ecosystems on the Mediterranean island of Pianosa: Mediterranean macchia (SMM), transition (STR) and abandoned agricultural (SAA) ecosystems, representing a successional series. Three independent approaches were used to study WUE: eddy covariance measurements, the C isotope composition of ecosystem-respired CO2, and the C isotope discrimination (Δ) of leaf material (dry matter and soluble sugars). Seasonal variations in C-water relations and energy fluxes, compared in SMM and SAA, were primarily dependent on the specific composition of each plant community. The WUE of gross primary productivity was higher in SMM than in SAA at the beginning of the dry season. Both structural and fast-turnover leaf material were, on average, more enriched in 13C in SMM than in SAA, indicating relatively higher stomatal control and WUE for the long-lived macchia species. This pattern corresponded to 13C-enriched respired CO2 in SMM compared to the other ecosystems. Conversely, most of the annual herbaceous SAA species (therophytes) showed a drought-escaping strategy, with relatively high stomatal conductance and low WUE. An ecosystem-integrated Δ value was weighted for each ecosystem by the abundance of different life forms, classified according to Raunkiaer's system. Agreement was found between ecosystem WUE calculated using eddy covariance and that estimated using integrated Δ approaches. Comparing the isotopic methods, the Δ of leaf soluble sugars provided the most reliable proxy for short-term changes in photosynthetic discrimination and associated shifts in integrated canopy-level WUE along the successional series.
Zhang, Dong; Zhang, Xiaolei; Yuan, Jianzheng; Ke, Rui; Yang, Yan; Hu, Ying
2016-01-01
Laplace-Fourier domain full waveform inversion can simultaneously restore both the long-wavelength and the intermediate-to-short-wavelength information of velocity models because of its unique use of complex frequencies. This approach solves the problem of conventional frequency-domain waveform inversion, in which the inversion result depends excessively on the initial model due to the lack of low-frequency information in seismic data. Nevertheless, Laplace-Fourier domain waveform inversion requires substantial computational resources and long computation times, because the inversion must be implemented for different combinations of multiple damping constants and multiple frequencies, namely the complex frequencies, which are much more numerous than the Fourier frequencies. However, if the entire target model is computed at every complex frequency (as in conventional frequency-domain inversion), excessively redundant computation occurs. In Laplace-Fourier domain waveform inversion, the maximum depth penetrated by the seismic wave decreases greatly due to the application of exponential damping to the seismic record, especially with larger damping constants. Thus, the depth of the area effectively inverted at a complex frequency tends to be much less than the model depth. In this paper, we propose a method for quantitative estimation of the effective inversion depth in Laplace-Fourier domain inversion, based on the principles of seismic wave propagation and mathematical analysis. According to the estimated effective inversion depth, we can invert and update only the model area above the effective depth for every complex frequency without loss of accuracy in the final inversion result. Thus, redundant computation is eliminated, and the efficiency of Laplace-Fourier domain waveform inversion is improved. The proposed method was tested in numerical experiments. The experimental results show that
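A crude back-of-the-envelope version of the effective-depth argument: a record damped by exp(−σt) loses the reflection from depth z once the two-way travel time t = 2z/v pushes the attenuation below the usable dynamic range. This sketch only illustrates the scaling (larger σ, shallower effective depth); the paper derives its estimate more carefully, and the velocity, damping constant and dynamic range below are placeholder values.

```python
import math

def effective_inversion_depth(v, sigma, dyn_range_db=100.0):
    """Rough effective-depth estimate for a Laplace-damped record:
    energy arriving after two-way time t = 2 z / v is attenuated by
    exp(-sigma * t); beyond the depth where that attenuation exceeds
    the usable dynamic range (in dB), data no longer constrain the
    inversion.  20 * log10(exp(sigma * t)) = dyn_range_db gives t_max."""
    max_t = dyn_range_db * math.log(10.0) / (20.0 * sigma)
    return v * max_t / 2.0
```

Restricting the model update at each complex frequency to depths above this bound is what removes the redundant computation described in the abstract.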
Directory of Open Access Journals (Sweden)
Y. Tramblay
2011-01-01
A good knowledge of rainfall is essential for hydrological operational purposes such as flood forecasting. The objective of this paper was to analyze, on a relatively large sample of flood events, how sensitive rainfall-runoff modeling with an event-based model is to the use of spatial rainfall compared to mean areal rainfall over the watershed. This comparison was based not only on the model's efficiency in reproducing the flood events but also on the initial conditions estimated by the model using the different rainfall inputs. The initial soil moisture conditions are indeed a key factor for flood modeling in the Mediterranean region. In order to provide a soil moisture index that could be related to the initial condition of the model, the soil moisture output of the Safran-Isba-Modcou (SIM) model developed by Météo-France was used. This study was carried out in the Gardon catchment (545 km²) in southern France, using uniform or spatial rainfall data derived from rain gauges and radar for 16 flood events. The event-based model considered combines the SCS runoff production model and the Lag and Route routing model. Results show that spatial rainfall increases the efficiency of the model. The advantage of using spatial rainfall is marked for some of the largest flood events. In addition, the relationship between the model's initial condition and the external soil moisture predictor provided by the SIM model is better when using spatial rainfall, in particular spatial radar data, with R² values increasing from 0.61 to 0.72.
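The SCS runoff production component mentioned above follows the standard Curve Number relation, which is compact enough to state directly. The sketch below uses the textbook metric-units form with the usual initial abstraction Ia = 0.2 S; the paper's event-based implementation parameterizes S from the estimated initial soil moisture rather than from a fixed CN.

```python
def scs_runoff(P, CN):
    """SCS Curve Number direct runoff depth (mm) for storm depth P (mm):
    S  = 25400 / CN - 254         (potential maximum retention, mm)
    Ia = 0.2 * S                  (initial abstraction)
    Q  = (P - Ia)**2 / (P - Ia + S)   for P > Ia, else 0."""
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

At CN = 100 the retention S vanishes and all rainfall runs off, while lower curve numbers absorb an initial abstraction before producing any runoff, which is why the estimated initial condition has so much leverage on the simulated flood peaks.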
Syrejshchikova, T. I.; Gryzunov, Yu. A.; Smolina, N. V.; Komar, A. A.; Uzbekov, M. G.; Misionzhnik, E. J.; Maksimova, N. M.
2010-05-01
The efficiency of therapy for psychiatric diseases is estimated using fluorescence measurements of the conformational changes of human serum albumin in the course of medical treatment. The fluorescence decay curves of the CAPIDAN probe (N-carboxyphenylimide of dimethylaminonaphthalic acid) in blood serum are measured. The probe binds specifically to the albumin drug-binding sites and exhibits fluorescence as a reporter ligand. A variation in the conformation of the albumin molecule substantially affects the CAPIDAN fluorescence decay curve on the subnanosecond time scale. A subnanosecond pulsed laser or a PicoQuant LED excitation source and a fast photon detector with a time resolution of about 50 ps are used for the kinetic measurements. The blood sera of ten patients suffering from depression and treated at the Institute of Psychiatry were first assessed clinically. Blood for analysis was taken from each patient prior to treatment and in the third week of treatment. For the ten patients, analysis of the fluorescence decay curves of the probe in blood serum using three-exponential fitting showed that the difference between the amplitudes of the decay function corresponding to the long-lived (9 ns) fluorescence of the probe before and after the therapeutic procedure differed reliably from zero at a significance level of 1% (p < 0.01).
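When the component lifetimes are held fixed, extracting the amplitudes of a multi-exponential decay like the three-exponential fit above is a linear least-squares problem, since y(t) = Σᵢ aᵢ·exp(−t/τᵢ) is linear in the aᵢ. The sketch below assumes known lifetimes and no instrument response convolution, both simplifications relative to a real time-correlated photon-counting fit.

```python
import numpy as np

def fit_decay_amplitudes(t, y, taus):
    """Linear least-squares amplitudes for a multi-exponential decay
    with fixed lifetimes taus: y(t) ~ sum_i a_i * exp(-t / tau_i)."""
    B = np.exp(-t[:, None] / np.asarray(taus)[None, :])  # design matrix
    a, *_ = np.linalg.lstsq(B, y, rcond=None)
    return a
```

The quantity compared between pre- and post-treatment sera in the study corresponds to the amplitude of the slowest (9 ns) component recovered by such a fit.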
Volosencu, Constantin; Curiac, Daniel-Ioan
2013-12-01
This paper gives a technical solution to improve the efficiency of multi-sensor wireless-network-based estimation for distributed parameter systems. A complex structure based on estimation algorithms with regression and autoregression, implemented using linear estimators, neural estimators and ANFIS estimators, is developed for this purpose. The three kinds of estimators work with different precision on different parts of the phenomenon's characteristic. A comparative study of three methods for implementing these algorithms - linear, and nonlinear based on neural networks and an adaptive neuro-fuzzy inference system - is made. Intelligent wireless sensor networks are taken into consideration as an efficient tool for measurement, data acquisition and communication. They are seen as a "distributed sensor" placed at the desired positions in the measuring field. The algorithms are based on regression using values from adjacent sensors and also on autoregression using past values from the same sensor. A modelling and simulation for a case study is presented. The quality of estimation is validated using a quadratic criterion. A practical implementation is made using virtual instrumentation. Applications of this complex estimation system include fault detection and diagnosis of distributed parameter systems and the discovery of malicious nodes in wireless sensor networks.
Marshall, M.; Tu, K. P.
2015-12-01
Large-area crop yield models (LACMs) are commonly employed to address climate-driven changes in crop yield and inform policy makers concerned with climate change adaptation. Production efficiency models (PEMs), a class of LACMs that rely on the conservative response of carbon assimilation to incoming solar radiation absorbed by a crop contingent on environmental conditions, have increasingly been used over large areas with remote sensing spectral information to improve the spatial resolution of crop yield estimates and address important data gaps. Here, we present a new PEM that combines model principles from the remote sensing-based crop yield and evapotranspiration (ET) model literature. One of the major limitations of PEMs is that they are evaluated using data restricted in both space and time. To overcome this obstacle, we first validated the model using 2009-2014 eddy covariance flux tower Gross Primary Production data in a rice field in the Central Valley of California, a critical agro-ecosystem of the United States. This evaluation yielded a Willmott's D and mean absolute error of 0.81 and 5.24 g CO2/d, respectively, using CO2, leaf area, temperature, and moisture constraints from the MOD16 ET model, Priestley-Taylor ET model, and the Global Production Efficiency Model (GLOPEM). A Monte Carlo simulation revealed that the model was most sensitive to the Enhanced Vegetation Index (EVI) input, followed by Photosynthetically Active Radiation, vapor pressure deficit, and air temperature. The model will now be evaluated using 30 × 30 m (Landsat-resolution) biomass transects developed in 2011 and 2012 from spectroradiometric and other non-destructive in situ metrics for several cotton, maize, and rice fields across the Central Valley. Finally, the model will be driven by Daymet and MODIS data over the entire State of California and compared with county-level crop yield statistics. It is anticipated that the new model will facilitate agro-climatic decision-making in
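The two evaluation statistics quoted above are standard and easy to state precisely; a minimal sketch of both:

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott's index of agreement (0..1, with 1 = perfect agreement)."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    om = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2)
    return 1.0 - num / den

def mae(obs, pred):
    """Mean absolute error, in the units of the observations."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(obs, float))))
```

Willmott's D is bounded and dimensionless, which is why it is often reported alongside MAE (which carries the units, here g CO2/d) when validating flux-tower comparisons.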
Institute of Scientific and Technical Information of China (English)
KUK Anthony
2009-01-01
The survival analysis literature has always lagged behind the categorical data literature in developing methods to analyze clustered or multivariate data. While estimators based on working correlation matrices, optimal weighting, composite likelihood and various variants have been proposed in the categorical data literature, the working independence estimator is still very much the prevalent estimator in multivariate survival data analysis.
Sinchuk, O. N.; Sinchuk, I. O.; Jakimets, S. N.; Kljuchka, A. S.
2010-01-01
An analysis of the efficiency of different types of electric braking in the traction electric drives of mine electric locomotives is presented. The efficiency of electrodynamic braking is demonstrated, and the corresponding circuit-design solutions are given.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
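The NPV screening described above reduces to a standard discounted cashflow calculation; the discount rate and cashflows below are purely illustrative, not BUENAS outputs:

```python
def npv(rate, cashflows):
    """Net present value of a cashflow stream (year-0 flow first),
    the metric used to judge whether an efficiency target level
    saves consumers money overall."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative appliance standard: higher purchase cost now ($50 extra),
# followed by four years of $20 energy-bill savings, discounted at 5%.
example = npv(0.05, [-50.0, 20.0, 20.0, 20.0, 20.0])
```

A positive NPV at the consumer's discount rate is the cost-effectiveness criterion behind the CEP scenario: the efficiency bar can be raised as long as this quantity stays above zero.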
Yin, X.; Belay, D.; Putten, van der P.E.L.; Struik, P.C.
2014-01-01
Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (UCO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation
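The slope-based estimation described above is a simple linear regression over the low-irradiance range; the response values below are a hypothetical noise-free example, with slope playing the role of the quantum yield and the negative intercept the dark respiration:

```python
import numpy as np

# Absorbed irradiance (umol photons m-2 s-1) restricted to low light,
# and a hypothetical linear net-photosynthesis response A = phi*I - Rd.
I = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
A = 0.06 * I - 1.0

# The fitted slope estimates the maximum quantum yield; the intercept
# estimates minus the dark respiration rate.
slope, intercept = np.polyfit(I, A, 1)
```

The methodological errors the abstract alludes to enter through the choice of the "low-irradiance" window and measurement noise, neither of which this noise-free sketch exhibits.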
Directory of Open Access Journals (Sweden)
Larisa Vajenina
2013-10-01
Full Text Available The author applies the analytic hierarchy process to the assessment of energy efficiency at gas trunk pipeline transport enterprises. The method allows one to investigate the energy consumption of equipment and the consumption of resources in technological operations and in the creation of favorable conditions, and to assess the state of accounting systems and work organization in order to improve the efficiency of energy supply use.
Stange, P.; Bach, L. T.; Le Moigne, F. A. C.; Taucher, J.; Boxhammer, T.; Riebesell, U.
2017-01-01
The ocean's potential to export carbon to depth partly depends on the fraction of primary production (PP) sinking out of the euphotic zone (i.e., the e-ratio). Measurements of PP and export flux are often performed simultaneously in the field, although there is a temporal delay between these parameters. Resulting e-ratio estimates therefore often incorrectly assume that PP is exported downward instantaneously. Evaluating results from four mesocosm studies, we find that peaks in organic matter sedimentation lag chlorophyll a peaks by 2 to 15 days. We discuss the implications of these time lags (TLs) for current e-ratio estimates and evaluate potential controls of TL. Our analysis reveals a strong correlation between TL and the duration of chlorophyll a buildup, indicating a dependency of TL on plankton food web dynamics. This study is one step further toward time-corrected e-ratio estimates.
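One common way to quantify such a lag between two time series is the argmax of their cross-correlation; the Gaussian "bloom" shapes and the 5-day offset below are hypothetical, chosen only to fall inside the 2-15 day range reported above:

```python
import numpy as np

def peak_lag(chl, sed):
    """Lag (in sampling steps, here days) at which sedimentation best
    follows chlorophyll, from the argmax of the full cross-correlation."""
    chl = np.asarray(chl, float) - np.mean(chl)
    sed = np.asarray(sed, float) - np.mean(sed)
    cc = np.correlate(sed, chl, mode="full")
    lags = np.arange(-len(chl) + 1, len(sed))
    return int(lags[np.argmax(cc)])

# Synthetic bloom: the sedimentation pulse trails the chlorophyll a
# peak by 5 days (all shapes and numbers hypothetical).
t = np.arange(30)
chl = np.exp(-0.5 * ((t - 10) / 2.0) ** 2)
sed = np.exp(-0.5 * ((t - 15) / 2.0) ** 2)
```

A positive returned lag means sedimentation follows chlorophyll, which is the sign convention needed for time-correcting an e-ratio.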
Elsukov, V. K.; Latushkina, S. V.
2014-10-01
Problems of mathematically estimating the amount of recirculating ash and its effect on the efficiency of gas cleaning in ash collectors with scroll and semi-scroll gas supply equipped with a gas and ash recirculation system are considered. Based on the analysis of various publications and operational experience, a conclusion is drawn regarding the complex and substantial effect of the recirculation system on ash collector efficiency. The following research tasks are posed: computational determination of the ash mass at the ash collector inlet subject to recirculation, development of measures for enhancement of the ash collector, and estimation of these measures. A computational procedure for the flow of recirculating ash in the ash collector and its sections, using geometric-progression formulas, is presented. Based on this procedure, as applied to a TsBR-150U-1280 multicyclone collector capturing ash of coal from the Irsha-Borodinsk coalfield, the corresponding ash flows are determined, including the flow at which effective operation of the ash collector is ensured. Various variants of the modernization of this multicyclone collector are developed and evaluated. Conclusions are drawn regarding the need for further investigations to improve the presented procedure, in particular concerning the effect of gas velocity (boiler load) on the efficiency of the various ash collector units, the recirculating ash flow, and the clogging of the cyclone units.
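The geometric-progression accounting mentioned above can be sketched as follows: if a fraction r of the ash arriving at the collector is returned to the inlet on each pass, the steady-state inlet flow is the sum of a geometric series. The function and numbers are a hypothetical illustration, not the paper's procedure:

```python
def inlet_ash_flow(g0, r, n_passes=None):
    """Steady-state ash flow at the collector inlet with recirculation
    fraction r (0 <= r < 1); g0 is the fresh ash flow from the boiler.
    Closed form g0 / (1 - r); a finite n_passes sums the series explicitly:
    g0 * (1 + r + r^2 + ... + r^n)."""
    if n_passes is None:
        return g0 / (1.0 - r)
    return g0 * sum(r ** k for k in range(n_passes + 1))
```

For example, a fresh flow of 100 units with 20% recirculation gives a 125-unit inlet flow, and the truncated series converges to the closed form as passes accumulate.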
Geel, C.; Versluis, W.; Snel, J.F.H.
1997-01-01
The relation between photosynthetic oxygen evolution and Photosystem II electron transport was investigated for the marine algae Phaeodactylum tricornutum, Dunaliella tertiolecta, Tetraselmis sp., Isochrysis sp. and Rhodomonas sp. The rate of Photosystem II electron transport was estimated fr
Rijmen, Frank
2009-01-01
Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
H. Macharia (Harrison); P.P. Goos (Peter)
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela;
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given r...
Overview of efficient algorithms for super-resolution DOA estimates
Institute of Scientific and Technical Information of China (English)
闫锋刚; 沈毅; 刘帅; 金铭; 乔晓林
2015-01-01
Computationally efficient methods for super-resolution direction of arrival (DOA) estimation aim to reduce the complexity of conventional techniques, to economize on system costs, and to enhance the robustness of DOA estimators against array geometries and other environmental restrictions, which has been an important topic in the field. According to the theory and elements of the multiple signal classification (MUSIC) algorithm and the primary derivations from MUSIC, state-of-the-art efficient super-resolution DOA estimators are classified into five types. These five types of approaches reduce the complexity by real-valued computation, beam-space transformation, fast subspace estimation, rapid spectral search, and no spectral search, respectively. With such a classification, comprehensive overviews of each kind of efficient method are given, and numerical comparisons among these estimators are conducted to illustrate their advantages. Future development trends of efficient algorithms for super-resolution DOA estimates are finally predicted with basic requirements of real-world applications.
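As a baseline for the efficiency comparisons discussed above, a minimal sketch of classic MUSIC for a half-wavelength uniform linear array; the array size, source angles, and noise-free covariance are idealized assumptions (a real system would estimate R from snapshots):

```python
import numpy as np

def music_spectrum(R, n_src, grid_deg):
    """Classic MUSIC pseudospectrum for a half-wavelength-spaced ULA."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)                # eigenvalues in ascending order
    En = V[:, : M - n_src]                  # noise-subspace eigenvectors
    k = np.arange(M)
    P = np.empty(len(grid_deg))
    for i, th in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(1j * np.pi * k * np.sin(th))      # ULA steering vector
        P[i] = 1.0 / np.real((a.conj() @ En) @ (En.conj().T @ a))
    return P

# Noise-free covariance for two unit-power sources at -20 and +30 degrees,
# with a tiny diagonal loading standing in for the noise floor.
M = 8
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
R = A @ A.conj().T + 1e-6 * np.eye(M)

grid = np.arange(-90.0, 90.1, 0.5)
spec = music_spectrum(R, 2, grid)
peaks = np.sort(grid[np.argsort(spec)[-2:]])   # two largest spectrum values
```

The grid search over `spec` is exactly the spectral-search cost that the surveyed rapid-search and search-free variants set out to reduce.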
Cardot, Hervé; Zitt, Pierre-André
2011-01-01
With the progress of measurement apparatus and the development of automatic sensors it is not unusual anymore to get thousands of samples of observations taking values in high dimension spaces such as functional spaces. In such large samples of high dimensional data, outlying curves may not be uncommon and even a few individuals may corrupt simple statistical indicators such as the mean trajectory. We focus here on the estimation of the geometric median which is a direct generalization of the real median and has nice robustness properties. The geometric median being defined as the minimizer of a simple convex functional that is differentiable everywhere when the distribution has no atoms, it is possible to estimate it with online gradient algorithms. Such algorithms are very fast and can deal with large samples. Furthermore they also can be simply updated when the data arrive sequentially. We state the almost sure consistency and the L2 rates of convergence of the stochastic gradient estimator as well as the ...
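The averaged online gradient iteration described above can be sketched in a few lines; the step-size schedule, sample sizes, and outlier placement are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def geometric_median(X, steps=4000, lr=0.5):
    """Averaged stochastic gradient estimate of the geometric median,
    i.e. the minimizer of m -> E||X - m||. The gradient of ||x - m||
    in m is the unit vector -(x - m)/||x - m||, so each update is a
    bounded step toward the sampled point."""
    rng = np.random.default_rng(1)
    m = X[0].astype(float).copy()
    avg = m.copy()
    for n in range(1, steps + 1):
        x = X[rng.integers(len(X))]
        diff = x - m
        d = np.linalg.norm(diff)
        if d > 1e-12:
            m = m + (lr / n ** 0.6) * diff / d   # unit-norm gradient step
        avg = avg + (m - avg) / n                # Polyak-Ruppert averaging
    return avg

rng = np.random.default_rng(0)
# Bulk of observations near the origin plus one gross outlier: the mean
# is dragged toward the outlier, the geometric median is not.
X = np.vstack([rng.normal(0.0, 0.1, size=(200, 2)), [[1000.0, 1000.0]]])
est = geometric_median(X)
mean = X.mean(axis=0)
```

The bounded, unit-norm steps are what make the recursion both robust to outlying curves and cheap enough to update as data arrive sequentially.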
Kozai, Toyoki
2013-01-01
Extensive research has recently been conducted on plant factory with artificial light, which is one type of closed plant production system (CPPS) consisting of a thermally insulated and airtight structure, a multi-tier system with lighting devices, air conditioners and fans, a CO2 supply unit, a nutrient solution supply unit, and an environment control unit. One of the research outcomes is the concept of resource use efficiency (RUE) of CPPS.This paper reviews the characteristics of the CPPS compared with those of the greenhouse, mainly from the viewpoint of RUE, which is defined as the ratio of the amount of the resource fixed or held in plants to the amount of the resource supplied to the CPPS.It is shown that the use efficiencies of water, CO2 and light energy are considerably higher in the CPPS than those in the greenhouse. On the other hand, there is much more room for improving the light and electric energy use efficiencies of CPPS. Challenging issues for CPPS and RUE are also discussed.
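The RUE definition given above is a simple ratio; a one-line sketch with hypothetical numbers:

```python
def resource_use_efficiency(fixed_in_plants, supplied):
    """RUE as defined above: the amount of a resource fixed or held in
    the plants divided by the amount supplied to the closed plant
    production system (same units for both arguments)."""
    return fixed_in_plants / supplied

# e.g. 2 kg of water retained in plant tissue per 8 kg supplied -> RUE 0.25
water_rue = resource_use_efficiency(2.0, 8.0)
```

Computed per resource (water, CO2, light energy, electric energy), this is the quantity on which the CPPS-versus-greenhouse comparison in the review rests.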
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst showed that the power consumption for partially closed and completely closed regenerable life support systems was estimated as 3.5 kW/individual and 10-12 kW/individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first law efficiency of a system is usually greater than 50 percent. From the literature, the second law efficiency is usually about 10 percent. The estimation of the second law efficiency of the system indicates the percentage of energy degraded as irreversibilities within the process. This estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the
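The lost-work bookkeeping described above reduces, for a work-consuming unit, to two small formulas; the 1 kW / 10 kW numbers are hypothetical, chosen only to match the ~10% second-law figure quoted in the abstract:

```python
def lost_work(w_actual, w_ideal):
    """Lost work (irreversibility) of a work-consuming unit: the excess
    of the actual work input over the reversible (ideal) minimum."""
    return w_actual - w_ideal

def second_law_efficiency(w_ideal, w_actual):
    """eta_II = W_ideal / W_actual for a work-consuming process."""
    return w_ideal / w_actual

# Illustrative unit: 1 kW ideally required, 10 kW actually drawn.
eta = second_law_efficiency(1.0, 10.0)   # eta_II = 0.1, i.e. ~10%
irrev = lost_work(10.0, 1.0)             # 9 kW degraded to irreversibility
```

Ranking ECLSS units by `irrev` is precisely the screening step the paper proposes for identifying candidates for engineering improvement.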
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
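The analysis-of-variance idea behind the method can be sketched with a toy two-level Monte Carlo experiment; the cost model and all numbers are hypothetical, not from a real health-economic model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_outer, n_inner = 200, 50     # parameter draws x simulated patients per draw

# Toy patient-level model: each PSA run draws a cost parameter
# theta ~ N(100, 5^2); each simulated patient adds N(0, 20^2) noise.
theta = rng.normal(100.0, 5.0, size=n_outer)
costs = theta[:, None] + rng.normal(0.0, 20.0, size=(n_outer, n_inner))

run_means = costs.mean(axis=1)
# ANOVA decomposition: Var(run mean) = Var(theta) + sigma^2 / n_inner,
# so the parameter-uncertainty component is recovered by subtracting the
# patient-level Monte Carlo noise from the between-run variance.
within = costs.var(axis=1, ddof=1).mean()            # ~ sigma^2 = 400
between = run_means.var(ddof=1) - within / n_inner   # ~ Var(theta) = 25
```

Separating the two variance components in this way is what lets the paper's formulae trade outer runs against inner patients to choose optimal sample sizes for a fixed computational budget.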
Directory of Open Access Journals (Sweden)
Riesgo Ana
2012-11-01
Full Text Available Abstract Introduction Traditionally, genomic or transcriptomic data have been restricted to a few model or emerging model organisms, and to a handful of species of medical and/or environmental importance. Next-generation sequencing techniques have the capability of yielding massive amounts of gene sequence data for virtually any species at a modest cost. Here we provide a comparative analysis of de novo assembled transcriptomic data for ten non-model species of previously understudied animal taxa. Results cDNA libraries of ten species belonging to five animal phyla (2 Annelida [including Sipuncula], 2 Arthropoda, 2 Mollusca, 2 Nemertea, and 2 Porifera were sequenced in different batches with an Illumina Genome Analyzer II (read length 100 or 150 bp, rendering between ca. 25 and 52 million reads per species. Read thinning, trimming, and de novo assembly were performed under different parameters to optimize output. Between 67,423 and 207,559 contigs were obtained across the ten species, post-optimization. Of those, 9,069 to 25,681 contigs retrieved blast hits against the NCBI non-redundant database, and approximately 50% of these were assigned with Gene Ontology terms, covering all major categories, and with similar percentages in all species. Local blasts against our datasets, using selected genes from major signaling pathways and housekeeping genes, revealed high efficiency in gene recovery compared to available genomes of closely related species. Intriguingly, our transcriptomic datasets detected multiple paralogues in all phyla and in nearly all gene pathways, including housekeeping genes that are traditionally used in phylogenetic applications for their purported single-copy nature. Conclusions We generated the first study of comparative transcriptomics across multiple animal phyla (comparing two species per phylum in most cases, established the first Illumina-based transcriptomic datasets for sponge, nemertean, and sipunculan species, and
Estimating the efficiency of P/V systems under a changing climate - the case study of Greece.
Grillakis, Manolis; Panagea, Ioanna; Koutroulis, Aristeidis; Tsanis, Ioannis
2014-05-01
The effect of climate change on PV output is studied for the region of Greece. Solar radiation and temperature data from 9 RCMs of the ENSEMBLES EU FP6 project are used to estimate the effect of these two parameters on future PV system output over Greece. Examining the relative contributions of temperature and irradiance, a significant reduction due to the temperature increase is projected, which is however outweighed by the irradiance increase, resulting in an overall output increase for photovoltaic systems. Nonetheless, in some cases the temperature increase is too large to be compensated by the increased irradiance, resulting in a reduction of PV output of up to 3. This is projected after the 2050s for the eastern parts of the Greek mainland, the Aegean islands and some areas of Crete. Results show that PV output is projected to have an increasing trend in all regions of Greece until 2050, and a steeper increasing trend thereafter until 2100. Moreover, high-resolution topographic information was combined with the PV output results, producing high-resolution maps of favorability for future PV system installation.
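The competing irradiance and temperature effects described above can be sketched with a standard simplified PV performance model; the NOCT value, temperature coefficient, and reference conditions below are typical textbook assumptions, not the values used in the study:

```python
def pv_output(g, t_air, g_ref=1000.0, t_ref=25.0,
              gamma=-0.004, noct=45.0, p_rated=1.0):
    """Relative PV output from plane-of-array irradiance g (W/m2) and air
    temperature (deg C), using the NOCT cell-temperature estimate
    t_cell = t_air + (NOCT - 20)/800 * g and a linear power-temperature
    coefficient gamma (per K). All parameter values are illustrative."""
    t_cell = t_air + (noct - 20.0) / 800.0 * g
    return p_rated * (g / g_ref) * (1.0 + gamma * (t_cell - t_ref))

out_cool = pv_output(1000.0, 20.0)   # same irradiance, cooler air
out_warm = pv_output(1000.0, 30.0)   # warming alone reduces output
```

Holding irradiance fixed, warming lowers output through `gamma`; whether future output rises or falls at a site then depends on whether the projected irradiance gain outweighs this derating, which is exactly the balance the study evaluates per region.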
Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.
2012-10-01
One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detection and identification of explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more exactly, a spectral norm) for this aim. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the averaged value of the THz signal over the analysis window, i.e., we remove the constant component from this part of the signal. In this way we can increase the contrast of the assessment. We compare the application of the Fourier-Gabor transform with an unbounded (for example, Gaussian) window sliding along the signal for finding the spectral line dynamics with the application of the Fourier transform on a short time interval (FTST), in which the Fourier transform is applied to parts of the signal, for the same aim. These methods are close to each other; nevertheless, they differ in the series of frequencies they use. It is important for practice that the optimal window shape depends on the method chosen for obtaining the spectral dynamics. The detection probability is enhanced if we can find a train of pulses with different frequencies following sequentially. We show that it is possible to obtain pure spectral line dynamics even when the spectrum of the substance's response to the THz pulse is distorted.
Rispail, Nicolas; Rubiales, Diego
2015-01-01
Fusarium wilts are widespread diseases affecting most agricultural crops. In absence of efficient alternatives, sowing resistant cultivars is the preferred approach to control this disease. However, actual resistance sources are often overcome by new pathogenic races, forcing breeders to continuously search for novel resistance sources. Selection of resistant accessions, mainly based on the evaluation of symptoms at timely intervals, is highly time-consuming. Thus, we tested the potential of an infra-red imaging system in plant breeding to speed up this process. For this, we monitored the changes in surface leaf temperature upon infection by F. oxysporum f. sp. pisi in several pea accessions with contrasting response to Fusarium wilt under a controlled environment. Using a portable infra-red imaging system we detected a significant temperature increase of at least 0.5 °C after 10 days post-inoculation in the susceptible accessions, while the resistant accession temperature remained at control level. The increase in leaf temperature at 10 days post-inoculation was positively correlated with the AUDPC calculated over a 30 days period. Thus, this approach allowed the early discrimination between resistant and susceptible accessions. As such, applying infra-red imaging system in breeding for Fusarium wilt resistance would contribute to considerably shorten the process of selection of novel resistant sources.
Zou, C X; Lively, F O; Wylie, A R G; Yan, T
2016-04-01
Seventeen non-lactating dairy-bred suckler cows (LF; Limousin×Holstein-Friesian) and 17 non-lactating beef composite breed suckler cows (ST; Stabiliser) were used to study enteric methane emissions and energy and nitrogen (N) utilization on grass silage diets. Cows were housed in cubicle accommodation for 17 days, then moved to individual tie-stalls for an 8-day digestibility balance including a 2-day adaptation, followed by immediate transfer to indirect, open-circuit respiration calorimeters for 3 days, with gaseous exchange recorded over the last two of these days. Grass silage was offered ad libitum once daily at 0900 h throughout the study. There were no significant differences (P>0.05) between the genotypes in energy intakes, energy outputs or energy use efficiency, in methane emission rates (methane emissions per unit of dry matter intake or energy intake), or in N metabolism characteristics (N intake or N output in faeces or urine). Accordingly, the data for both cow genotypes were pooled and used to develop relationships between inputs and outputs. Regression of energy retention against ME intake (r2=0.52) gave estimates of the energy requirement for maintenance of 0.386, 0.392 and 0.375 MJ/kg0.75 for LF+ST, LF and ST, respectively. Methane energy output was 0.066 of gross energy intake when the intercept was omitted from the linear equation (r2=0.59). The study provides estimates of maintenance energy requirement, methane emission and manure N output for suckler cows; further information is required to evaluate their application in a wide range of suckler production systems.
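The intercept-free regression used above for the methane fraction has a simple closed form; the intake values below are hypothetical, constructed so that methane energy is exactly 6.6% of gross energy intake as in the pooled result:

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope of y = b*x with the intercept omitted:
    b = sum(x*y) / sum(x^2). This is the regression form used above
    for methane energy output as a fraction of gross energy intake."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return float(x @ y / (x @ x))

ge_intake = np.array([80.0, 100.0, 120.0, 140.0])   # MJ/day, hypothetical
ch4_energy = 0.066 * ge_intake                       # exactly 6.6% of GE
b = slope_through_origin(ge_intake, ch4_energy)
```

Forcing the line through the origin makes the slope directly interpretable as the emitted fraction of gross energy, which is why the intercept was omitted in the pooled analysis.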
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They maximize the likelihood of some experimentally observed data with respect to the model parameters iteratively, following the gradient of the logarithm of the likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/.
Jones, John W.
2015-01-01
The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments
Phesatcha, Burarat; Wanapat, Metha; Phesatcha, Kampanat; Ampapon, Thiwakorn; Kang, Sungchhang
2016-10-01
Four rumen-fistulated dairy steers, 3 years old with 180 ± 15 kg body weight (BW), were randomly assigned according to a 4 × 4 Latin square design to investigate the effect of Flemingia macrophylla hay meal (FMH) and cassava hay meal (CH) supplementation on rumen fermentation efficiency and estimated methane production. The treatments were as follows: T1 = no supplement, T2 = CH supplementation at 150 g/head/day, T3 = FMH supplementation at 150 g/head/day, and T4 = CH + FMH supplementation at 75 and 75 g/head/day. All steers were fed rice straw ad libitum and concentrate was offered at 0.5% of BW. Results revealed that supplementation of CH and/or FMH did not affect feed intake (P > 0.05), while digestibility of crude protein and neutral detergent fiber was increased, especially in steers receiving FMH and CH+FMH, and estimated methane production was decreased by the dietary treatments. Protozoa and fungi populations were not affected by the dietary supplements, while viable bacteria counts increased in steers receiving FMH. Supplementation of FMH and/or FMH+CH increased microbial crude protein and the efficiency of microbial nitrogen supply. This study concluded that FMH (150 g/head/day) and/or CH+FMH (75 and 75 g/head/day) supplementation could be used as a rumen enhancer for increasing nutrient digestibility, rumen fermentation efficiency, and microbial protein synthesis while decreasing estimated methane production, without adverse effects on voluntary feed intake of dairy steers fed rice straw.
Ionkin, I. L.; Ragutkin, A. V.; Luning, B.; Zaichenko, M. N.
2016-06-01
To enhance the efficiency of natural gas utilization in boilers, condensation heat utilizers of low-potential heat, constructed on the basis of a contact heat exchanger, can be applied. A schematic of the contact heat exchanger with a humidifier for preheating and humidifying the air supplied to the boiler for combustion is given. Additional low-potential heat in this scheme is utilized for heating the return water supplied from the heating system. Preheating and humidifying the combustion air make it possible to use the condensation utilizer to heat the heat-transfer agent to a temperature exceeding the dew point of the water vapor contained in the combustion products. The decision to mount the condensation heat utilizer on the boiler was taken based on a preliminary estimate of the additionally obtained heat. The operating efficiency of the condensation heat utilizer is determined by its structure and by the operating conditions of the boiler and the heating system. Software was developed for the thermal design of the condensation heat utilizer equipped with a humidifier. Computational investigations of its operation are carried out as a function of various operating parameters of the boiler and the heating system (temperature of the return water and flue gases, excess air, air temperature at the inlet and outlet of the condensation heat utilizer, heating and humidifying of air in the humidifier, and the fraction of circulating water). The heat recuperation efficiency is estimated for various operating conditions of the boiler and the condensation heat utilizer. Recommendations on the most effective application of the condensation heat utilizer are developed.
Directory of Open Access Journals (Sweden)
Davide Biagini
2011-04-01
Full Text Available To evaluate the effect of sexual neutering and age of castration on empty body weight (EBW) components and estimated nitrogen excretion and efficiency, a trial was carried out on 3 groups of double-muscled Piemontese calves: early castrated (EC, 5th month of age), late castrated (LC, 12th month of age) and intact males (IM, control group). Animals were fed at the same energy and protein level and slaughtered at the 18th month of age. Live and slaughtering performances and EBW components were recorded, whereas N excretion was calculated as the difference between diet and weight-gain N content. In live and slaughtering performances, IM showed higher final, carcass and total meat weight than EC and LC (P<0.01). In EBW components, IM showed higher blood and head weight than EC and LC (P<0.01 and 0.05, respectively), and differences were found between EC and LC for head weights (P<0.01). IM showed higher body crude protein (BCP) than EC and LC (P<0.01 and 0.05, respectively), but the BCP/EBW ratio was higher only in IM compared with EC (P<0.05). Estimated N daily gain was higher in IM than EC and LC (P<0.01). Only LC showed higher excretion than IM (P<0.05), and N efficiency was higher in IM than EC and LC (P<0.05 and 0.01, respectively). In conclusion, for Piemontese hypertrophied cattle, castration significantly increases N excretion (+7%) and reduces N efficiency (-15%), leading to a lower level of sustainability.
Directory of Open Access Journals (Sweden)
Constantin E Uhlig
Full Text Available AIMS: To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts. METHODS: Invitations to participate in a survey were distributed via e-mail to employees of two university hospitals (E1 and E2) and to members of a medical association (E3), as a link placed in a special text on the municipal homepage regularly read by the administrative employees of two cities (H1 and H2), and in paper form to workers at an automobile enterprise (P1) and to college (P2) and senior (P3) students. The main parameters analyzed included the numbers of invited and actual participants, and the time and cost to complete the survey. Statistical analysis was descriptive, except for the Kruskal-Wallis H-test, which was used to compare the three recruitment methods. Cost efficiencies were compared and extrapolated to different-sized cohorts. RESULTS: The ratios of completely answered questionnaires to distributed questionnaires were between 81.5% (E1) and 97.4% (P2). Between 6.4% (P1) and 57.0% (P2) of the invited participants completely answered the questionnaires. The costs per completely answered questionnaire were $0.57-$1.41 (E1-3), $1.70 and $0.80 for H1 and H2, respectively, and $3.36-$4.21 (P1-3). Based on our results, electronic surveys with 10, 20, 30, or 42 questions would be estimated to be most cost (and time) efficient if more than 101.6-225.9 (128.2-391.7), 139.8-229.2 (93.8-193.6), 165.8-230.6 (68.7-115.7), or 188.2-231.5 (44.4-72.7) participants were required, respectively. CONCLUSIONS: Study efficiency depended on the technical modalities of the survey methods and the engagement of the participants. Our results suggest that in similar projects requiring more than two to three hundred participants, the most efficient way of conducting a questionnaire-based survey is likely via the Internet with a digital questionnaire.
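The cost extrapolation described above amounts to a fixed-plus-variable cost model per survey mode. A minimal sketch, using illustrative numbers rather than the study's actual figures, shows how the break-even participant count between two modes can be computed:

```python
# Break-even analysis between two survey modes, sketched from the cost
# structure described in the abstract: total cost = fixed setup cost
# plus a per-respondent cost. All numbers below are illustrative
# assumptions, not the study's figures.

def break_even(fixed_a, var_a, fixed_b, var_b):
    """Smallest participant count n at which mode A becomes cheaper
    than mode B, assuming total cost = fixed + var * n."""
    if var_a >= var_b:
        raise ValueError("mode A never becomes cheaper per respondent")
    n = (fixed_a - fixed_b) / (var_b - var_a)
    return max(0.0, n)

# Hypothetical example: an electronic survey has a higher setup cost but
# a much lower cost per completed questionnaire than a paper one.
n_star = break_even(fixed_a=200.0, var_a=0.8, fixed_b=20.0, var_b=3.5)
print(f"electronic survey cheaper beyond ~{n_star:.0f} participants")
```

With these assumed costs the electronic mode pays off once roughly 67 participants are needed, mirroring the study's conclusion that digital surveys win for larger cohorts.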
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, so it is imperative to avoid or minimize future damages. Second, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. Urban landscapes are also particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have made available fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels, and on mesh design, parameterization, and sub-grid models that promise improved performance with respect to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Nikitidou, E.; Kazantzidis, A.; De Bock, V.; De Backer, H.
2013-04-01
The measurements of aerosol optical depth, total ozone and UV irradiance from a Brewer spectrophotometer located at Uccle, Belgium, were used to estimate, for the first time at a typical site in Western Europe, the aerosol radiative forcing efficiency (RFE; the forcing per unit of aerosol optical depth). The study was performed at selected solar zenith angles during the period July 2006-May 2010. In the 300-360 nm spectral region, the highest values were found at 30° (-6.9 ± 0.9 W m-2), while at 60° the RFE was almost 2.5 times lower (-2.7 ± 0.1 W m-2). In the UV-B region (300-315 nm), the RFE value at 60° (-0.069 ± 0.005 W m-2) was 5 times lower than the corresponding value at 30° (-0.35 ± 0.04 W m-2). Extending previous studies of aerosol single scattering albedo estimation in UV-A wavelengths down to 340 nm, an attempt was made, taking advantage of the Brewer measurements, to provide estimates at low UV-A wavelengths and in the UV-B region. The estimated monthly averages of the Brewer single scattering albedo at 320 nm are in very close agreement (within ±0.01) with measurements at 440 nm from a collocated CIMEL sun photometer. Due to increased measurement uncertainties and the effect of ozone absorption, large differences between the two instruments were found at 306.5 nm. For the remaining wavelengths, average differences of up to 0.03 were found.
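In the notation commonly used in such studies (the abstract itself does not spell it out), the quantity being estimated is simply the aerosol forcing normalized by the optical depth:

```latex
\mathrm{RFE} = \frac{\Delta F_{\mathrm{aer}}}{\tau_a}
```

where $\Delta F_{\mathrm{aer}}$ is the aerosol-induced change in irradiance (W m$^{-2}$) and $\tau_a$ the aerosol optical depth, so RFE is reported in W m$^{-2}$ per unit optical depth, consistent with the values quoted above.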
Estimation of project management efficiency
Directory of Open Access Journals (Sweden)
Novotorov Vladimir Yurevich
2011-03-01
Full Text Available In modern conditions, the effectiveness of enterprises depends to an ever greater degree on their methods of management and forms of doing business. Organizations should choose the management strategy that is most effective for them, taking into account the existing legislation, the concrete conditions of their activity, their financial, economic and investment potential, and their development strategy. Introducing a common system for planning and implementing the organization's strategy will make it possible to ensure steady development and long-term social and economic growth of the company.
Directory of Open Access Journals (Sweden)
Ashok Sahai
2016-02-01
Full Text Available This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (CV) is "rather very large" though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm that exploits the sample CV for efficient normal mean estimation. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study of the gain in relative efficiency with respect to the usual unbiased estimator (the sample mean, "UUE"). Nevertheless, we examine the relative efficiencies of our estimators with respect to the UUE by means of an illustrative empirical simulation study. MATLAB 7.7.0.471 (R2008b) was used to program this illustrative simulated empirical numerical study. DOI: 10.15181/csat.v4i1.1091
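The paper's iterative algorithm is not reproduced in the abstract. As a hedged illustration of the underlying idea, the sketch below simulates the classical Searls-type shrinkage estimator, which likewise exploits the CV to beat the sample mean; it stands in for the paper's estimator purely for demonstration:

```python
import numpy as np

# Illustrative simulation (not the paper's algorithm): for known CV C,
# the Searls-type shrinkage estimator  T = ybar / (1 + C^2/n)  trades a
# small bias for a variance reduction, lowering MSE when C is large and
# n is small. We compare its MSE against the sample mean's.

rng = np.random.default_rng(0)
mu, cv, n, reps = 10.0, 1.5, 10, 20000   # "rather large" CV, small sample
sigma = cv * mu

# Draw the sampling distribution of the sample mean directly.
ybar = rng.normal(mu, sigma / np.sqrt(n), size=reps)
searls = ybar / (1.0 + cv**2 / n)        # shrunken estimator

mse_mean = np.mean((ybar - mu) ** 2)
mse_searls = np.mean((searls - mu) ** 2)
print(f"relative efficiency of Searls vs sample mean: "
      f"{mse_mean / mse_searls:.2f}")
```

For these settings the theoretical relative efficiency is about 1.22, i.e. a roughly 20% MSE reduction over the sample mean, in the spirit of the gains the paper investigates.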
On asymptotic behavior of solutions to several classes of discrete dynamical systems
Institute of Scientific and Technical Information of China (English)
LIAO; Xiaoxin(廖晓昕)
2002-01-01
In this paper, a new complete and simplified proof of the Husainov-Nikiforova theorem is given. The theorem is then generalized to the case where the coefficients may have different signs, as well as to nonlinear systems. Using these results, the robust stability and the robustness bound of high-order interval discrete dynamical systems are studied, which can be applied to designing stable discrete control systems as well as to stabilizing a given unstable control system.
Maruyama, A.; Kuwagata, T.
2010-12-01
The effect of changes in the growing season of rice, an agronomic adaptation to climate change, on water use efficiency (WUE) was estimated using a coupled land surface and crop growth model. The crop growth model calculates phenological development (Ps), growth of leaf area index (LAI) and canopy height (h). The land surface model calculates the surface energy budget, radiation transport and stomatal movements using the output of the crop growth model (Ps, LAI, h). An empirical relationship between stomatal conductance and phenological stage was used for this calculation, and the relationship between leaf geometry and phenological stage was used to express the change in radiation transport in the canopy. Variations in evapotranspiration (ET) were estimated using the coupled model for five different transplanting times (from March to July) based on climatic data for the Miyazaki Plain, Japan. The seasonal variation in ET showed a common pattern: most of the ET just after transplanting consisted of evaporation (E), and transpiration (T) increased with rice growth until heading. However, the timing of the increase in T varied with the growing season because of differences in the LAI growth rate. The ratios of total transpiration to evapotranspiration (T/ET) were 40, 48, 48, 46 and 36% for transplanting on March 1st, April 1st, May 1st, June 1st and July 1st, respectively. Assuming that the amount of production by photosynthesis is proportional to transpiration, our results suggest that WUE would be higher in the mid growing season.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
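The bootstrap step described here can be sketched in a few lines. The Monte Carlo efficiency is eps = 1/(T·var), so the efficiency gain of correlated over conventional sampling is gain = (T_conv·var_conv)/(T_corr·var_corr); resampling per-history scores yields a distribution of gain estimates. The scores and timings below are synthetic stand-ins, not the study's dosimetry data:

```python
import numpy as np

# Bootstrap confidence interval for a Monte Carlo efficiency gain.
# Synthetic per-history scores: correlated sampling costs more time per
# history here but has a much smaller variance.

rng = np.random.default_rng(1)
conv = rng.normal(1.0, 0.30, size=5000)   # per-history scores, conventional
corr = rng.normal(1.0, 0.05, size=5000)   # correlated sampling scores
t_conv, t_corr = 1.0, 1.4                 # relative CPU time per history

def gain(a, b):
    # efficiency gain = (T_conv * var_conv) / (T_corr * var_corr)
    return (t_conv * a.var(ddof=1)) / (t_corr * b.var(ddof=1))

# Resample histories with replacement to simulate the distribution of
# the gain estimate, then read off a 95% percentile interval.
boot = np.array([
    gain(rng.choice(conv, conv.size), rng.choice(corr, corr.size))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"efficiency gain ~ {gain(conv, corr):.1f}, "
      f"95% CI [{lo:.1f}, {hi:.1f}]")
```

The study uses the shortest 95% interval rather than the symmetric percentile interval shown here; the resampling logic is otherwise the same.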
Energy Technology Data Exchange (ETDEWEB)
Mukhopadhyay, Nitai D. [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA 23298 (United States); Sampson, Andrew J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Deniz, Daniel; Alm Carlsson, Gudrun [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Williamson, Jeffrey [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Malusek, Alexandr, E-mail: malusek@ujf.cas.cz [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Department of Radiation Dosimetry, Nuclear Physics Institute AS CR v.v.i., Na Truhlarce 39/64, 180 86 Prague (Czech Republic)
2012-01-15
Cabrera-Bosquet, Llorenç; Fournier, Christian; Brichet, Nicolas; Welcker, Claude; Suard, Benoît; Tardieu, François
2016-10-01
Light interception and radiation-use efficiency (RUE) are essential components of plant performance. Their genetic dissections require novel high-throughput phenotyping methods. We have developed a suite of methods to evaluate the spatial distribution of incident light, as experienced by hundreds of plants in a glasshouse, by simulating sunbeam trajectories through glasshouse structures every day of the year; the amount of light intercepted by maize (Zea mays) plants via a functional-structural model using three-dimensional (3D) reconstructions of each plant placed in a virtual scene reproducing the canopy in the glasshouse; and RUE, as the ratio of plant biomass to intercepted light. The spatial variation of direct and diffuse incident light in the glasshouse (up to 24%) was correctly predicted at the single-plant scale. Light interception largely varied between maize lines that differed in leaf angles (nearly stable between experiments) and area (highly variable between experiments). Estimated RUEs varied between maize lines, but were similar in two experiments with contrasting incident light. They closely correlated with measured gas exchanges. The methods proposed here identified reproducible traits that might be used in further field studies, thereby opening up the way for large-scale genetic analyses of the components of plant performance.
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk that allows us to select an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
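The abstract's setting is covariance estimation; as a toy illustration of the URE principle in the simplest Gaussian mean case (an assumption made purely for demonstration, not the paper's model), one can select a shrinkage level by minimizing an unbiased estimate of the risk, with no access to the truth:

```python
import numpy as np

# URE toy example: for y ~ N(theta, s^2 I) and the family
# hat(theta)_lam = lam * y, the quantity
#     URE(lam) = (1 - lam)^2 ||y||^2 + (2*lam - 1) * n * s^2
# is an unbiased estimate of the risk E||lam*y - theta||^2, so lam can
# be chosen by minimizing URE over a grid.

rng = np.random.default_rng(2)
n, s = 500, 1.0
theta = rng.normal(0.0, 0.5, size=n)     # unknown truth (only for checking)
y = theta + rng.normal(0.0, s, size=n)

lams = np.linspace(0.0, 1.0, 101)
ure = (1 - lams) ** 2 * np.sum(y ** 2) + (2 * lams - 1) * n * s ** 2
lam_hat = lams[np.argmin(ure)]

# Oracle shrinkage for comparison: lam* = ||theta||^2/(||theta||^2 + n s^2)
lam_star = np.sum(theta ** 2) / (np.sum(theta ** 2) + n * s ** 2)
print(f"URE-selected lam = {lam_hat:.2f}, oracle lam = {lam_star:.2f}")
```

The URE-selected shrinkage lands close to the oracle value, which is exactly the behavior the paper's oracle inequality guarantees in its covariance setting.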
Dubrovskaya, Ekaterina; Turkovskaya, Olga
2010-05-01
Estimation of the efficiency of hydrocarbon mineralization in soil by measuring CO2 emission and variations in the isotope composition of carbon dioxide. E. Dubrovskaya1, O. Turkovskaya1, A. Tiunov2, N. Pozdnyakova1, A. Muratova1; 1 - Institute of Biochemistry and Physiology of Plants and Microorganisms, RAS, Saratov; 2 - A.N. Severtsov Institute of Ecology and Evolution, RAS, Moscow, Russian Federation. Hydrocarbon mineralization in soil undergoing phytoremediation was investigated in a laboratory experiment by estimating the variation in the 13C/12C ratio in the respired CO2. Hexadecane (HD) was used as a model hydrocarbon pollutant. The polluted soil was planted with winter rye (Secale cereale) inoculated with Azospirillum brasilense strain SR80, which combines the abilities to promote plant growth and to degrade oil hydrocarbons. Each vegetated treatment was accompanied by a corresponding nonvegetated one, and uncontaminated treatments were used as controls. The emission of carbon dioxide, its isotopic composition, and the residual concentration of HD in the soil were examined after two and four weeks. At the beginning of the experiment, the CO2-emission level was higher in the uncontaminated than in the contaminated soil. After two weeks, the quantity of emitted carbon dioxide had decreased roughly threefold and did not change significantly in any of the uncontaminated treatments. The presence of HD in the soil initially increased CO2 emission, but later the respiration was reduced. During the first two weeks, nonvegetated soil had the highest CO2-emission level; subsequently, the maximum increase in respiration was recorded in the vegetated contaminated treatments. The isotope composition of plant material determines the isotope composition of soil. The soil used in our experiment had an isotopic signature typical of soils formed by C3 plants (δ13C, -22.4‰). Generally, there was no significant fractionation of the carbon isotopes of the substrates metabolized by the
Sannigrahi, Srikanta; Sen, Somnath; Paul, Saikat
2016-04-01
Net Primary Production (NPP) of a mangrove ecosystem and its capacity to sequester carbon from the atmosphere may be used to quantify regulatory ecosystem services. Three major groups of parameters were set up: BioClimatic Parameters (BCP: Photosynthetically Active Radiation (PAR), Absorbed PAR (APAR), Fraction of PAR (FPAR), Photochemical Reflectance Index (PRI), Light Use Efficiency (LUE)); BioPhysical Parameters (BPP: Normalized Difference Vegetation Index (NDVI), scaled NDVI, Enhanced Vegetation Index (EVI), scaled EVI, Optimized and Modified Soil Adjusted Vegetation Indices (OSAVI, MSAVI), Leaf Area Index (LAI)); and Environmental Limiting Parameters (ELP: Temperature Stress (TS), Land Surface Water Index (LSWI), Normalized Soil Water Index (NSWI), Water Stress Scalar (WS), inversed WS (iWS), Land Surface Temperature (LST), scaled LST, Vapor Pressure Deficit (VPD), scaled VPD, and Soil Water Deficit Index (SWDI)). Several LUE models, namely the Carnegie Ames Stanford Approach (CASA), Eddy Covariance-LUE (EC-LUE), Global Production Efficiency Model (GloPEM), Vegetation Photosynthesis Model (VPM), the MOD NPP model, the Temperature and Greenness model (TG), the Greenness and Radiation model (GR) and MOD17, were adopted in this study to assess the spatiotemporal nature of carbon fluxes. Above and Below Ground Biomass (AGB and BGB) were calculated using field-based estimation of OSAVI and NDVI. A microclimatic zonation was set up to assess the impact of coastal climate on environmental limiting factors. MODerate Resolution Imaging Spectroradiometer (MODIS) based yearly Gross Primary Production (GPP) and the NPP product MOD17 were also tested against the LUE-based results with standard model validation statistics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), bias, Coefficient of Variation (CV) and Coefficient of Determination (R2). The performance of CASA NPP was tested against ground-based NPP with R2 = 0.89, RMSE = 3.28, P = 0.01. Among all the adopted models, EC
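The models compared here share the light-use-efficiency logic: production is absorbed light times a maximum efficiency, down-regulated by environmental scalars. A minimal sketch of that shared structure follows; the parameter values are illustrative placeholders, not the study's:

```python
# Generic LUE-model skeleton shared by CASA, EC-LUE, VPM, etc.:
#   NPP = APAR * eps,  APAR = PAR * FPAR,  eps = eps_max * T * W,
# where T and W are temperature- and water-stress scalars in [0, 1].
# All numbers below are hypothetical, for illustration only.

def lue_npp(par, fpar, eps_max, t_scalar, w_scalar):
    """NPP (g C m-2) from PAR (MJ m-2), FPAR, maximum light-use
    efficiency eps_max (g C MJ-1) and two stress scalars."""
    apar = par * fpar
    return apar * eps_max * t_scalar * w_scalar

# Hypothetical pixel: modest water stress, mild temperature stress.
npp = lue_npp(par=250.0, fpar=0.8, eps_max=0.55, t_scalar=0.9, w_scalar=0.7)
print(f"NPP ~ {npp:.1f} g C m-2")
```

The individual models differ mainly in how FPAR and the stress scalars are derived from the remote-sensing indices listed above (NDVI, EVI, LSWI, LST, VPD, and so on).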
Institute of Scientific and Technical Information of China (English)
农秀丽; 李玲玲
2012-01-01
This paper discusses the narrow-sense conditional root-square estimator and the generalized conditional root-square estimator for the linear regression model with inhomogeneous equality restrictions. The two estimators are compared in terms of relative efficiency, and it is shown that, under certain conditions, the generalized conditional root-square estimator is no less efficient than the narrow-sense one. The bounds of the two estimators are compared under restrictions on the root-square parameter, so that an appropriate root-square parameter can be chosen to give the generalized conditional root-square estimator good properties in terms of mean squared error.
Energy Technology Data Exchange (ETDEWEB)
Gonzales, John
2015-04-02
Presentation by Senior Engineer John Gonzales on Evaluating Investments in Natural Gas Vehicles and Infrastructure for Your Fleet using the Vehicle Infrastructure Cash-flow Estimation (VICE) 2.0 model.
National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses over sampled echo components at a rate...
Monson, D. J.
1978-01-01
Based on expected advances in technology, the maximum system efficiency and minimum specific mass have been calculated for closed-cycle CO and CO2 electric-discharge lasers (EDL's) and a direct solar-pumped laser in space. The efficiency calculations take into account losses from excitation gas heating, ducting frictional and turning losses, and the compressor efficiency. The mass calculations include the power source, radiator, compressor, fluids, ducting, laser channel, optics, and heat exchanger for all of the systems; and in addition the power conditioner for the EDL's and a focusing mirror for the solar-pumped laser. The results show the major component masses in each system, show which is the lightest system, and provide the necessary criteria for solar-pumped lasers to be lighter than the EDL's. Finally, the masses are compared with results from other studies for a closed-cycle CO2 gasdynamic laser (GDL) and the proposed microwave satellite solar power station (SSPS).
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of the various kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed efficiently from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system typically uses a direct factorization, the macroscopic tangent operators can then be computed by reusing this factorized matrix at a reduced computational cost.
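The tangent-computation step can be sketched as follows: the stiffness matrix is factorized once (as the Newton solver already requires), and the macroscopic tangent follows from back-substituting a block of right-hand sides, one column per macroscopic kinematic variable. The small symmetric stand-in matrix and random right-hand sides below are placeholders, not an actual homogenization problem:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Multiple-right-hand-side solve with a reused factorization.
rng = np.random.default_rng(3)
n, n_macro = 50, 6                   # microscopic DOFs, macro variables
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)          # SPD stand-in for the stiffness matrix
B = rng.normal(size=(n, n_macro))    # one RHS per macro kinematic variable

lu, piv = lu_factor(K)               # factorize once ...
X = lu_solve((lu, piv), B)           # ... back-substitute for all RHS at once

print("max residual:", np.abs(K @ X - B).max())
```

The factorization costs O(n^3) and each back-substitution only O(n^2), which is why reusing the converged Newton factorization makes the tangent computation cheap relative to the nonlinear solve itself.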
Institute of Scientific and Technical Information of China (English)
张英冕
2012-01-01
Proper evaluation of how marketing costs are used is the basis for improving marketing cost efficiency. Starting from an analysis of the factors that affect marketing cost efficiency, this article investigates methods for assessing marketing cost efficiency and puts forward recommendations for marketing cost control, aiming to provide a useful reference for telecommunications enterprises in allocating and using their marketing budgets.
El Gharamti, Mohamad
2016-11-15
This study considers the assimilation problem of subsurface contaminants at the port of Rotterdam in the Netherlands. It involves the estimation of solute concentrations and biodegradation rates of four different chlorinated solvents. We focus on assessing the efficiency of an adaptive hybrid ensemble Kalman filter and optimal interpolation (EnKF-OI) scheme and of the exact second-order sampling formulation (EnKFESOS) for mitigating the undersampling of the estimation and observation error covariances, respectively. A multi-dimensional, multi-species reactive transport model is coupled to simulate the migration of contaminants within a Pleistocene aquifer layer located around 25 m below mean sea level. The biodegradation chain of chlorinated hydrocarbons, starting from tetrachloroethene and ending with vinyl chloride, is modeled under anaerobic environmental conditions for five decades. Yearly pseudo-concentration data are used to condition the forecast concentrations and degradation rates in the presence of model and observational errors. Assimilation results demonstrate the robustness of the hybrid EnKF-OI for accurately calibrating the uncertain biodegradation rates. When implemented serially, the adaptive hybrid EnKF-OI scheme efficiently adjusts the weights of the involved covariances for each individual measurement. The EnKFESOS is shown to maintain the parameter ensemble spread much better, leading to more robust estimates of the states and parameters. On average, a well-tuned hybrid EnKF-OI and the EnKFESOS respectively suggest around 48 and 21% improved concentration estimates, as well as around 70 and 23% improved anaerobic degradation rates, over the standard EnKF. Incorporating large uncertainties in the flow model degrades the accuracy of the estimates of all schemes. Given that the performance of the hybrid EnKF-OI depends on the quality of the background statistics, satisfactory results were obtained only when the uncertainty imposed on the background
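The update underlying all of the compared schemes is the EnKF analysis step. A minimal stochastic (perturbed-observation) sketch is given below for a scalar, directly observed state; the hybrid EnKF-OI additionally blends the ensemble covariance with a static background covariance, a weighting omitted here for brevity:

```python
import numpy as np

# Minimal stochastic EnKF analysis step (scalar state, H = 1).
# Values are synthetic, standing in for a forecast concentration.
rng = np.random.default_rng(4)
N = 100
ens = rng.normal(5.0, 2.0, size=N)   # forecast ensemble
y, r = 3.0, 0.5 ** 2                 # observation and its error variance

pf = ens.var(ddof=1)                 # ensemble (forecast) variance
k = pf / (pf + r)                    # Kalman gain for a direct observation

# Perturb the observation once per member, then update each member.
perturbed = y + rng.normal(0.0, np.sqrt(r), size=N)
analysis = ens + k * (perturbed - ens)

print(f"forecast mean {ens.mean():.2f} -> analysis mean "
      f"{analysis.mean():.2f}")
```

The analysis mean moves toward the observation and the ensemble spread contracts; the undersampling problem the paper addresses arises because pf, estimated from a finite ensemble, is noisy, which is what the hybrid and second-order sampling formulations mitigate.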
Gharamti, Mohamad E.; Valstar, Johan; Janssen, Gijs; Marsman, Annemieke; Hoteit, Ibrahim
2016-11-01
Directory of Open Access Journals (Sweden)
Gonzalo González-Rey
2013-05-01
Full Text Available In this study, a general procedure is proposed for estimating the efficiency of cylindrical worm gears, taking into account friction power losses between the conjugate flanks. The procedure is based on two mathematical models built from the theoretical and empirical relations presented in Technical Report ISO/TR 14521, and evaluates worm gear efficiency as a function of the gear geometry, the application conditions, and the manufacturing characteristics of the worm and the wheel. The procedure was validated by comparison with efficiency values reported for gear units produced by a specialized gear manufacturer. Finally, using this procedure, recommendations are established for improving the efficiency of these gears through the rational choice of geometric and operating parameters. Keywords: efficiency, worm gear, rational design, mathematical model, ISO/TR 14521.
Titos, G.; Foyo-Moreno, I.; Lyamani, H.; Querol, X.; Alastuey, A.; Alados-Arboledas, L.
2012-02-01
We investigated aerosol optical properties, mass concentration and chemical composition over a 1-year period (March 2006 to February 2007) at an urban site in Southern Spain (Granada, 37.18°N, 3.58°W, 680 m above sea level). Light-scattering and absorption measurements were performed using an integrating nephelometer and a Multi-Angle Absorption Photometer (MAAP), respectively, with no aerosol size cut-off and without any conditioning of the sampled air. PM10 and PM1 (ambient air levels of atmospheric particulate matter finer than 10 and 1 microns) were collected with two high-volume samplers, and the chemical composition was investigated for all samples. Relative humidity (RH) within the nephelometer was below 50%, and the filters were also weighed at 50% RH. PM10 and PM1 mass concentrations showed mean values of 44 ± 19 μg/m3 and 15 ± 7 μg/m3, respectively. Mineral matter was the major constituent of the PM10-1 fraction (contributing more than 58%), whereas organic matter and elemental carbon (OM+EC) contributed the most to the PM1 fraction (around 43%). The absorption coefficient at 550 nm showed a mean value of 24 ± 9 Mm-1 and the scattering coefficient at 550 nm a mean value of 61 ± 25 Mm-1, typical of urban areas. Both the scattering and the absorption coefficients exhibited their highest values during winter and lowest during summer, due to the increase in the anthropogenic contribution and the weaker development of the convective mixing layer during winter. A very low mean single scattering albedo of 0.71 ± 0.07 at 550 nm was calculated, suggesting that urban aerosols at this site contain a large fraction of absorbing material. Mass scattering and absorption efficiencies of PM10 particles exhibited larger values during winter and lower during summer, showing a similar trend to PM1 and opposite to PM10-1. This seasonality is therefore influenced by variations in PM composition. In addition, the mass
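The reported single scattering albedo can be sanity-checked from the two mean coefficients quoted above (with the caveat that the ratio of the means is only an approximation to the mean of the per-sample ratios):

```python
# Consistency check: SSA = sigma_s / (sigma_s + sigma_a) at 550 nm,
# using the mean coefficients quoted in the abstract.
sigma_s = 61.0   # mean scattering coefficient at 550 nm, Mm^-1
sigma_a = 24.0   # mean absorption coefficient at 550 nm, Mm^-1

ssa = sigma_s / (sigma_s + sigma_a)
print(f"SSA(550 nm) ~ {ssa:.2f}")   # close to the reported 0.71 +/- 0.07
```

The ratio of the means comes out near 0.72, within the uncertainty of the reported mean SSA of 0.71 ± 0.07.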
Oda, Akinori; Sugawara, Hirotake; Sakai, Yosuke; Akashi, Haruaki
2000-06-01
Xe dielectric barrier discharges at different gap lengths under applied pulse voltages with trapezoidal and sinusoidal waveforms were simulated using a self-consistent one-dimensional fluid model. For both waveforms, the light output power depended not only on the amplitude of the voltage waveform but also on the discharge gap length. At the narrower discharge gap, the light output efficiency was improved by increasing the time gradient of the applied voltage in the trapezoidal case, and by decreasing the duty ratio in the sinusoidal case. In the present simulation, we adopted a fast numerical method for the calculation of the electric field by introducing an exact expression of the discharge current.
Energy Technology Data Exchange (ETDEWEB)
Gonzalez Arzola, K. [Department of Microbiology and Cell Biology, Faculty of Pharmacy, University of La Laguna, 38206 La Laguna, Tenerife (Spain); Arevalo, M.C. [Department of Physical Chemistry, Faculty of Chemistry, University of La Laguna, 38206 La Laguna, Tenerife (Spain)], E-mail: carevalo@ull.es; Falcon, M.A. [Department of Microbiology and Cell Biology, Faculty of Pharmacy, University of La Laguna, 38206 La Laguna, Tenerife (Spain)], E-mail: mafalcon@ull.es
2009-03-30
The electrochemical properties of eighteen natural and synthetic compounds commonly used to expand the oxidative capacity of laccases were evaluated in an aqueous buffered medium using cyclic voltammetry. This clarifies which compounds fulfil the requisites to be considered as redox mediators or enhancers. Cyclic voltammetry was also applied as a rapid way to assess the catalytic efficiency (CE) of those compounds which oxidise a non-phenolic lignin model (veratryl alcohol, VA) and a kraft lignin (KL). With the exception of gallic acid and catechol, all assayed compounds were capable of oxidising VA with varying CE. However, only some of them were able to oxidise KL. Although the oxidised forms of HBT and acetovanillone were not electrochemically stable, their reduced forms were quickly regenerated in the presence of VA. They thus act as chemical catalysts. Importantly, HBT and HPI did not attack the KL via the same mechanism as in VA oxidation. Electrochemical evidence suggests that violuric acid oxidises both substrates by an electron transfer mechanism, unlike the other N-OH compounds HBT and HPI. Acetovanillone was found to be efficient in oxidising VA and KL, even better than the synthetic mediators TEMPO, violuric acid or ABTS. Most of the compounds produced a generalised increase in the oxidative charge of KL, probably attributed to chain reactions arising between the phenolic and non-phenolic components of this complex molecule.
Directory of Open Access Journals (Sweden)
T.J. Akingbade
2014-09-01
Full Text Available This research work compares the one-stage sampling technique (Simple Random Sampling) and the two-stage sampling technique for estimating the population total of Nigerians using the 2006 census result of Nigeria. A sample size of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (Simple Random Sampling) is more efficient than the two-stage sampling technique and is hence recommended.
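The comparison above can be sketched numerically. The sketch below uses synthetic per-state figures (hypothetical values, not the 2006 census data) and the stated design: 20 of 36 states at the PSU level, and one-third of the units within each selected state at the SSU level. Both estimators of the population total are unbiased; their spread can be compared by replication.

```python
import random

random.seed(1)

# Synthetic "state" populations: 36 states of 30 units each (hypothetical).
states = [[random.randint(50, 500) for _ in range(30)] for _ in range(36)]
true_total = sum(sum(s) for s in states)

def srs_total(states, n=20):
    """One-stage SRS: sample n whole states, expand by N/n."""
    N = len(states)
    sample = random.sample(states, n)
    return N / n * sum(sum(s) for s in sample)

def two_stage_total(states, n=20, frac=1/3):
    """Two-stage: sample n states (PSUs), then a fraction of units (SSUs) in each."""
    N = len(states)
    est = 0.0
    for s in random.sample(states, n):
        M = len(s)
        m = max(1, int(M * frac))
        ssu = random.sample(s, m)
        est += M / m * sum(ssu)   # expand within the state
    return N / n * est            # expand across states

print(srs_total(states), two_stage_total(states), true_total)
```

With equal PSU sample sizes, the two-stage estimator carries additional within-state sampling variance, which is the intuition behind the paper's efficiency ranking.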
Gjerlaug-Enger, E; Kongsro, J; Odegård, J; Aass, L; Vangen, O
2012-01-01
In this study, computed tomography (CT) technology was used to measure body composition on live pigs for breeding purposes. Norwegian Landrace (L; n = 3835) and Duroc (D; n = 3139) boars, selection candidates to be elite boars in a breeding programme, were CT-scanned between August 2008 and August 2010 as part of an ongoing testing programme at Norsvin's boar test station. Genetic parameters in the growth rate of muscle (MG), carcass fat (FG), bone (BG) and non-carcass tissue (NCG), from birth to ∼100 kg live weight, were calculated from CT data. Genetic correlations between growth of different body tissues scanned using CT, lean meat percentage (LMP) calculated from CT and more traditional production traits such as the average daily gain (ADG) from birth to 25 kg (ADG1), the ADG from 25 kg to 100 kg (ADG2) and the feed conversion ratio (FCR) from 25 kg to 100 kg were also estimated from data on the same boars. Genetic parameters were estimated based on multi-trait animal models using the average information-restricted maximum likelihood (AI-REML) methodology. The heritability estimates (s.e. = 0.04 to 0.05) for the various traits for Landrace and Duroc were as follows: MG (0.19 and 0.43), FG (0.53 and 0.59), BG (0.37 and 0.58), NCG (0.38 and 0.50), LMP (0.50 and 0.57), ADG1 (0.25 and 0.48), ADG2 (0.41 and 0.42) and FCR (0.29 and 0.42). Genetic correlations for MG with LMP were 0.55 and 0.68, and genetic correlations between MG and ADG2 were -0.06 and 0.07 for Landrace and Duroc, respectively. LMP and ADG2 were clearly unfavourably genetically correlated (L: -0.75 and D: -0.54). These results showed the difficulty in jointly improving LMP and ADG2. ADG2 was unfavourably correlated with FG (L: 0.84 and D: 0.72), thus indicating to a large extent that selection for increased growth implies selection for fatness under an ad libitum feeding regime. Selection for MG is not expected to increase ADG2, but will yield faster growth of the desired tissues and a better
Chono, Sumio; Tanino, Tomoharu; Seki, Toshinobu; Morimoto, Kazuhiro
2008-10-01
The efficacy of pulmonary administration of liposomal ciprofloxacin (CPFX) in pneumonia was evaluated. In brief, the pharmacokinetics following pulmonary administration of liposomal CPFX (particle size, 1,000 nm; dose, 200 microg/kg) were examined in rats with lipopolysaccharide-induced pneumonia as an experimental pneumonia model. Furthermore, the antibacterial effects of liposomal CPFX against the pneumonic causative organisms were estimated by pharmacokinetic/pharmacodynamic (PK/PD) analysis. The time courses of the concentration of CPFX in alveolar macrophages (AMs) and lung epithelial lining fluid (ELF) following pulmonary administration of liposomal CPFX to rats with pneumonia were markedly higher than those following the administration of free CPFX (200 microg/kg). The time course of the concentration of CPFX in plasma following pulmonary administration of liposomal CPFX was markedly lower than that in AMs and ELF. These results indicate that pulmonary administration of liposomal CPFX was more effective in delivering CPFX to AMs and ELF than free CPFX, and that it avoids distribution of CPFX to the blood. According to the PK/PD analysis, the liposomal CPFX exhibited potent antibacterial effects against the causative organisms of pneumonia. This study indicates that pulmonary administration of CPFX could be an effective technique for the treatment of pneumonia.
Rakhmatullina, E M; Sanam'ian, M F
2007-05-01
Cytogenetic analysis of M2 plants after irradiation of cotton by thermal neutrons was performed in 56 families. In 40 plants of 27 M2 families, different abnormalities of chromosome pairing were found. These abnormalities were caused by primary monosomy, chromosomal interchange, and desynapsis. The presence of chromosome aberrations in some cases decreased the meiotic index and pollen fertility. Comparison of the results of cytogenetic analysis performed in M1 and M2 after irradiation showed a nearly two-fold decrease in the number of plants with chromosomal aberrations in M2, as well as a narrowing of the spectrum of these aberrations. The latter result is explained by the fact that some mutations are impossible to detect in subsequent generations because of complete or partial sterility of aberrant M1 plants. It was established that the most efficient radiation doses for inducing chromosomal aberrations in the present study were 15 and 25 Gy, since they affected survival and fertility of the altered plants to a lesser extent.
Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew
2009-01-01
Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation, or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of non-photosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR_canopy) has been widely used in satellite-based Production Efficiency Models to estimate GPP (as the product FAPAR_canopy x PAR x LUE_canopy, where LUE_canopy is the light use efficiency at canopy level). However, only the PAR absorbed by chlorophyll (the product FAPAR_chl x PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR_chl in estimating GPP (as the product FAPAR_chl x PAR x LUE_chl) are more likely to be consistent with plant photosynthesis processes.
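The two product formulations can be written out directly. The FAPAR and LUE values below are illustrative placeholders, not results from the study; note that FAPAR_chl is smaller than FAPAR_canopy because NPV and non-chlorophyll leaf components absorb part of the PAR without contributing to photosynthesis.

```python
# GPP under the two formulations discussed above; all inputs are
# hypothetical illustration values, not measurements from the study.
def gpp(fapar, par, lue):
    """GPP = FAPAR x PAR x LUE (same form at canopy or chlorophyll level)."""
    return fapar * par * lue

par = 10.0                                        # incident PAR, MJ m-2 d-1 (assumed)
gpp_canopy = gpp(fapar=0.80, par=par, lue=1.0)    # whole-canopy absorption
gpp_chl = gpp(fapar=0.55, par=par, lue=1.4)       # chlorophyll-only absorption

print(gpp_canopy, gpp_chl)
```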
Multi-directional program efficiency
DEFF Research Database (Denmark)
Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth
2016-01-01
approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop...... farms have the highest program efficiency, but the lowest managerial efficiency and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency)....
Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee
2016-04-01
In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £ 35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset.
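The preposterior logic described above can be sketched by Monte Carlo for the simplest case: a normal prior on incremental net benefit (INB) and a single normal data collection process. All numbers are hypothetical, and the bivariate two-process structure of the article is not modelled here.

```python
import random

random.seed(0)

# Hypothetical prior on the incremental net benefit (INB) of adoption;
# the decision rule is: adopt iff the expected INB is positive.
m0, s0 = 500.0, 2000.0    # prior mean and SD of INB (GBP), hypothetical
sigma = 8000.0            # per-patient SD of observed INB, hypothetical
n = 100                   # proposed number of observations

def evsi(m0, s0, sigma, n, reps=200_000):
    """Monte Carlo expected value of sample information (preposterior analysis)."""
    prior_value = max(m0, 0.0)                       # value of deciding now
    post_prec = 1.0/s0**2 + n/sigma**2               # normal-normal posterior precision
    total = 0.0
    for _ in range(reps):
        theta = random.gauss(m0, s0)                 # draw a 'true' INB from the prior
        xbar = random.gauss(theta, sigma/n**0.5)     # predicted sample mean
        post_mean = (m0/s0**2 + n*xbar/sigma**2) / post_prec
        total += max(post_mean, 0.0)                 # value of deciding after the data
    return total/reps - prior_value

print(round(evsi(m0, s0, sigma, n), 1))
```

Subtracting the cost of data collection from this EVSI gives the expected net gain of sampling, the quantity the article optimizes over the mix of observations.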
Directory of Open Access Journals (Sweden)
O. N. Korsun
2014-01-01
Full Text Available The high information load on the crew is one of the main problems of modern piloted aircraft, so research on improving the form of data representation, especially in critical situations, is a pressing challenge. The article considers one opportunity to improve the interface of a modern cockpit: the use of spatial sound (3D audio) technology. 3D audio is a technology which recreates spatially directed sound in earphones or via loudspeakers. Spatial audio alerts, which convey not only information on a danger but also the direction from which it comes, can reduce the response time to an event and therefore increase the situational safety of flight. It is assumed that the alerts will be delivered through the pilot's headset, so realization of the technology via earphones is discussed. The main current hypothesis explaining the human ability to localize a sound source in space asserts that the listener estimates the distortion of the sound signal's spectrum caused by its interaction with the head and the auricle, which depends on the position of the sound source. To describe these spectral variations precisely, the concepts of the Head Related Impulse Response (HRIR) and the Head Related Transfer Function (HRTF) are used. HRIRs are measured on humans or dummies; at present the most comprehensive public HRIR library is the CIPIC HRTF Database of the CIPIC Interface Laboratory at UC Davis. To obtain the 3D audio effect, a mono signal must be passed through linear digital filters whose anthropometry-dependent impulse responses (HRIRs) for the left and right ears correspond to the chosen direction. The results are combined into a stereo file and reproduced through the earphones. This scheme was implemented in Matlab, and the resulting software was used in experiments to estimate the quantitative characteristics of the technology. For processing and subsequent experiments the following sound signals were chosen: a fragment of the classical music piece "Polovetsky
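The filtering scheme described above can be sketched as follows. The impulse responses below are toy stand-ins for measured CIPIC HRIRs (a pure delay plus attenuation per ear), chosen only to show the mono-to-stereo mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
mono = rng.standard_normal(fs // 10)       # 100 ms of noise as a test signal

# Toy HRIRs standing in for measured CIPIC responses: a source on the right
# reaches the right ear earlier and louder than the left ear.
hrir_r = np.zeros(64); hrir_r[2] = 1.0     # short delay, full level
hrir_l = np.zeros(64); hrir_l[30] = 0.4    # longer delay, attenuated

left = np.convolve(mono, hrir_l)           # linear filtering per ear
right = np.convolve(mono, hrir_r)
stereo = np.stack([left, right], axis=1)   # two channels, ready for a stereo file
print(stereo.shape)
```

Replacing the toy filters with a measured HRIR pair for a chosen azimuth/elevation is exactly the conversion step the article describes.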
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
Institute of Scientific and Technical Information of China (English)
石鸟云; 周星
2012-01-01
This paper uses an SFA model to estimate the technical efficiency of 15 Chinese power enterprises, based on data from 2003 to 2009. The study reveals that the technical efficiency of the 15 Chinese power enterprises has been increasing every year, while the speed of the increase has been slowing down year by year. The main constraining factors are the technology level and management efficiency. The insignificant differences in technical efficiency among Chinese power enterprises weaken their motivation to innovate. The factors that affect the technical efficiency of Chinese power enterprises include internal transaction cost, human capital investment and specific capital investment. Internal transaction cost is in turn determined by organization structure, work flow and the incentive mechanism. The effect of human capital investment can be explained from the point of view of internal synergy effects and coordination cost.
Virtual Sensors: Efficiently Estimating Missing Spectra
National Aeronautics and Space Administration — Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding...
Efficiency in Microfinance Cooperatives
Directory of Open Access Journals (Sweden)
HARTARSKA, Valentina
2012-12-01
Full Text Available In recognition of cooperatives’ contribution to the socio-economic well-being of their participants, the United Nations declared 2012 the International Year of Cooperatives. Microfinance cooperatives make up a large part of the microfinance industry. We study the efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ classical efficiency analysis, consisting of estimating a system of equations, and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency), as well as the dollar value of lending and deposits (sustainability). We find that microfinance cooperatives have increasing returns to scale, which means that the vast majority can lower cost by becoming larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients, with a range from 40,000 to 180,000 borrowers.
Efficient Algorithm for 2-D DOA Estimation Based on Noise Subspace Mapping
Institute of Scientific and Technical Information of China (English)
王军; 闫锋刚; 金铭; 乔晓林
2015-01-01
To reduce the computational complexity of the two-dimensional Multiple Signal Classification (2-D MUSIC) algorithm and make it suitable for real-time applications, this paper presents a new computationally efficient method for 2-D direction-of-arrival (DOA) estimation with arbitrary 2-D array configurations, based on noise-subspace mapping. Exploiting the ideas of spatial angle division and non-linear transformation, the orthogonality between the signal subspace and the noise subspace is compressed into a small angular sector, producing a virtual mirror image of each true DOA within that sector. This allows fast estimation of the virtual DOAs by a spectral search over only one sector; the true DOAs are then computed directly from the virtual ones, since the two are mathematically related. Theoretical analysis and experimental results show that the new approach has a much lower computational complexity and improved resolution compared to the standard MUSIC algorithm.
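The noise-subspace mapping itself is not reproduced here, but the MUSIC baseline it accelerates can be sketched in one dimension. The array (an 8-element half-wavelength ULA), the source angles and the SNR below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 8, 0.5                              # 8-element ULA, half-wavelength spacing
true_doas = np.deg2rad([-20.0, 30.0])      # illustrative source directions
T = 200                                    # number of snapshots

def steering(theta):
    """ULA steering vector for element spacing d (in wavelengths)."""
    return np.exp(2j*np.pi*d*np.arange(M)*np.sin(theta))

A = np.stack([steering(t) for t in true_doas], axis=1)
S = (rng.standard_normal((2, T)) + 1j*rng.standard_normal((2, T))) / np.sqrt(2)
N = 0.1*(rng.standard_normal((M, T)) + 1j*rng.standard_normal((M, T)))
X = A @ S + N                              # array snapshots

R = X @ X.conj().T / T                     # sample covariance
_, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
En = V[:, :M-2]                            # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spectrum = np.array([1.0/np.linalg.norm(En.conj().T @ steering(g))**2 for g in grid])

# pick the two largest local maxima of the pseudospectrum
loc = np.where((spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:]))[0] + 1
est = np.sort(np.rad2deg(grid[loc[np.argsort(spectrum[loc])[-2:]]]))
print(est)  # close to [-20, 30]
```

The exhaustive grid search over the whole field of view is precisely the cost the paper's sector-mapping idea reduces; in 2-D the search is over a full azimuth-elevation grid, which is why the savings matter.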
Energy Technology Data Exchange (ETDEWEB)
Asociacion de Tecnicos y Profesionistas en Aplicacion Energetica, A.C. [Mexico (Mexico)
2002-06-01
In recent years much attention has been paid to polluting gas emissions, especially those that contribute to the greenhouse effect (GHE), because of the damage their concentration causes to the atmosphere, particularly as a cause of the increase in the overall temperature of the planet, known as global climate change. Many activities make it possible to lessen or avoid GHE gas emissions, and the main ones have been structured into the so-called Energy Efficiency and Renewable Energy (EE/RE) projects. To carry out a project within the framework of the MDL (the Clean Development Mechanism), it is necessary to evaluate with quality, precision and transparency the amount of GHE gas emissions that are reduced or suppressed thanks to its application. For that reason, in our country we tested different methodologies aimed at estimating the CO2 emissions that are attenuated or eliminated by means of the application of EE/RE projects.
Parameter estimation in quantum optics
D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.
2000-01-01
We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.
Directory of Open Access Journals (Sweden)
Francisco Cobos
2007-01-01
Full Text Available OSIRIS, the main optical (360-1000 nm) 1st-generation instrument for the GTC, is being integrated. Except for some grisms and filters, all main optical components are finished and being characterized. Complementing laboratory data with semi-empirical estimations, the current OSIRIS efficiency is summarized.
Energy Technology Data Exchange (ETDEWEB)
Perez-Comas, Jose A.; Skalski, John R. (University of Washington, School of Fisheries, Seattle, WA)
2000-07-01
In anticipation of the installation of a PIT-tag interrogation system in the Cascades Island fish ladder at Bonneville Dam, this report provides guidance on the anticipated precision of salmonid estuarine and marine survival estimates for various levels of system-wide adult detection probability at Bonneville Dam. Precision was characterized by the standard error and the coefficient of variation (CV) of the survival estimates. The anticipated precision of salmonid estuarine and marine survival estimates was directly proportional to the number of PIT-tagged smolts released and to the system-wide adult detection efficiency at Bonneville Dam, as well as to the in-river juvenile survival above Lower Granite Dam. Moreover, for a given release size and system-wide adult detection efficiency, higher estuarine and marine survivals also produced more precise survival estimates. With a system-wide detection probability of P_BA = 1 at Bonneville Dam, the anticipated CVs for the estuarine and marine survival estimates ranged between 41 and 88% with release sizes of 10,000 smolts. Only with 55,000 smolts released from sites close to Lower Granite Dam and under high estuarine and marine survival could CVs of 20% be attained with system detection efficiencies of less than perfect detection (i.e., P_BA < 1).
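A heavily simplified version of this kind of precision calculation can be sketched with a one-stage binomial model: x adult detections out of N tagged smolts, with survival S and adult detection probability P_BA. This ignores the report's multi-stage design (juvenile survival above Lower Granite Dam, tagging scenarios), so the numbers are illustrative only and do not reproduce the 41-88% CVs quoted above.

```python
# Simplified sketch: x ~ Binomial(N, S*P_BA) and S_hat = x / (N*P_BA).
# A one-stage stand-in for the report's multi-stage model; values illustrative.
def cv_survival(N, S, p_ba):
    """Coefficient of variation of S_hat under the one-stage binomial model."""
    q = S * p_ba
    var = S * (1 - q) / (N * p_ba)   # Var(S_hat) = N*q*(1-q) / (N*p_ba)^2
    return var**0.5 / S

for p in (0.2, 0.6, 1.0):
    print(p, round(cv_survival(N=10_000, S=0.02, p_ba=p), 3))
```

Even this toy model reproduces the qualitative findings: precision improves with more tagged smolts, higher survival, and higher detection efficiency.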
Efficient Quantum Key Distribution
Ardehali, M; Chau, H F; Lo, H K
1998-01-01
We devise a simple modification that essentially doubles the efficiency of a well-known quantum key distribution scheme proposed by Bennett and Brassard (BB84). Our scheme assigns significantly different probabilities for the different polarization bases during both transmission and reception to reduce the fraction of discarded data. The actual probabilities used in the scheme are announced in public. As the number of transmitted signals increases, the efficiency of our scheme can be made to approach 100%. The security of our scheme (against single-photon eavesdropping strategies) is guaranteed by a refined analysis of accepted data which is employed to detect eavesdropping: Instead of lumping all the accepted data together to estimate a single error rate, we separate the accepted data into various subsets according to the basis employed and estimate an error rate for each subset individually. Our scheme is the first quantum key distribution with an efficiency greater than 50%. We remark that our idea is rath...
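The efficiency gain from biased basis choice can be checked directly: if both parties pick the first basis with probability p, the fraction of signals surviving sifting is p² + (1-p)², which equals 1/2 for standard BB84 (p = 1/2) and approaches 1 as p → 1. A minimal sketch:

```python
# Fraction of transmitted qubits that survive basis sifting when both sides
# choose the first polarization basis with probability p (biased BB84 variant).
def sift_fraction(p):
    # kept iff both pick basis 1 (p*p) or both pick basis 2 ((1-p)*(1-p))
    return p*p + (1-p)*(1-p)

for p in (0.5, 0.9, 0.99):
    print(p, sift_fraction(p))  # 0.5 for standard BB84, rising toward 1
```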
Energy Technology Data Exchange (ETDEWEB)
Perez-Comas, Jose A.; Skalski, John R. (University of Washington, School of Fisheries, Seattle, WA)
2000-07-01
In anticipation of the installation of a PIT-tag interrogation system in the Cascades Island fish ladder at Bonneville Dam, this report provides guidance on the anticipated precision of in-river survival estimates for returning adult salmonids, between Bonneville and Lower Granite dams, for various levels of system-wide adult detection probability at Bonneville Dam. Precision was characterized by the standard error and the coefficient of variation (CV) of the survival estimates. The anticipated precision of in-river survival estimates for returning adult salmonids was directly proportional to the number of PIT-tagged smolts released and to the system-wide adult detection efficiency at Bonneville Dam, as well as to the in-river juvenile survival above Lower Granite Dam. Moreover, for a given release size and system-wide adult detection efficiency at Bonneville Dam, higher estuarine and marine survival rates also produced more precise survival estimates. With a system-wide detection probability of P_BA = 1 at Bonneville Dam, the anticipated CVs for the in-river survival estimates ranged between 9.4 and 20% with release sizes of 10,000 smolts. Moreover, even if the system-wide adult detection efficiency at Bonneville Dam is less than maximum (i.e., P_BA < 1), a precision of CV ≤ 20% could still be attained. For example, for releases of 10,000 PIT-tagged fish, a CV of 20% in the estimates of in-river survival for returning adult salmon could be reached with system-wide detection probabilities of 0.2 ≤ P_BA ≤ 0.6, depending on the tagging scenario.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
K. N. Plataniotis
1996-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
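A minimal scalar illustration of the property claimed above, namely that local processing can deliver results identical to the centralized solution, is information-form fusion of two sensor readings (a toy example, not the algorithms of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
x = 5.0                                  # unknown scalar to estimate
r1, r2 = 1.0, 4.0                        # sensor noise variances
z1 = x + rng.normal(0, r1**0.5)          # sensor 1 measurement
z2 = x + rng.normal(0, r2**0.5)          # sensor 2 measurement

# Centralized: weighted least squares over both raw measurements.
w1, w2 = 1/r1, 1/r2
central = (w1*z1 + w2*z2) / (w1 + w2)

# Distributed: each node sends only its local information pair (info, info*z);
# the fusion centre adds them without ever seeing the raw data.
info1, vec1 = w1, w1*z1                  # computed locally at sensor 1
info2, vec2 = w2, w2*z2                  # computed locally at sensor 2
distributed = (vec1 + vec2) / (info1 + info2)

print(np.isclose(central, distributed))  # identical to the centralized estimate
```

The additive structure is what makes such schemes computationally attractive: each sensor's contribution can be computed locally and in parallel.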
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...
DEFF Research Database (Denmark)
Andersen, Rikke Sand; Vedsted, Peter
2015-01-01
on institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle...... efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that provision of medical care in today......'s primary care settings requires careful balancing of increasing demands of efficiency, greater complexity of biomedical knowledge and consideration for individual patient needs....
Channel estimation in TDD mode
Institute of Scientific and Technical Information of China (English)
ZHANG Yi; GU Jian; YANG Da-cheng
2006-01-01
An efficient solution is proposed in this article for channel estimation in time division duplex (TDD) mode wireless communication systems. In the proposed solution, the characteristics of fading channels in TDD mode systems are fully exploited to estimate the path delay of the fading channel. The corresponding amplitude is estimated using the minimum mean square error (MMSE) criterion. As a result, it is shown that the proposed novel solution is more accurate and efficient than the traditional solution, and the improvement is beneficial to the performance of Joint Detection.
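The MMSE amplitude step can be illustrated generically (this is the textbook scalar MMSE estimator under assumed Gaussian statistics, not the paper's actual scheme): for a unit pilot symbol, the MMSE estimate shrinks the raw observation toward the prior mean by the factor σ_h²/(σ_h² + σ_n²).

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100_000
var_h, var_n = 1.0, 0.5                  # channel-tap and noise variances (assumed)
h = rng.normal(0, var_h**0.5, T)         # true amplitudes (pilot symbol = 1)
y = h + rng.normal(0, var_n**0.5, T)     # received pilot observations

h_ls = y                                 # least-squares estimate (no prior)
h_mmse = var_h / (var_h + var_n) * y     # MMSE shrinks toward the prior mean (0)

mse_ls = np.mean((h_ls - h)**2)          # ~ var_n
mse_mmse = np.mean((h_mmse - h)**2)      # ~ var_h*var_n/(var_h+var_n), smaller
print(mse_ls > mse_mmse)                 # MMSE achieves the lower error
```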
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
Environmental Efficiency Analysis of China's Vegetable Production
Institute of Scientific and Technical Information of China (English)
TAO ZHANG; BAO-DI XUE
2005-01-01
Objective To analyze and estimate the environmental efficiency of China's vegetable production. Methods The stochastic translog frontier model was used to estimate the technical efficiency of vegetable production. Based on the estimated frontier and technical inefficiency levels, we used the method developed by Reinhard et al. [1] to estimate the environmental efficiency. Pesticide and chemical fertilizer inputs were treated as environmentally detrimental inputs. Results From the estimated results, the mean environmental efficiency for pesticide input was 69.7%, indicating a great potential for reducing pesticide use in China's vegetable production. In addition, substitution and output elasticities for vegetable farms were estimated to provide farmers with helpful information on how to reallocate input resources and improve efficiency. Conclusion There exists a great potential for reducing pesticide use in China's vegetable production.
Estimating Functions and Semiparametric Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
1996-01-01
The thesis is divided in two parts. The first part treats some topics of the estimation theory for semiparametric models in general. There the classic optimality theory is reviewed and exposed in a way suitable for the further developments given afterwards. Further, the theory of estimating functions...... contained in this part of the thesis constitutes an original contribution. There can be found the detailed characterization of the class of regular estimating functions, a calculation of efficient regular asymptotic linear estimating sequences (i.e., the classical optimality theory) and a discussion...... of the attainability of the bounds for the concentration of regular asymptotic linear estimating sequences by estimators derived from estimating functions. The main class of models considered in the second part of the thesis (chapter 5) are constructed by assuming that the expectation of a number of given square...
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially......, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones....
Schwickerath, U; Uria, C; CERN. Geneva. IT Department
2010-01-01
A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both things, the box usage and the efficiency of individual user jobs, need to be closely monitored so that the sources of the inefficiencies can be identified. At CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step for improvements CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.
Energy Technology Data Exchange (ETDEWEB)
Rudin, A.
1995-05-01
This article is a review of utility policy and public opinion related to energy efficiency. The historical background and the current socioeconomic status are presented. Many fallacies of past utility policies intended to promote conservation are noted, and it is demonstrated that past policies have not been effective, i.e., the cost of electricity has increased. Given the failure of past practices, fourteen recommendations for future practices are set forth.
DEFF Research Database (Denmark)
Stoustrup, Jakob; Niemann, H.
2002-01-01
This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem setup introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include: (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.
Golbabaei-Asl, Mona; Knight, Doyle; Anderson, Kellie; Wilkinson, Stephen
2013-01-01
A novel method for determining the thermal efficiency of the SparkJet is proposed. A SparkJet is attached to the end of a pendulum. The motion of the pendulum subsequent to a single spark discharge is measured using a laser displacement sensor. The measured displacement vs. time is compared with the predictions of a theoretical perfect-gas model to estimate the fraction of the spark discharge energy which results in heating the gas (i.e., increasing the translational-rotational temperature). The results from multiple runs for different capacitances of C = 3, 5, 10, 20, and 40 μF show that the thermal efficiency decreases with higher capacitive discharges.
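The energy bookkeeping behind such a measurement can be sketched as follows. This is an illustrative calculation only: the capacitor stores E = ½CV², the pendulum's peak deflection gives the imparted mechanical energy, and their ratio bounds the conversion efficiency. The mass, arm length, voltage, and deflection values below are hypothetical, and the paper itself infers gas heating through a perfect-gas model rather than this direct mechanical ratio.

```python
import math

def conversion_efficiency(capacitance_f, voltage_v, pendulum_mass_kg,
                          arm_length_m, max_angle_rad):
    """Fraction of stored capacitor energy converted to pendulum motion,
    inferred from the peak swing angle (illustrative sketch only)."""
    e_stored = 0.5 * capacitance_f * voltage_v ** 2               # E = 1/2 C V^2
    # Potential energy at the peak of the swing: m g L (1 - cos(theta))
    e_mech = pendulum_mass_kg * 9.81 * arm_length_m * (1 - math.cos(max_angle_rad))
    return e_mech / e_stored

# Hypothetical numbers: 10 uF at 600 V, 50 g pendulum, 20 cm arm, 0.01 rad swing
eff = conversion_efficiency(10e-6, 600.0, 0.05, 0.2, 0.01)
```

Only a tiny fraction of the discharge energy appears as bulk motion; the remainder heats the gas, which is why the paper works backward from a gas model.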
Directory of Open Access Journals (Sweden)
Brundaban Patro
2016-03-01
Full Text Available This paper presents a study of combination tube boilers, as applicable to commercial use, along with their significant features, limitations, and applicability. A heat balance sheet is prepared to identify the various heat losses in two different two-pass combination tube boilers, using low-grade coal and rice husk as fuel. The efficiency of the combination tube boilers is also studied by the direct and heat-loss methods. It is observed that the dry flue gas loss is the major loss in combination tube boilers. The loss due to unburnt fuel in the fly ash is very low, owing to the surrounding membrane wall. The loss due to unburnt fuel in the bottom ash, however, contributes a considerable amount to the heat loss and cannot be ignored.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computation schemes using combinatorial properties of generating functions. We demonstrate our approach with several case studies involving real world movie recommendation data. The results are comparable with state-of-the-art techniques while also providing probabilistic preference estimates outside the scope of traditional recommender systems.
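As a toy analogue of the kernel-smoothing idea, one can place a Gaussian kernel on each observed rating to obtain a smooth preference density. This sketch ignores the paper's treatment of missing items as censored observations, and the ratings below are made up for illustration.

```python
import math

def rating_density(ratings, bandwidth=0.5):
    """Gaussian-kernel smoothed density over observed ratings -- a toy
    analogue of nonparametric preference estimation (the paper additionally
    models missing items as randomly censored)."""
    def pdf(r):
        total = sum(math.exp(-((r - x) / bandwidth) ** 2 / 2) for x in ratings)
        return total / (len(ratings) * bandwidth * math.sqrt(2 * math.pi))
    return pdf

stars = [5, 4, 4, 3, 5, 2, 4]   # one user's observed movie ratings (synthetic)
pdf = rating_density(stars)
# Density is highest near the ratings the user actually gives
```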
Power Quality Indices Estimation Platform
Directory of Open Access Journals (Sweden)
Eliana I. Arango-Zuluaga
2013-11-01
Full Text Available An interactive platform for estimating the power quality indices in single-phase electric power systems is presented. It follows the IEEE 1459-2010 standard recommendations. The platform was developed to support teaching and research activities in electric power quality. It estimates the power quality indices from voltage and current signals using three different algorithms based on the fast Fourier transform (FFT), the wavelet packet transform (WPT), and the least-squares method. The results show that the implemented algorithms estimate the power quality indices efficiently and that the platform can be used according to the established objectives.
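The frequency-domain index computation can be illustrated with total harmonic distortion (THD), one common power quality index. The sketch below uses synchronous detection, which is equivalent to single DFT bins when the window holds whole cycles; it stands in for, and is much simpler than, the platform's actual algorithms. The 50 Hz test signal is synthetic.

```python
import math

def harmonic_rms(samples, fs, f0, k):
    """RMS of the k-th harmonic of f0 by synchronous detection
    (equals one DFT bin when the window spans whole cycles)."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * k * f0 * i / fs) for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * k * f0 * i / fs) for i, x in enumerate(samples))
    amp = 2 * math.hypot(c, s) / n          # peak amplitude of harmonic k
    return amp / math.sqrt(2)

def thd(samples, fs, f0, n_harmonics=10):
    """Total harmonic distortion: sqrt(sum_k V_k^2) / V_1 for k = 2..N."""
    v1 = harmonic_rms(samples, fs, f0, 1)
    hsum = sum(harmonic_rms(samples, fs, f0, k) ** 2
               for k in range(2, n_harmonics + 1))
    return math.sqrt(hsum) / v1

# 50 Hz fundamental with a 10% third harmonic, one second sampled at 5 kHz
fs, f0 = 5000, 50
sig = [math.sin(2 * math.pi * f0 * t / fs)
       + 0.1 * math.sin(2 * math.pi * 3 * f0 * t / fs) for t in range(fs)]
print(round(thd(sig, fs, f0), 3))  # prints 0.1
```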
Joint DOA and DOD Estimation in Bistatic MIMO Radar without Estimating the Number of Targets
Directory of Open Access Journals (Sweden)
Zaifang Xi
2014-01-01
established without prior knowledge of the signal environment. In this paper, an efficient method for joint DOA and DOD estimation in bistatic MIMO radar without estimating the number of targets is presented. The proposed method computes an estimate of the noise subspace using the power of R (POR) technique. Then the two-dimensional (2D) direction finding problem is decoupled into two successive one-dimensional (1D) angle estimation problems by employing the rank reduction (RARE) estimator.
Odds Ratios Estimation of Rare Event in Binomial Distribution
Directory of Open Access Journals (Sweden)
Kobkun Raweesawat
2016-01-01
Full Text Available We introduce a new estimator of odds ratios for rare events using the empirical Bayes method in two independent binomial distributions. We compare the proposed odds-ratio estimates with two estimators, the modified maximum likelihood estimator (MMLE) and the modified median unbiased estimator (MMUE), using the Estimated Relative Error (ERE) as the criterion of comparison. It is found that the new estimator is more efficient than the other methods.
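A common form of corrected odds-ratio estimator for sparse 2x2 tables adds 0.5 to each cell (the Haldane-Anscombe correction) so the estimate stays finite when a rare event produces zero counts. The sketch below shows this generic correction; it is not necessarily the exact MMLE or MMUE definition used in the paper.

```python
def odds_ratio_corrected(x1, n1, x2, n2):
    """Corrected odds-ratio estimate for two independent binomials:
    add 0.5 to each cell of the 2x2 table (Haldane-Anscombe) so the
    estimate is finite even with zero event counts."""
    a, b = x1 + 0.5, (n1 - x1) + 0.5      # events / non-events, group 1
    c, d = x2 + 0.5, (n2 - x2) + 0.5      # events / non-events, group 2
    return (a * d) / (b * c)

# Zero events in group 2: the raw MLE of the odds ratio would be infinite
print(round(odds_ratio_corrected(3, 100, 0, 100), 2))  # prints 7.22
```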
Technical efficiency of thermoelectric power plants
Energy Technology Data Exchange (ETDEWEB)
Barros, Carlos Pestana [Instituto de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Peypoch, Nicolas [GEREM, LAMPS, IAE, Universite de Perpignan Via Domitia, 52 avenue Paul Alduy, F-66860 Perpignan (France)
2008-11-15
This paper analyses the technical efficiency of Portuguese thermoelectric power generating plants with a two-stage procedure. In the first stage, the plants' relative technical efficiency is estimated with DEA (data envelopment analysis) to establish which plants perform most efficiently. These plants could serve as peers to help improve performance of the least efficient plants. The paper ranks these plants according to their relative efficiency for the period 1996-2004. In a second stage, the Simar and Wilson [Simar, L., Wilson, P.W., 2007. Estimation and inference in two-stage, semi-parametric models of production processes. Journal of Econometrics 136, 1-34] bootstrapped procedure is adopted to estimate the efficiency drivers. Economic implications arising from the study are considered. (author)
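In the special case of a single input and a single output under constant returns to scale, the DEA (CCR) efficiency score reduces to each plant's output/input ratio divided by the best ratio in the sample. The sketch below illustrates this degenerate case with made-up plant data; the paper's multi-input, multi-output model requires solving one linear program per plant.

```python
def dea_ccr_efficiency(inputs, outputs):
    """CCR (constant returns to scale) efficiency scores for the
    single-input / single-output case: each unit's productivity ratio
    divided by the best ratio observed in the sample."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical fuel input and electricity output for four plants
fuel = [100.0, 80.0, 120.0, 90.0]
power = [40.0, 36.0, 42.0, 27.0]
print([round(e, 3) for e in dea_ccr_efficiency(fuel, power)])
# prints [0.889, 1.0, 0.778, 0.667] -- plant 2 defines the frontier
```

The efficient plant (score 1.0) serves as the peer against which the others are benchmarked, mirroring the two-stage logic of the paper.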
Demchuk, Pavlo
Today a standard procedure for analyzing the impact of environmental factors on the productive efficiency of a decision making unit is a two-stage approach: first one estimates the efficiency, and then one uses regression techniques to explain the variation of efficiency between different units. It is argued that this method may produce doubtful results that distort what the data represent. In order to introduce economic intuition and to mitigate the problem of omitted variables, we introduce a matching procedure to be used before the efficiency analysis. We believe that by having comparable decision making units we implicitly control for the environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative of a conditional efficiency frontier. We conduct our study using a Monte Carlo simulation with different model specifications and, despite the reduced sample, which may create some complications in the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with those previously produced by Demchuk and Zelenyuk (2009), who compare efficiencies of Ukrainian regions, and find some differences between the two approaches. The second part deals with an empirical study of electricity generating power plants before and after the market reform in Texas. We compare private, public, and municipal power generators using the method introduced in part one. We find that municipal power plants operate mostly inefficiently, while private and public plants are very close in their production patterns. The new method allows us to compare decision making units from different groups, which may have different objective schemes and productive incentives. Despite
Motor-operated gearbox efficiency
Energy Technology Data Exchange (ETDEWEB)
DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.
1996-12-01
Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.
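The efficiency ratio described here, output (stem) torque relative to the ideal output implied by the motor torque and the gear ratio, can be sketched as follows. The torque values and gear ratio below are hypothetical, chosen only to illustrate a result falling below a published 50% running efficiency.

```python
def gearbox_efficiency(motor_torque_nm, stem_torque_nm, gear_ratio):
    """Actual efficiency ratio: measured output (stem) torque divided by
    the ideal output torque (motor torque times overall gear ratio)."""
    return stem_torque_nm / (motor_torque_nm * gear_ratio)

# Hypothetical in-line measurements, for illustration only
eff = gearbox_efficiency(motor_torque_nm=10.0, stem_torque_nm=300.0,
                         gear_ratio=75.0)
print(round(eff, 2))  # prints 0.4 -- below a published 50% running efficiency
```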
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Institute of Scientific and Technical Information of China (English)
LIAO Yu-Iin; ZHENG Sheng-xian; RONG Xiang-min; LIU Qiang; FAN Mei-rong
2010-01-01
A pot experiment combined with 15N isotope techniques was conducted to evaluate the effects of varying rates of urea-N fertilizer application on the yields, quality, and nitrogen use efficiency (NUE) of pakchoi cabbage (Brassica chinensis L.) and asparagus lettuce (Lactuca sativa L.). 15N-labelled urea (5.35 15N atom%) was added to pots with 6.5 kg soil at 0.14, 0.18, 0.21, 0.25, and 0.29 g N/kg soil, and applied in two splits: 60 percent as basal dressing in the mixture and 40 percent as topdressing. The fresh yields of the two vegetable species increased with increasing input of urea-N, but there was a significant quadratic relationship between the dose of urea-N fertilizer application and the fresh yields. When the dosage of urea-N fertilizer reached a certain value, nitrate readily accumulated in the two kinds of plants due to the decrease in NR activity; furthermore, there was a negative linear correlation between nitrate content and NR activity. With increasing input of urea-N, ascorbic acid and soluble sugar initially increased and then declined, and crude fiber rapidly decreased. Total absorbed N (TAN), N derived from fertilizer (Ndff), and N derived from soil (Ndfs) increased, and the ratio of Ndff to TAN also increased, but the ratio of Ndfs to TAN as well as the NUE of urea-N fertilizer decreased with increasing input of urea-N. These results suggested that the increasing application of labeled N fertilizer led to an increase in unlabeled N (namely, Ndfs), presumably due to an "added nitrogen interaction" (ANI); that the decrease in NUE of urea-N fertilizer may be due to fertilization in excess of plant requirements and to the ANI; and that the decrease in the two vegetable yields with increasing addition of urea-N was possibly because the excess accumulation of nitrate reached a toxic level.
Institute of Scientific and Technical Information of China (English)
王昕天
2014-01-01
Objective: To better support the distribution of health resources in China. Methods: Based on data on residents' health, health resources, and the marketization index in China from 2005 to 2010, stochastic frontier analysis and fixed-effect panel data methods were used to measure the trend of technical efficiency in each province and city. Results and Conclusion: (1) The distribution of technical efficiency across provinces and cities is uneven and generally low; (2) there are obvious differences in the trend of technical efficiency among regions; (3) government input in the medical and health field should continue to be tilted toward the central and western areas.
Discharge estimation based on machine learning
Institute of Scientific and Technical Information of China (English)
Zhu JIANG; Hui-yan WANG; Wen-wu SONG
2013-01-01
To overcome the limitations of traditional stage-discharge models in describing the dynamic characteristics of a river, a machine learning method of non-parametric regression, locally weighted regression, was used to estimate discharge. With the purpose of improving the precision and efficiency of river discharge estimation, a novel machine learning method is proposed: the clustering-tree weighted regression method. First, the training instances are clustered. Second, the k-nearest neighbor method is used to assign new stage samples to the best-fit cluster. Finally, the daily discharge is estimated. In the estimation process, the interference of irrelevant information can be avoided, so that the precision and efficiency of daily discharge estimation are improved. Observed data from the Luding Hydrological Station were used for testing. The simulation results demonstrate that the precision of this method is high. This provides a new, effective method for discharge estimation.
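The cluster-then-estimate-locally pipeline can be sketched roughly as follows, with a tiny 1-D k-means standing in for the paper's clustering tree and an inverse-distance weighted average standing in for the locally weighted regression. The stage/discharge data are synthetic.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means for clustering stage readings."""
    cents = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in cents]
        for v in values:
            groups[min(range(len(cents)), key=lambda j: abs(v - cents[j]))].append(v)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    return cents

def estimate_discharge(stage, train_stage, train_q, k=2):
    """Cluster the training stages, pick the cluster nearest the new stage,
    then take an inverse-distance weighted average of the discharges in that
    cluster (a simple stand-in for locally weighted regression)."""
    cents = kmeans_1d(train_stage, k)
    best = min(range(len(cents)), key=lambda j: abs(stage - cents[j]))
    members = [(s, q) for s, q in zip(train_stage, train_q)
               if min(range(len(cents)), key=lambda j: abs(s - cents[j])) == best]
    weights = [1.0 / (abs(stage - s) + 1e-6) for s, _ in members]
    return sum(w * q for w, (_, q) in zip(weights, members)) / sum(weights)

stages = [1.0, 1.1, 1.2, 3.0, 3.1, 3.2]        # river stage (m), synthetic
flows = [10.0, 12.0, 14.0, 80.0, 85.0, 90.0]   # discharge (m^3/s), synthetic
q = estimate_discharge(1.15, stages, flows)     # uses only the low-stage cluster
```

Because the query stage falls in the low-stage cluster, the high-flow observations never influence the estimate; this is the "interference of irrelevant information" the abstract says the method avoids.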
Relative Pose Estimation Algorithm with Gyroscope Sensor
Directory of Open Access Journals (Sweden)
Shanshan Wei
2016-01-01
Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
Interactive inverse kinematics for human motion estimation
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome
2009-01-01
We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion estimation system uses a single camera to estimate the motion of a human. The results show that inverse kinematics can significantly speed up the estimation process, while retaining a quality comparable to a full pose motion estimation system. Our novelty lies primarily in the use of inverse kinematics to significantly speed up the particle filtering. It should be stressed that the observation part of the system has not been our focus, and as such is described only for the sake of completeness. With our approach it is possible to construct a robust and computationally efficient system for human motion estimation.
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve...
Directory of Open Access Journals (Sweden)
Douglas Sampaio Henrique
2005-06-01
Full Text Available Data on 320 animals were obtained from eight comparative slaughter studies performed under tropical conditions and used to estimate the total efficiency of utilization of the metabolizable energy intake (MEI), which varied from 77 to 419 kcal kg^-0.75 d^-1. The data also contained direct measures of the recovered energy (RE), which allowed the heat production (HE) to be calculated by difference. RE was regressed on MEI, and deviations from linearity were evaluated using the F-test. The respective estimates of the fasting heat production, the intercept, and the slope of the relationship between RE and MEI were 73 kcal kg^-0.75 d^-1, 42 kcal kg^-0.75 d^-1, and 0.37. Hence, the total efficiency was estimated by dividing the net energy for maintenance and growth by the metabolizable energy intake. The estimated total efficiency of ME utilization and analogous estimates based on the beef cattle NRC model were employed in an additional study to evaluate their predictive power in terms of the mean square deviations for both temperate and tropical conditions. The two approaches presented similar predictive power, but the proposed one had a 22% lower mean squared deviation even with its more simplified structure.
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
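A minimal kriging sketch, assuming simple kriging with a known mean and an exponential covariance model; the optimized solvers, covariance tapering, and fast multipole techniques of the work above are well beyond this illustration, and the sample points and covariance parameters are made up.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny dense solver)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def simple_krige(pts, vals, query, mean, sill=1.0, rng=2.0):
    """Simple kriging with exponential covariance C(h) = sill*exp(-h/rng);
    the weights w solve the linear system K w = k."""
    def cov(p, q):
        return sill * math.exp(-math.dist(p, q) / rng)
    K = [[cov(p, q) for q in pts] for p in pts]   # sample-sample covariances
    k = [cov(p, query) for p in pts]              # sample-query covariances
    w = solve(K, k)
    return mean + sum(wi * (v - mean) for wi, v in zip(w, vals))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [5.0, 7.0, 6.0]
z = simple_krige(pts, vals, query=(0.5, 0.5), mean=6.0)
```

The dense solve here is O(n^3), which is exactly the cost the iterative and tapered approaches above are designed to avoid.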
Optimizing Antenna Size to Maximize Photosynthetic Efficiency
The theoretical upper limit for the operational efficiency of plant photosynthesis has been estimated from a detailed stepwise analysis of the biophysical and biochemical subprocesses to be about 4.6% for C3 and 6.0% for C4 plants. The highest short-term efficiencies observed for plants in the field, as...
Institute of Scientific and Technical Information of China (English)
田刚; 李南
2011-01-01
Based on panel data for 29 regions of mainland China during the period 1991-2007, an empirical analysis of the disparity and the exogenous factors affecting the technical efficiency of the logistics industry is conducted using a single-stage estimation procedure of the stochastic frontier production function. The results show that the overall technical efficiency is low, and disparities between regions are expanding; the proportion of the state-owned economy in fixed assets and government intervention impede the improvement of efficiency, but the negative impacts are gradually decreasing; human capital and degree of openness have positive effects on efficiency, and in the central and western regions there is an interaction between lower human capital and degree of openness that widens the efficiency gap between these two regions and the east; with the implementation of the western development strategy, industrial structure has become a significantly positive factor in the west; as far as the logistics development environment is concerned, a "central depression" phenomenon can be found in the central region; and improvement of logistics is important for promoting coordinated regional development.
Dose-response curve estimation: a semiparametric mixture approach.
Yuan, Ying; Yin, Guosheng
2011-12-01
In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples.
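The mixture idea can be sketched with a least-squares line as the parametric curve and a Nadaraya-Watson kernel regression as the nonparametric one. The residual-based weight below is a simple stand-in, not the paper's weighting scheme, and the dose-response data are synthetic.

```python
import math

def linear_fit(x, y):
    """Least-squares line as the parametric curve estimate."""
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return lambda t: a + b * t

def kernel_fit(x, y, bandwidth):
    """Nadaraya-Watson kernel regression as the nonparametric estimate."""
    def f(t):
        w = [math.exp(-((t - xi) / bandwidth) ** 2 / 2) for xi in x]
        return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return f

def mixture_fit(x, y, bandwidth=0.5):
    """Weighted average of the two curves; the parametric weight grows with
    its goodness of fit (a simple residual-based weight, for illustration)."""
    fp, fn = linear_fit(x, y), kernel_fit(x, y, bandwidth)
    rp = sum((yi - fp(xi)) ** 2 for xi, yi in zip(x, y))   # parametric residuals
    rn = sum((yi - fn(xi)) ** 2 for xi, yi in zip(x, y))   # nonparametric residuals
    lam = rn / (rn + rp + 1e-12)      # better parametric fit -> larger weight
    return lambda t: lam * fp(t) + (1 - lam) * fn(t)

dose = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
resp = [0.1, 0.4, 0.9, 1.4, 2.1, 2.4, 3.1]   # nearly linear synthetic responses
curve = mixture_fit(dose, resp)
```

On nearly linear data the weight leans toward the parametric line, matching the abstract's claim that the mixture inherits parametric efficiency when the model fits.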
Comparison of Vehicle Efficiency Technology Attributes and Synergy Estimates
Energy Technology Data Exchange (ETDEWEB)
Duleep, G.
2011-02-01
Analyzing the future fuel economy of light-duty vehicles (LDVs) requires detailed knowledge of the vehicle technologies available to improve LDV fuel economy. The National Highway Transportation Safety Administration (NHTSA) has been relying on technology data from a 2001 National Academy of Sciences (NAS) study (NAS 2001) on corporate average fuel economy (CAFE) standards, but the technology parameters were updated in the new proposed rulemaking (EPA and NHTSA 2009) to set CAFE and greenhouse gas standards for the 2011 to 2016 period. The update is based largely on an Environmental Protection Agency (EPA) analysis of technology attributes augmented by NHTSA data and contractor staff assessments. These technology cost and performance data were documented in the Draft Joint Technical Support Document (TSD) issued by EPA and NHTSA in September 2009 (EPA/NHTSA 2009). For these tasks, the Energy and Environmental Analysis (EEA) division of ICF International (ICF) examined each technology and technology package in the Draft TSD and assessed their costs and performance potential based on U.S. Department of Energy (DOE) program assessments. ICF also assessed the technologies' other relevant attributes based on data from actual production vehicles and from recently published technical articles in engineering journals. ICF examined technology synergy issues through an ICF in-house model that uses a discrete parameter approach.
Efficient estimation of the maximum metabolic productivity of batch systems
Energy Technology Data Exchange (ETDEWEB)
St. John, Peter C.; Crowley, Michael F.; Bomble, Yannick J.
2017-01-31
Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable.
Efficiency estimation for permanent magnets of synchronous wind generators
Directory of Open Access Journals (Sweden)
Serebryakov A.
2014-02-01
Full Text Available The use of permanent magnets in wind generators opens broad possibilities for raising the efficiency of small- and medium-power wind energy installations (WEI). In addition, the mass of the generators decreases, reliability increases, and operating costs fall. However, the use of high-energy permanent magnets in generators of higher power raises a number of problems, which can be successfully overcome if the magnets are correctly arranged with respect to their orientation when creating the magnetic field in the air gap of the electrical machine. This paper seeks to show that substantial advantages exist in small- and medium-power wind generators if the permanent magnets are magnetized tangentially with respect to the air gap.
Stochastic Frontier Estimation of Efficient Learning in Video Games
Hamlen, Karla R.
2012-01-01
Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…
Using MCMC chain outputs to efficiently estimate Bayes factors
Morey, Richard D.; Rouder, Jeffrey N.; Pratte, Michael S.; Speckman, Paul L.
2011-01-01
One of the most important methodological problems in psychological research is assessing the reasonableness of null models, which typically constrain a parameter to a specific value such as zero. The Bayes factor has recently been advocated in the statistical and psychological literature as a principled…
Fast Katz and commuters : efficient estimation of social relatedness.
Energy Technology Data Exchange (ETDEWEB)
On, Byung-Won; Lakshmanan, Laks V. S.; Esfandiar, Pooya; Bonchi, Francesco; Greif, Chen; Gleich, David F.
2010-12-01
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
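As a rough illustration of the pairwise problem this abstract describes, the sketch below accumulates the truncated Katz series K_ij = Σ_k α^k (A^k)_ij by repeated matrix-vector products. This is a naive partial-sum illustration, not the Lanczos/quadrature bound technique of the paper; the toy graph and the damping parameter α are invented for the example.

```python
import numpy as np

def katz_pair_score(A, i, j, alpha, iters=50):
    """Approximate the Katz score K_ij = sum_{k>=1} alpha^k (A^k)_ij
    by accumulating truncated partial sums of the series."""
    n = A.shape[0]
    v = np.zeros(n)
    v[j] = 1.0                   # start from the basis vector e_j
    score = 0.0
    for k in range(1, iters + 1):
        v = A @ v                # v now holds A^k e_j
        score += alpha**k * v[i]
    return score

# Tiny 4-node path graph 0-1-2-3 (illustrative data, not from the paper).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
alpha = 0.3  # must be below 1/(spectral radius) for the series to converge
print(katz_pair_score(A, 0, 3, alpha))
```

Since every added term is nonnegative here, each partial sum is a lower bound on the converged score, which is the sense in which truncation gives one side of a two-sided bound.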
Efficient Acoustic Uncertainty Estimation for Transmission Loss Calculations
2011-09-01
Kevin R. James and David R. Dowling, Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109-2133.
Parallel Multiscale Autoregressive Density Estimation
Reed, Scott; Oord, Aäron van den; Kalchbrenner, Nal; Colmenarejo, Sergio Gómez; Wang, Ziyu; Belov, Dan; de Freitas, Nando
2017-01-01
PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density e...
Liu Estimator Based on An M Estimator
Directory of Open Access Journals (Sweden)
Hatice ŞAMKAR
2010-01-01
Full Text Available Objective: In multiple linear regression analysis, multicollinearity and outliers are two main problems. In the presence of multicollinearity, biased estimation methods such as ridge regression, the Stein estimator, principal component regression and the Liu estimator are used. On the other hand, when outliers exist in the data, robust estimators that reduce the effect of outliers are preferred. Material and Methods: In this study, to cope with the combined problem of multicollinearity and outliers, the Liu estimator based on an M estimator (the Liu M estimator) is studied. In addition, the mean square error (MSE) criterion is used to compare the Liu M estimator with the Liu estimator based on the ordinary least squares (OLS) estimator. Results: OLS, Huber M, Liu and Liu M estimates, and the MSEs of these estimates, have been calculated for a data set taken from a study of determinants of physical fitness. The Liu M estimator gave the best performance on this data set: both MSE(β̂_LM) = 0.0078 < MSE(β̂_M) = 0.0508 and MSE(β̂_LM) = 0.0078 < MSE(β̂_L) = 0.0085. Conclusion: When both outliers and multicollinearity are present in a dataset, using robust estimators reduces the effect of outliers but does not solve the problem of multicollinearity; conversely, biased methods address multicollinearity while the estimates remain exposed to outliers. When multicollinearity and outliers occur together, it has been shown that combining the methods designed to deal with each of these problems is better than using them individually.
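A minimal numeric sketch of the combination the abstract describes, under the common Liu-type form β̂_d = (X'X + I)^(-1)(X'y + d·β̂_plug) with a Huber M estimate plugged in. The data, the biasing parameter d, and the tuning constant are invented for illustration and are not the paper's.

```python
import numpy as np

def huber_m(X, y, k=1.345, iters=50):
    """Huber M estimate via iteratively reweighted least squares (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r) / max(s, 1e-12)
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))     # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

def liu(X, y, d, beta_plug):
    """Liu-type estimator (X'X + I)^(-1) (X'y + d * beta_plug)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * beta_plug)

# Invented near-collinear data with a few injected outliers.
rng = np.random.default_rng(0)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)           # near-collinear second regressor
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)
y[::10] += 5.0                                 # outliers in the response

beta_m = huber_m(X, y)
beta_liu_m = liu(X, y, 0.5, beta_m)            # "Liu M": Liu step on the M estimate
```

The Liu step shrinks against the ill-conditioned X'X while the plugged-in M estimate keeps the outliers from dominating, which is the division of labour the abstract's conclusion argues for.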
Energy-efficient cooking methods
Energy Technology Data Exchange (ETDEWEB)
De, Dilip K. [Department of Physics, University of Jos, P.M.B. 2084, Jos, Plateau State (Nigeria); Muwa Shawhatsu, N. [Department of Physics, Federal University of Technology, Yola, P.M.B. 2076, Yola, Adamawa State (Nigeria); De, N.N. [Department of Mechanical and Aerospace Engineering, The University of Texas at Arlington, Arlington, TX 76019 (United States); Ikechukwu Ajaeroh, M. [Department of Physics, University of Abuja, Abuja (Nigeria)
2013-02-15
Energy-efficient new cooking techniques have been developed in this research. Using a stove with 649 ± 20 W of power, the minimum heat, specific heat of transformation, and on-stove time required to completely cook 1 kg of dry beans (with water and other ingredients) and 1 kg of raw potato are found to be: 710 ± kJ, 613 ± kJ, and 1,144 ± 10 s, respectively, for beans and 287 ± 12 kJ, 200 ± 9 kJ, and 466 ± 10 s for Irish potato. Extensive research shows that these figures are, to date, the lowest amounts of heat ever used to cook beans and potato, and less than half the energy used in conventional cooking with a pressure cooker. The efficiency of the stove was estimated to be 52.5 ± 2%. Ways to further improve the efficiency of cooking with a normal stove and a solar cooker, and to further preserve food nutrients, are discussed. Our method of cooking, when applied globally, is expected to contribute to Clean Development Mechanism (CDM) potential. The approximate values of the minimum and maximum CDM potentials are estimated to be 7.5 × 10^11 and 2.2 × 10^13 kg of carbon credit annually. A precise estimate of the CDM potential of our cooking method will be reported later.
Distribution Estimation with Smoothed Auxiliary Information
Institute of Scientific and Technical Information of China (English)
Xu Liu; Ahmad Ishfaq
2011-01-01
Distribution estimation is very important for making statistical inference about parameters, or functions of parameters, based on the distribution. In this work we propose an estimator of the distribution of some variable with non-smooth auxiliary information, for example, symmetry of that variable's distribution. A smoothing technique is employed to handle the non-differentiable function, so that a distribution can be estimated based on smoothed auxiliary information. Asymptotic properties of the distribution estimator are derived and analyzed. The distribution estimators based on our method are found to be significantly more efficient than the corresponding estimators without the auxiliary information. Some simulation studies are conducted to illustrate the finite-sample performance of the proposed estimators.
Estimation of linear functionals in emission tomography
Energy Technology Data Exchange (ETDEWEB)
Kuruc, A.
1995-08-01
In emission tomography, the spatial distribution of a radioactive tracer is estimated from a finite sample of externally-detected photons. We present an algorithm-independent theory of the statistical accuracy attainable in emission tomography that makes minimal assumptions about the underlying image. Let f denote the tracer density as a function of position (i.e., f is the image being estimated). We consider the problem of estimating the linear functional Φ(f) ≡ ∫ φ(x) f(x) dx, where φ is a smooth function, from n independent observations identically distributed according to the Radon transform of f. Assuming only that f is bounded above and below away from 0, we construct statistically efficient estimators for Φ(f). By definition, the variance of the efficient estimator is a best-possible lower bound (depending on φ and f) on the variance of unbiased estimators of Φ(f). Our results show that, in general, the efficient estimator will have a smaller variance than the standard estimator based on the filtered-backprojection reconstruction algorithm. The improvement in performance is obtained by exploiting the range properties of the Radon transform.
Energy Technology Data Exchange (ETDEWEB)
Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman
2004-03-30
Data center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy-intensive data centers are crucial to California's industries and many other institutions in the state (such as universities), and they play an important role in the constantly evolving communications industry. To better understand the energy requirements and the energy-efficiency improvement potential of these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: first, to characterize current data center electricity use; and second, to develop a research "roadmap" defining and prioritizing possible future public-interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts; in total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted, which included internet hosting, corporate, institutional, governmental, educational and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high-quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed…
Energy Technology Data Exchange (ETDEWEB)
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
OFDM System Channel Estimation with Hidden Pilot
Institute of Scientific and Technical Information of China (English)
YANG Feng; LIN Cheng-yu; ZHANG Wen-jun
2007-01-01
Channel estimation using pilots is commonly used in OFDM systems. The pilot is usually time-division multiplexed with the informative sequence, and one of the main drawbacks is the loss of bandwidth. In this paper, a new method is proposed to perform channel estimation in an OFDM system: the pilot is arithmetically added to the output of the OFDM modulator, and the receiver uses this hidden pilot to obtain an accurate estimate of the channel. The pilot is then removed after channel estimation. The Cramér-Rao lower bound for this method is derived, and the performance of the algorithm is shown. Compared with traditional methods, the proposed algorithm increases bandwidth efficiency dramatically.
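A toy flat-fading illustration of the superimposed ("hidden") pilot idea described above: a known pilot is added on top of zero-mean data symbols, the receiver correlates with the pilot so the data averages out, and the pilot contribution is subtracted after estimation. The signal model, pilot power, and noise level are invented; this is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
h = 0.8 * np.exp(1j * 0.6)                   # unknown flat channel (invented)
pilot = np.ones(N, dtype=complex)            # known superimposed pilot sequence
data = rng.choice([-1.0, 1.0], size=N) + 0j  # zero-mean BPSK information symbols
tx = data + 0.5 * pilot                      # pilot arithmetically added to data
noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
rx = h * tx + noise

# Correlate with the known pilot: the zero-mean data (nearly) averages out,
# leaving an estimate of the channel gain.
h_hat = (rx @ np.conj(pilot)) / (0.5 * np.sum(np.abs(pilot) ** 2))
rx_clean = rx - h_hat * 0.5 * pilot          # remove the pilot after estimation
```

The residual estimation error here comes from the finite-sample average of the data term, which shrinks like 1/√N, so longer blocks buy accuracy without spending any extra bandwidth on the pilot.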
External efficiency of schools in pre-vocational secondary education
Timmermans, A. C.; Rekers-Mombarg, L. T. M.; Vreeburg, B. A. N. M.
2016-01-01
The extent to which students from a prevocational secondary school are placed in training programmes in senior secondary vocational education that match with their abilities can be estimated by an indicator called External Efficiency. However, estimating the external efficiency of secondary schools
DEFF Research Database (Denmark)
Lindström, Erik; Ionides, Edward; Frydendall, Jan;
2012-01-01
Parameter estimation in general state space models is not trivial as the likelihood is unknown. We propose a recursive estimator for general state space models, and show that the estimates converge to the true parameters with probability one. The estimates are also asymptotically Cramer-Rao effic...
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found…
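The correlation Doppler estimator (CDE) mentioned above can be sketched in a few lines: the Doppler centroid (modulo the PRF) is read off the phase of the lag-one autocorrelation of the azimuth signal. The synthetic signal below is an invented illustration, not SEASAT data.

```python
import numpy as np

def cde_doppler_centroid(x, prf):
    """Correlation Doppler estimator: phase of the lag-one autocorrelation
    of the azimuth signal, scaled to Hz; unambiguous only within +/- PRF/2."""
    acf1 = np.sum(x[1:] * np.conj(x[:-1]))
    return prf * np.angle(acf1) / (2.0 * np.pi)

# Synthetic azimuth signal with a known centroid (invented parameters).
prf = 1700.0                  # pulse repetition frequency [Hz]
fdc_true = 230.0              # Doppler centroid [Hz]
rng = np.random.default_rng(2)
n = np.arange(4096)
amp = 1.0 + 0.3 * rng.normal(size=n.size)    # real amplitude fluctuations
x = amp * np.exp(2j * np.pi * fdc_true * n / prf)
print(cde_doppler_centroid(x, prf))
```

Because the estimator needs only one complex multiply-accumulate per sample, it has the low computation and memory cost that the abstract attributes to the time-domain algorithms.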
Ion-by-ion Cooling efficiencies
Gnat, Orly
2011-01-01
We present ion-by-ion cooling efficiencies for low-density gas. We use Cloudy (ver. 08.00) to estimate the cooling efficiencies for each ion of the first 30 elements (H-Zn) individually. We present results for gas temperatures between 10^4 and 10^8 K, assuming low densities and optically thin conditions. When non-equilibrium ionization plays a significant role, the ionization states deviate from those that obtain in collisional ionization equilibrium (CIE), and the local cooling efficiency at any given temperature depends on the specific non-equilibrium ion fractions. The results presented here allow for an efficient estimate of the total cooling efficiency for any ionic composition. We also list the elemental cooling efficiencies assuming CIE conditions. These can be used to construct CIE cooling efficiencies for non-solar abundance ratios, or to estimate the cooling due to elements not explicitly included in any non-equilibrium computation. All the computational results are listed in convenient online tables.
Distributed fusion estimation for sensor networks with communication constraints
Zhang, Wen-An; Song, Haiyu; Yu, Li
2016-01-01
This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.
Empirical likelihood estimation of discretely sampled processes of OU type
Institute of Scientific and Technical Information of China (English)
SUN ShuGuang; ZHANG XinSheng
2009-01-01
This paper presents an empirical likelihood estimation procedure for parameters of a discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal; moreover, this estimator is shown to be asymptotically efficient under some conditions. When the intensity parameter can be exactly recovered, we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...
System on Programable Chip for Performance Estimation of Loom Machine
Singh, Gurpreet; S, Surekha K; Pujari, S
2012-01-01
A system on programmable chip for the performance estimation of loom machines is presented; it automatically calculates the efficiency of the loom machine and the meter count of the weaved cloth. Previously the same was done by a manual process, which was not efficient. This article is intended for loom machines which are not modern.
System on Programable Chip for Performance Estimation of Loom Machine
Directory of Open Access Journals (Sweden)
Gurpreet Singh
2012-03-01
Full Text Available A system on programmable chip for the performance estimation of loom machines is presented; it automatically calculates the efficiency of the loom machine and the meter count of the weaved cloth. Previously the same was done by a manual process, which was not efficient. This article is intended for loom machines which are not modern.
Efficiency in higher education
Directory of Open Access Journals (Sweden)
Duguleană, C.
2011-01-01
Full Text Available The National Education Law establishes the principles of equity and efficiency in higher education. The concept of efficiency has different meanings according to the type of funding and the time horizon: short- or long-term management approaches. Understanding the black box of efficiency may offer solutions for effective activity. The paper presents efficiency in a production firm and in a university side by side, for a better understanding of the specificities of efficiency in higher education.
Sparse DOA estimation with polynomial rooting
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve highresolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root...
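As a sketch of the grid-based sparse reconstruction idea in this abstract, the snippet below recovers two DOAs from one snapshot with greedy orthogonal matching pursuit over a dictionary of ULA steering vectors. The paper uses a convex CS solver whose dual variables feed a polynomial-rooting step, so this greedy stand-in and all array parameters are illustrative assumptions.

```python
import numpy as np

def steering(theta_deg, m, d=0.5):
    """ULA steering vector; element spacing d in wavelengths (assumed)."""
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(np.radians(theta_deg)))

# One noisy snapshot from two sources on a 16-element array (invented data).
m = 16
grid = np.arange(-90.0, 90.5, 0.5)             # DOA search grid in degrees
rng = np.random.default_rng(3)
y = 1.0 * steering(-20.0, m) + 0.7 * steering(35.0, m)
y = y + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# Greedy sparse recovery (orthogonal matching pursuit) over the steering
# dictionary, standing in for the convex CS solver used in the paper.
D = np.column_stack([steering(t, m) for t in grid])
residual, support = y.copy(), []
for _ in range(2):                              # two sources assumed known
    support.append(int(np.argmax(np.abs(D.conj().T @ residual))))
    Ds = D[:, support]
    coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
    residual = y - Ds @ coef
doas = sorted(float(grid[i]) for i in support)
print(doas)
```

The on-grid formulation shown here is exactly what the paper's polynomial-rooting step is designed to avoid: rooting lets the DOAs fall off the grid, whereas this sketch is limited to the 0.5-degree grid resolution.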
The energy efficiency of lead selfsputtering
DEFF Research Database (Denmark)
Andersen, Hans Henrik
1968-01-01
The sputtering efficiency (i.e. the ratio between sputtered energy and impinging ion energy) has been measured for 30–75 keV lead ions impinging on polycrystalline lead. The results are in good agreement with recent theoretical estimates. © 1968 The American Institute of Physics
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
The Efficiency of Educational Production
DEFF Research Database (Denmark)
Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben
2015-01-01
Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use data envelopment analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications in order to analyse different aspects of efficiency. In purely quantitative models (where inputs and outputs are expenditure and number of students at different levels of the educational system) and in models where graduation or completion rates are included as indicators of output quality, Finland…
The Efficiency of Educational Production
DEFF Research Database (Denmark)
Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben
Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use Data Envelopment Analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications in order to analyse different aspects of efficiency. In purely quantitative models (where inputs and outputs are expenditure and number of students at different levels of the educational system) and in models where graduation or completion rates are included as an indicator of output quality, Finland…
Golbabaei-Asl, M.; Knight, D.; Wilkinson, S.
2013-01-01
The thermal efficiency of a SparkJet is evaluated by measuring the impulse response of a pendulum subject to a single spark discharge. The SparkJet is attached to the end of a pendulum. A laser displacement sensor is used to measure the displacement of the pendulum upon discharge. The pendulum motion is a function of the fraction of the discharge energy that is channeled into the heating of the gas (i.e., increasing the translational-rotational temperature). A theoretical perfect gas model is used to estimate the portion of the energy from the heated gas that results in pendulum displacement equivalent to that in the experiment. The earlier results from multiple runs for different capacitances of C = 3, 5, 10, 20, and 40 μF demonstrate that the thermal efficiency decreases with higher capacitive discharges [1]. In the current paper, results from additional run cases have been included and confirm the previous results.
Corporate Accounting Policy Efficiency Improvement
Directory of Open Access Journals (Sweden)
Elena K. Vorobei
2013-01-01
Full Text Available The article is focused on the efficient use of different methods of tax accounting for the optimization of income tax expenses and their consolidation in corporate accounting policy. The article draws reasoned conclusions concerning the optimal selection of depreciation methods for tax and bookkeeping accounting, their consolidation in corporate accounting policy, and the consolidation of optimal methods of cost recovery in production, considering the business environment. The impact of the selected methods on corporate income tax and corporate property tax rates was traced, and the tax recovery was estimated.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The genome length is a fundamental feature of a species. This note outlines the general concept and estimation methods for the physical and genetic length. Some formulae for estimating the genetic length are derived in detail. As examples, the genome genetic length of Pinus pinaster Ait. and the genetic length of chromosome VI of Oryza sativa L. are estimated from partial linkage data.
Making Connections with Estimation.
Lobato, Joanne E.
1993-01-01
Describes four methods to structure estimation activities that enable students to make connections between their understanding of numbers and extensions of those concepts to estimating. Presents activities that connect estimation with other curricular areas, other mathematical topics, and real-world applications. (MDH)
Robust Spectral Estimation of Track Irregularity
Institute of Scientific and Technical Information of China (English)
Fu Wenjuan; Chen Chunjun
2005-01-01
Because the existing spectral estimation methods for railway track irregularity analysis are very sensitive to outliers, a robust spectral estimation method is presented to process track irregularity signals. The proposed robust method is verified using 100 groups of clean/contaminated data reflecting the vertical profile irregularity taken from the Beijing-Guangzhou railway, with a sampling of 33 data points every 10 m, and is compared with the Auto-Regressive (AR) model. The experimental results show that the proposed robust estimation is resistant to noise and insensitive to outliers, and is superior to the AR model in terms of efficiency, stability and reliability.
Multiregional estimation of gross internal migration flows.
Foot, D K; Milne, W J
1989-01-01
"A multiregional model of gross internal migration flows is presented in this article. The interdependence of economic factors across all regions is recognized by imposing a non-stochastic adding-up constraint that requires total inmigration to equal total outmigration in each time period. An iterated system estimation technique is used to obtain asymptotically consistent and efficient parameter estimates. The model is estimated for gross migration flows among the Canadian provinces over the period 1962-86 and then is used to examine the likelihood of a wash-out effect in net migration models. The results indicate that previous approaches that use net migration equations may not always be empirically justified."
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
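A small numerical illustration of the sensitivity question studied above: compute normalized mean-variance weights w ∝ Σ^(-1)μ, perturb one expected return, and observe the change in weights. The three-asset numbers are invented, and the estimator is a standard textbook form rather than the paper's derivation.

```python
import numpy as np

def weights(mu, sigma):
    """Normalized mean-variance weights w ∝ Σ^{-1} μ (textbook form, assumed)."""
    w = np.linalg.solve(sigma, mu)
    return w / w.sum()

# Invented three-asset example: expected returns and covariance matrix.
mu = np.array([0.08, 0.09, 0.10])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.05, 0.02],
                  [0.00, 0.02, 0.06]])

w0 = weights(mu, sigma)
w1 = weights(mu + np.array([0.01, 0.00, 0.00]), sigma)  # error in one mean
print(np.abs(w1 - w0).max())  # largest weight shift caused by the error
```

The ratio of weight change to parameter change is a finite-difference stand-in for the Lipschitz constant the paper bounds analytically; ill-conditioned Σ makes that ratio blow up, which is the "extreme case" the abstract warns about.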
Coordination of Energy Efficiency and Demand Response
Energy Technology Data Exchange (ETDEWEB)
Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison
2010-01-29
This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries, which consume more than 70 percent of the nation's natural gas and electricity, is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that "the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW" by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.
Hardware Accelerated Power Estimation
Coburn, Joel; Raghunathan, Anand
2011-01-01
In this paper, we present power emulation, a novel design paradigm that utilizes hardware acceleration for the purpose of fast power estimation. Power emulation is based on the observation that the functions necessary for power estimation (power model evaluation, aggregation, etc.) can be implemented as hardware circuits. Therefore, we can enhance any given design with "power estimation hardware", map it to a prototyping platform, and exercise it with any given test stimuli to obtain power consumption estimates. Our empirical studies with industrial designs reveal that power emulation can achieve significant speedups (10X to 500X) over state-of-the-art commercial register-transfer level (RTL) power estimation tools.
Optomechanical parameter estimation
Ang, Shan Zheng; Bowen, Warwick P; Tsang, Mankei
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér-Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation-maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér-Rao bound most closely. With its ability to estimate most of the system parameters, the EM algorithm is envisioned to be useful for optomechanical sensing, atomic magnetometry, and classical or quantum system identification applications in general.
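The comparison against the Cramér-Rao bound can be illustrated on a much simpler toy problem than the optomechanical model of the paper: estimating the power (variance) of i.i.d. zero-mean Gaussian noise, where the bound 2σ⁴/N is attained by the maximum-likelihood estimator. All parameter values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2_true = 2.0          # true noise power (variance) to be estimated
n_samples, n_trials = 500, 2000

# Cramer-Rao lower bound on the variance of any unbiased estimator
# of sigma^2 from n i.i.d. zero-mean Gaussian samples: 2*sigma^4 / n
crlb = 2 * sigma2_true**2 / n_samples

# Monte Carlo: maximum-likelihood estimate of the power in each trial
estimates = np.array([
    np.mean(rng.normal(0.0, np.sqrt(sigma2_true), n_samples)**2)
    for _ in range(n_trials)
])
mse = np.mean((estimates - sigma2_true)**2)

print(f"CRLB = {crlb:.4f}, empirical MSE = {mse:.4f}")
```

For this estimator the empirical mean squared error essentially saturates the bound, which is the behaviour the abstract reports for the EM estimator on real data.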
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
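The bias described here has a well-known practical face: the inverse of a sample covariance matrix overestimates the precision matrix by a factor (n−1)/(n−p−2), which can be removed by rescaling (often called the Hartlap factor in the cosmology literature). A minimal numerical sketch, with arbitrary assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_s, n_trials = 4, 30, 4000   # data dimension, realizations per covariance, trials
true_cov = np.eye(p)

# Average the inverse sample covariance over many Wishart-distributed draws.
inv_sum = np.zeros((p, p))
for _ in range(n_trials):
    data = rng.multivariate_normal(np.zeros(p), true_cov, n_s)
    s = np.cov(data, rowvar=False)        # sample covariance (unbiased for C)
    inv_sum += np.linalg.inv(s)
inv_mean = inv_sum / n_trials             # biased high: ~ (n_s-1)/(n_s-p-2) * inv(C)

# Rescaling by the "Hartlap" factor removes the bias of the inverse:
hartlap = (n_s - p - 2) / (n_s - 1)
debiased = hartlap * inv_mean             # ~ inv(C)
```

With n_s = 30 and p = 4 the raw inverse is inflated by 29/24 ≈ 1.21, so ignoring the correction would overstate parameter constraints by roughly 20 percent.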
Energy efficiency; Efficacite energetique
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-06-15
This road-map, proposed by the Total Group, aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, the energy efficiency of industry, and Total's commitment. (A.L.B.)
Wight, Jonathan B.
2017-01-01
The normative elements underlying efficiency are more complex than generally portrayed and rely upon ethical frameworks that are generally absent from classroom discussions. Most textbooks, for example, ignore the ethical differences between Pareto efficiency (based on voluntary win-win outcomes) and the modern Kaldor-Hicks efficiency used in…
Making energy efficiency happen
Hirst, E.
1991-04-01
Improving energy efficiency is the least expensive and most effective way to address simultaneously several national issues. Improving efficiency saves money for consumers, increases economic productivity and international competitiveness, enhances national security by lowering oil imports, and reduces the adverse environmental effects of energy production. This paper discusses some of the many opportunities to improve efficiency, emphasizing the roles of government and utilities.
Modified estimators for the change point in hazard function
Karasoy, Durdu; Kadilar, Cem
2009-07-01
We propose consistent estimators for the change point in the hazard function by improving the estimators in [A.P. Basu, J.K. Ghosh, S.N. Joshi, On estimating change point in a failure rate, in: S.S. Gupta, J.O. Berger (Eds.), Statistical Decision Theory and Related Topics IV, vol. 2, Springer-Verlag, 1988, pp. 239-252] and [H.T. Nguyen, G.S. Rogers, E.A. Walker, Estimation in change point hazard rate model, Biometrika 71 (1984) 299-304]. By a simulation study, we show that the proposed estimators are more efficient than the original estimators in many cases.
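For intuition, a change point in a piecewise-constant hazard can be recovered by profiling the likelihood over candidate change points; the rates before and after the change have closed-form MLEs. The sketch below is a generic profile-likelihood illustration, not the estimators of the cited papers, and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, tau = 0.5, 2.0, 2.0      # hazard rate a before the change point tau, b after
n = 5000

# Simulate lifetimes by inverting the cumulative hazard: H(T) ~ Exp(1)
h = rng.exponential(1.0, n)
t = np.where(h < a * tau, h / a, tau + (h - a * tau) / b)

def profile_loglik(c):
    """Log-likelihood at candidate change point c, rates at their MLEs."""
    d1 = np.sum(t < c)                 # events before c
    d2 = n - d1                        # events after c
    t1 = np.minimum(t, c).sum()        # total exposure before c
    t2 = np.maximum(t - c, 0.0).sum()  # total exposure after c
    return d1 * np.log(d1 / t1) + d2 * np.log(d2 / t2) - d1 - d2

grid = np.linspace(0.5, 4.0, 141)
tau_hat = grid[np.argmax([profile_loglik(c) for c in grid])]
```

With 5000 observations the profiled maximum lands close to the true change point tau = 2.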
Barriers to Industrial Energy Efficiency - Study (Appendix A), June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This study examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This study also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
Barriers to Industrial Energy Efficiency - Report to Congress, June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This report examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This report also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
DETERMINANTS OF TECHNICAL EFFICIENCY ON PINEAPPLE FARMING
Directory of Open Access Journals (Sweden)
Nor Diana Mohd Idris
2013-01-01
Full Text Available This study analyzes the pineapple production efficiency of the Integrated Agricultural Development Project (IADP) in Samarahan, Sarawak, Malaysia, and also studies its determinants. In the study area, IADP plays an important role in rural development as a poverty alleviation program through agricultural development. Despite the many privileges the farmers receive, especially from the government, they are still less efficient. This study adopts Data Envelopment Analysis (DEA) to measure technical efficiency. Further, this study aims to examine the determinants of efficiency by estimating the level of farmer characteristics as a function of the farmer's age, education level, family labor, years of experience in agriculture, society membership and farm size. The estimation used the Tobit model. The results from this study show that the majority of farmers in IADP are still less efficient. In addition, the results show that relying on family labor, the years of experience in agriculture and also participation as an association member are all important determinants of the level of efficiency for the IADP farmers in the agricultural sector. Increasing agricultural productivity can also guarantee the achievement of a more optimal sustainable living in an effort to increase the farmers' income. Such information is valuable for extension services and policy makers since it can help to guide policies toward increased efficiency among pineapple farmers in Malaysia.
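DEA technical-efficiency scores of the kind used in such studies can be computed per farm as a small linear program (the input-oriented CCR model under constant returns to scale). The single-input, single-output data below are made up purely for illustration and are not the study's dataset.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: one input (e.g. land) and one output (e.g. pineapple yield) per farm
inputs = np.array([[2.0], [4.0], [5.0], [10.0]])
outputs = np.array([[4.0], [6.0], [5.0], [8.0]])
n, m, s = inputs.shape[0], inputs.shape[1], outputs.shape[1]

def ccr_efficiency(k):
    """Input-oriented CCR score of farm k: minimize theta such that some
    nonnegative combination of peers uses at most theta * inputs_k while
    producing at least outputs_k."""
    c = np.concatenate([[1.0], np.zeros(n)])        # variables: theta, lambda_1..n
    # sum_j lambda_j * x_j - theta * x_k <= 0       (input constraints)
    a_in = np.hstack([-inputs[k].reshape(m, 1), inputs.T])
    # -sum_j lambda_j * y_j <= -y_k                 (output constraints)
    a_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    res = linprog(c, A_ub=np.vstack([a_in, a_out]),
                  b_ub=np.concatenate([np.zeros(m), -outputs[k]]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

scores = [ccr_efficiency(k) for k in range(n)]
```

Farm 1 has the best output/input ratio and scores 1.0; every other farm's score is its ratio relative to that frontier. The inefficiency determinants would then be studied by regressing these scores on farmer characteristics, e.g. with a Tobit model as in the study.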
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
In this paper, we propose using realized range-based estimation to draw inference about the quadratic variation of jump-diffusion processes. We also construct a new test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient, the test...
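A range-based estimator of this kind can be sketched for a pure Brownian case: the squared high-low range of each interval, scaled by Parkinson's constant 4 log 2, estimates that interval's variance, and summing over intervals estimates the quadratic variation. The discretization below is an illustrative assumption, not the authors' exact estimator or asymptotics.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, horizon = 0.2, 1.0          # volatility and time span; QV = sigma^2 * horizon
n_int, steps = 200, 2000           # number of intervals, fine grid points per interval

dt = horizon / (n_int * steps)
path = np.concatenate(
    [[0.0], np.cumsum(rng.normal(0, sigma * np.sqrt(dt), n_int * steps))]
)

# Squared high-low range within each interval (including its opening value)
ranges_sq = np.array([
    (path[i*steps:(i+1)*steps + 1].max() - path[i*steps:(i+1)*steps + 1].min()) ** 2
    for i in range(n_int)
])

# Parkinson scaling: E[(high-low)^2] = 4*log(2) * interval variance
rv_range = ranges_sq.sum() / (4 * np.log(2))   # estimates sigma^2 * horizon = 0.04
```

Because the range uses the whole intra-interval path rather than two endpoints, this estimator has markedly lower variance than the ordinary realized variance built from the same number of intervals, which is the efficiency gain the abstract refers to.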
Range-based estimation of quadratic variation
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
This paper proposes using realized range-based estimators to draw inference about the quadratic variation of jump-diffusion processes. We also construct a range-based test of the hypothesis that an asset price has a continuous sample path. Simulated data shows that our approach is efficient...
Mechanism of estimation of marketing strategy
Directory of Open Access Journals (Sweden)
L.A. Kvyatkovska
2011-12-01
Full Text Available The article presents a chart of the basic elements that form an enterprise's marketing strategy and determines the indices that can be applied to estimate the efficiency of its realization within the limits of broadly known models.
Embedding capacity estimation of reversible watermarking schemes
Indian Academy of Sciences (India)
Rishabh Iyer; Rushikesh Borse; Subhasis Chaudhuri
2014-12-01
Estimation of the embedding capacity is an important problem specifically in reversible multi-pass watermarking and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on a disjoint group of pixels, specifically for pixel pairs. The proposed algorithm iteratively updates the co-occurrence matrix at every stage to estimate the multi-pass embedding capacity, and is much more efficient vis-a-vis actual watermarking. We also suggest an extremely efficient, pre-computable tree-based implementation which is conceptually similar to the co-occurrence based method, but provides the estimates in a single iteration, requiring a complexity akin to that of single-pass capacity estimation. We also provide upper bounds on the embedding capacity. We finally evaluate the performance of our algorithms on recent watermarking algorithms.
Revisiting energy efficiency fundamentals
Energy Technology Data Exchange (ETDEWEB)
Perez-Lombard, L.; Velazquez, D. [Grupo de Termotecnia, Escuela Superior de Ingenieros, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville (Spain); Ortiz, J. [Building Research Establishment (BRE), Garston, Watford, WD25 9XX (United Kingdom)
2013-05-15
Energy efficiency is a central target for energy policy and a keystone for mitigating climate change and achieving sustainable development. Although great efforts have been made over the last four decades to investigate the issue (measuring energy efficiency, understanding its trends and impacts on energy consumption, and designing effective energy efficiency policies), many energy efficiency-related concepts, some methodological problems in the construction of energy efficiency indicators (EEI) and even some of the potential energy efficiency gains are often ignored or misunderstood, causing no little confusion and controversy not only for laymen but even for specialists. This paper aims to revisit, analyse and discuss some fundamental efficiency topics that could improve the understanding and critical judgement of efficiency stakeholders and that could help avoid unfounded judgements and misleading statements. Firstly, we address the problem of measuring energy efficiency in both qualitative and quantitative terms. Secondly, the main methodological problems standing in the way of the construction of EEI are discussed, and a sequence of actions is proposed to tackle them in an ordered fashion. Finally, two key topics are discussed in detail: the links between energy efficiency and energy savings, and the border between energy efficiency improvement and renewable sources promotion.
Continuous Time Model Estimation
Carl Chiarella; Shenhuai Gao
2004-01-01
This paper introduces an easy to follow method for continuous time model estimation. It serves as an introduction on how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
Causal Effect Estimation Methods
2014-01-01
The relationship between two popular modeling frameworks for causal inference from observational data, namely the causal graphical model and the potential outcome causal model, is discussed. It is shown how some popular causal effect estimators found in applications of the potential outcome causal model, such as the inverse probability of treatment weighted estimator and the doubly robust estimator, can be obtained by using the causal graphical model. We confine attention to the simple case of binary outcome and treatment vari...
Efficient statistical classification of satellite measurements
Mills, Peter
2012-01-01
Supervised statistical classification is a vital tool for satellite image processing. It is useful not only when a discrete result, such as feature extraction or surface type, is required, but also for continuum retrievals by dividing the quantity of interest into discrete ranges. Because of the high resolution of modern satellite instruments and because of the requirement for real-time processing, any algorithm has to be fast to be useful. Here we describe an algorithm based on kernel estimation called Adaptive Gaussian Filtering that incorporates several innovations to produce superior efficiency as compared to three other popular methods: k-nearest-neighbour (KNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). This efficiency is gained with no compromises: accuracy is maintained, while estimates of the conditional probabilities are returned. These are useful not only to gauge the accuracy of an estimate in the absence of its true value, but also to re-calibrate a retrieved image and...
Efficient Rare-event Simulation for Perpetuities
Blanchet, Jose; Zwart, Bert
2012-01-01
We consider perpetuities of the form D = B_1 exp(Y_1) + B_2 exp(Y_1+Y_2) + ... where the Y_j's and B_j's might be i.i.d. or jointly driven by a suitable Markov chain. We assume that the Y_j's satisfy the so-called Cramér condition with associated root θ* ∈ (0, ∞) and that the tails of the B_j's are appropriately behaved so that D is regularly varying with index θ*. We illustrate by means of an example that the natural state-independent importance sampling estimator obtained by exponentially tilting the Y_j's according to θ* fails to provide an efficient estimator (in the sense of appropriately controlling the relative mean squared error as the tail probability of interest gets smaller). Then, we construct estimators based on state-dependent importance sampling that are rigorously shown to be efficient.
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Generalized Agile Estimation Method
Directory of Open Access Journals (Sweden)
Shilpa Bahlerao
2011-01-01
Full Text Available Agile cost estimation process always possesses research prospects due to lack of algorithmic approaches for estimating cost, size and duration. Existing algorithmic approach i.e. Constructive Agile Estimation Algorithm (CAEA is an iterative estimation method that incorporates various vital factors affecting the estimates of the project. This method has lots of advantages but at the same time has some limitations also. These limitations may due to some factors such as number of vital factors and uncertainty involved in agile projects etc. However, a generalized agile estimation may generate realistic estimates and eliminates the need of experts. In this paper, we have proposed iterative Generalized Estimation Method (GEM and presented algorithm based on it for agile with case studies. GEM based algorithm various project domain classes and vital factors with prioritization level. Further, it incorporates uncertainty factor to quantify the risk of project for estimating cost, size and duration. It also provides flexibility to project managers for deciding on number of vital factors, uncertainty level and project domains thereby maintaining the agility.
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating ...-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step...
The unlikely Carnot efficiency.
Verley, Gatien; Esposito, Massimiliano; Willaert, Tim; Van den Broeck, Christian
2014-09-15
The efficiency of a heat engine is traditionally defined as the ratio of its average output work to its average input heat. Its highest possible value was discovered by Carnot in 1824 and is a cornerstone concept in thermodynamics. It led to the discovery of the second law and to the definition of the Kelvin temperature scale. Small-scale engines operate in the presence of highly fluctuating input and output energy fluxes. They are therefore much better characterized by fluctuating efficiencies. In this study, using the fluctuation theorem, we identify universal features of efficiency fluctuations. While the standard thermodynamic efficiency is, as expected, the most likely value, we find that the Carnot efficiency is, surprisingly, the least likely in the long time limit. Furthermore, the probability distribution for the efficiency assumes a universal scaling form when operating close to equilibrium. We illustrate our results analytically and numerically on two model systems.
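A crude way to see why a fluctuating efficiency has a most likely value near the macroscopic ratio is a toy Monte Carlo in which work and heat are independent Gaussian sums. This caricature deliberately ignores the fluctuation-theorem structure of the paper; every number below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n_traj, n_steps = 20000, 50

# Toy engine: fluctuating input heat and output work accumulated per trajectory
q = rng.normal(1.0, 0.3, (n_traj, n_steps)).sum(axis=1)   # total input heat
w = rng.normal(0.4, 0.3, (n_traj, n_steps)).sum(axis=1)   # total output work
eta = w / q                                               # fluctuating efficiency

# The most likely efficiency sits near the macroscopic ratio <w>/<q> = 0.4
counts, edges = np.histogram(eta, bins=np.linspace(-0.5, 1.5, 81))
mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
```

In this Gaussian toy model the efficiency distribution simply peaks at the mean ratio; the paper's striking result, that the Carnot value becomes the *least* likely efficiency, requires the full large-deviation analysis and does not appear in such a caricature.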
Estimating actual irrigation application by remotely sensed evapotranspiration observations
Droogers, P.; Immerzeel, W.W.; Lorite, I.J.
2010-01-01
Water managers and policy makers need accurate estimates of real (actual) irrigation applications for effective monitoring of irrigation and efficient irrigation management. However, this information is not readily available at field level for larger irrigation areas. An innovative inverse modeling
Heemstra, F.J.
1992-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
DEFF Research Database (Denmark)
Bollerslev, Tim; Todorov, Victor
We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes ...
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Large Deviations of Estimators
Kester, A.D.M.; Kallenberg, W.C.M.
1986-01-01
The performance of a sequence of estimators {T_n} of g(θ) can be measured by its inaccuracy rate −lim inf_{n→∞} n^{-1} log P_θ(‖T_n − g(θ)‖ > ε). For fixed ε > 0, optimality of consistent estimators wrt the ina
Directory of Open Access Journals (Sweden)
Sidi Ali Ould Abdi
2011-01-01
Full Text Available Given a stationary multidimensional spatial process (Z_i = (X_i, Y_i) ∈ ℝ^d × ℝ, i ∈ ℤ^N), we investigate a kernel estimate of the spatial conditional quantile function of the response variable Y_i given the explicative variable X_i. Asymptotic normality of the kernel estimate is obtained when the sample considered is an α-mixing sequence.
DEFF Research Database (Denmark)
Zhang, Tian
2015-01-01
The solar-to-biomass conversion efficiency of natural photosynthesis is between 2.9 and 4.3% for most crops (1, 2). Improving the efficiency of photosynthesis could help increase the appeal of biologically derived fuels and chemicals in comparison with traditional petrochemical processes. One approach to make photosynthesis more efficient is to build hybrid systems that combine inorganic and microbial components to produce specific chemicals. Such hybrid bioinorganic systems lead to improved efficiency and specificity and do not require processed vegetable biomass. They thus prevent harmful competition between biotechnology and the food industry and avoid the environmental perturbation caused by intensive agriculture (3).
Institute of Scientific and Technical Information of China (English)
陈伟平
2003-01-01
Time is limited for each reader, but many readers waste a lot of time on unimportant things, and they read everything at the same speed and in the same way. As a result, they often fail to understand the word and the sentence; they don't know how one sentence relates to another, and how the whole text fits together. They are not reading efficiently. It is high time we held a discussion on efficient reading. The author states that efficient reading involves adequate comprehension at an appropriate reading rate. Pointing out the factors that influence reading rate and comprehension, this article puts forward some suggestions on efficient reading.
Energy Technology Data Exchange (ETDEWEB)
Allmendinger, T.; Bhuyan, B.; Brown, D. N.; Choi, H.; Christ, S.; Covarelli, R.; Davier, M.; Denig, A. G.; Fritsch, M.; Hafner, A.; Kowalewski, R.; Long, O.; Lutz, A. M.; Martinelli, M.; Muller, D. R.; Nugent, I. M.; Lopes Pegna, D.; Purohit, M. V.; Prencipe, E.; Roney, J. M.; Simi, G.; Solodov, E. P.; Telnov, A. V.; Varnes, E.; Waldi, R.; Wang, W. F.; White, R. M.
2012-12-10
We describe several studies to measure the charged track reconstruction efficiency and asymmetry of the BaBar detector. The first two studies measure the tracking efficiency of a charged particle using τ and initial state radiation decays. The third uses the τ decays to study the asymmetry in tracking, the fourth measures the tracking efficiency for low momentum tracks, and the last measures the reconstruction efficiency of K^0_S particles. The first section also examines the stability of the measurements vs. BaBar running periods.
Estimating Resilience Across Landscapes
Directory of Open Access Journals (Sweden)
Garry D. Peterson
2002-06-01
Full Text Available Although ecological managers typically focus on managing local or regional landscapes, they often have little ability to control or predict many of the large-scale, long-term processes that drive changes within these landscapes. This lack of control has led some ecologists to argue that ecological management should aim to produce ecosystems that are resilient to change and surprise. Unfortunately, ecological resilience is difficult to measure or estimate in the landscapes people manage. In this paper, I extend system dynamics approaches to resilience and estimate resilience using complex landscape simulation models. I use this approach to evaluate cross-scale edge, a novel empirical method for estimating resilience based on landscape pattern. Cross-scale edge provides relatively robust estimates of resilience, suggesting that, with some further development, it could be used as a management tool to provide rough and rapid estimates of areas of resilience and vulnerability within a landscape.
Oenema, O.
2015-01-01
There is a need for communications about resource use efficiency and for measures to increase the use efficiency of nutrients in relation to food production. This holds especially for nitrogen. Nitrogen (N) is essential for life and a main nutrient element. It is needed in relatively large quantitie
Donckers, L.; Smit, G.J.M.; Smit, L.T.
2002-01-01
This paper describes the design of an energy-efficient transport protocol for mobile wireless communication. First we describe the metrics used to measure the energy efficiency of transport protocols. We identify several problem areas that prevent TCP/IP from reaching high levels of energy efficienc
Landscaping for energy efficiency
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-04-01
This publication by the National Renewable Energy Laboratory addresses the use of landscaping for energy efficiency. The topics of the publication include minimizing energy expenses; landscaping for a cleaner environment; climate, site, and design considerations; planning landscape; and selecting and planting trees and shrubs. A source list for more information on landscaping for energy efficiency and a reading list are included.
Energy Efficiency Collaboratives
Energy Technology Data Exchange (ETDEWEB)
Li, Michael [US Department of Energy, Washington, DC (United States); Bryson, Joe [US Environmental Protection Agency, Washington, DC (United States)
2015-09-01
Collaboratives for energy efficiency have a long and successful history and are currently used, in some form, in more than half of the states. Historically, many state utility commissions have used some form of collaborative group process to resolve complex issues that emerge during a rate proceeding. Rather than debate the issues through the formality of a commission proceeding, disagreeing parties are sent to discuss issues in a less-formal setting and bring back resolutions to the commission. Energy efficiency collaboratives take this concept and apply it specifically to energy efficiency programs—often in anticipation of future issues as opposed to reacting to a present disagreement. Energy efficiency collaboratives can operate long term and can address the full suite of issues associated with designing, implementing, and improving energy efficiency programs. Collaboratives can be useful to gather stakeholder input on changing program budgets and program changes in response to performance or market shifts, as well as to provide continuity while regulators come and go, identify additional energy efficiency opportunities and innovations, assess the role of energy efficiency in new regulatory contexts, and draw on lessons learned and best practices from a diverse group. Details about specific collaboratives in the United States are in the appendix to this guide. Collectively, they demonstrate the value of collaborative stakeholder processes in producing successful energy efficiency programs.
Directory of Open Access Journals (Sweden)
D. Sümeyra Demirkıran
2014-03-01
Full Text Available The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is practiced for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. In order to estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini" (ATYT) books are used. Bone age is found to be on average 2 years older than chronologic age, especially in puberty, according to the forensic age estimations described in the ATYT book. For age estimation from teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no fully accurate method has been found. Histopathological studies have been done on bone marrow cellularity and dermis cells. No correlation was found between histopathological findings and chronologic age. Current age estimation methods raise important ethical and legal issues, especially in the teenage period. It is therefore necessary to prepare bone age atlases compatible with our society by collecting the findings of studies in Turkey. Another recommendation is for courts to pay particular attention to age-raising trials of teenage women and to give special emphasis to birth and population records.
Assessing efficiency in banking
Directory of Open Access Journals (Sweden)
Knežević Snežana
2012-09-01
Full Text Available The paper is an attempt to assess productivity and efficiency on the basis of the information found in financial statements and operating evidence, as well as through implementation of the DEA method. The definition of both input and output in banking is absolutely clear; however, an adequate analysis of efficiency in banking requires that the right combinations of input and output be selected. Every company has its own principles to implement in its operations. One of the most important is surely the efficiency principle. Relevant academic literature offers various combinations of input and output in testing bank efficiency. The developing countries will find it highly important to monitor bank efficiency and compare it to the countries in the region.
Semi-blind Channel Estimator for OFDM-STC
Institute of Scientific and Technical Information of China (English)
WU Yun; LUO Han-wen; SONG Wen-tao; HUANG Jian-guo
2007-01-01
Channel state information of an OFDM-STC system is required for maximum likelihood decoding. A subspace-based semi-blind method was proposed for estimating the channels of OFDM-STC systems. The channels are first estimated blindly, up to an ambiguity parameter, utilizing the natural structure of STC, irrespective of the underlying signal constellations. Furthermore, a method was proposed to resolve the ambiguity by using a few pilot symbols. The simulation results show the proposed semi-blind estimator can achieve higher spectral efficiency and provide improved estimation performance compared to the non-blind estimator.
Ahmad, Mukhtar
2012-01-01
State estimation is one of the most important functions in power system operation and control. This area is concerned with the overall monitoring, control, and contingency evaluation of power systems. It is mainly aimed at providing a reliable estimate of system voltages. State estimator information flows to control centers, where critical decisions are made concerning power system design and operations. This valuable resource provides thorough coverage of this area, helping professionals overcome challenges involving system quality, reliability, security, stability, and economy.Engineers are
Multidimensional kernel estimation
Milosevic, Vukasin
2015-01-01
Kernel estimation is one of the non-parametric methods used for estimation of a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option for the 2D case, with the help of the new Delaunay2D class.
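The core of a kernel density estimate is just a sum of scaled kernels centred at the sample points. A minimal NumPy version (rather than the ROOT/RooFit classes named above) shows the idea; the bandwidth rule and sample size are illustrative choices:

```python
import numpy as np

def gaussian_kde_1d(samples, x, bandwidth):
    """Evaluate a 1D Gaussian kernel density estimate at the points x."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi)
    )

rng = np.random.default_rng(5)
samples = rng.normal(0.0, 1.0, 5000)

# Silverman's rule-of-thumb bandwidth for Gaussian-like data
h = 1.06 * samples.std() * len(samples) ** (-1 / 5)

# True N(0,1) density at 0 is 1/sqrt(2*pi) ~ 0.3989
density_at_zero = gaussian_kde_1d(samples, np.array([0.0]), h)[0]
```

The naive sum above costs O(n) per evaluation point, which is exactly the slowness the project addresses; tree-based storage lets nearby samples be aggregated so each evaluation touches far fewer terms.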
Robust global motion estimation
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
A global motion estimation method based on robust statistics is presented in this paper. Using tracked feature points instead of all image pixels to estimate the parameters speeds up the process. To further speed up the process and avoid numerical instability, an alternative description of the problem is given, and three types of solution to the problem are compared. By using a two-step process, the robustness of the estimator is also improved. Automatic initial value selection is an advantage of this method. The proposed approach is illustrated by a set of examples, which show good results at high speed.
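A minimal sketch of robust parameter estimation from tracked feature points, assuming a pure translation model and Huber weights in an IRLS loop (the paper's actual motion model and robust estimator may differ):

```python
import numpy as np

def robust_translation(p, q, iters=20, delta=1.0):
    """Estimate a global translation t with q ≈ p + t from point pairs,
    via iteratively reweighted least squares with Huber weights."""
    r = q - p                        # per-point residual translations, shape (n, 2)
    t = np.median(r, axis=0)         # robust initial value
    for _ in range(iters):
        e = np.linalg.norm(r - t, axis=1)
        w = np.where(e <= delta, 1.0, delta / np.maximum(e, 1e-12))  # Huber weights
        t = (w[:, None] * r).sum(axis=0) / w.sum()
    return t

rng = np.random.default_rng(2)
p = rng.uniform(0, 100, size=(60, 2))
t_true = np.array([3.0, -1.5])
q = p + t_true + rng.normal(scale=0.05, size=p.shape)
q[:10] += rng.uniform(20, 40, size=(10, 2))   # 10 grossly wrong tracks (outliers)
print(robust_translation(p, q).round(2))       # close to [3.0, -1.5]
```

A plain least-squares mean of the residuals would be pulled several pixels off by the outlier tracks; the reweighting keeps the estimate near the inlier consensus.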
Stagewise generalized estimating equations with grouped variables.
Vaughan, Gregory; Aseltine, Robert; Chen, Kun; Yan, Jun
2017-02-13
Forward stagewise estimation is a revived slow-brewing approach for model building that is particularly attractive in dealing with complex data structures for both its computational efficiency and its intrinsic connections with penalized estimation. Under the framework of generalized estimating equations, we study general stagewise estimation approaches that can handle clustered data and non-Gaussian/non-linear models in the presence of prior variable grouping structure. As the grouping structure is often not ideal in that even the important groups may contain irrelevant variables, the key is to simultaneously conduct group selection and within-group variable selection, that is, bi-level selection. We propose two approaches to address the challenge. The first is a bi-level stagewise estimating equations (BiSEE) approach, which is shown to correspond to the sparse group lasso penalized regression. The second is a hierarchical stagewise estimating equations (HiSEE) approach to handle more general hierarchical grouping structure, in which each stagewise estimation step itself is executed as a hierarchical selection process based on the grouping structure. Simulation studies show that BiSEE and HiSEE yield competitive model selection and predictive performance compared to existing approaches. We apply the proposed approaches to study the association between the suicide-related hospitalization rates of the 15-19 age group and the characteristics of the school districts in the State of Connecticut.
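The plain forward stagewise idea (tiny repeated steps on the predictor most correlated with the current residual) can be sketched for ordinary linear regression; this toy deliberately ignores the GEE, grouping, and bi-level selection machinery of BiSEE/HiSEE:

```python
import numpy as np

def forward_stagewise(X, y, step=0.01, n_steps=2000):
    """Plain forward stagewise linear regression: at each step, nudge the
    coefficient of the predictor most correlated with the residual."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()
    for _ in range(n_steps):
        corr = X.T @ r
        j = int(np.argmax(np.abs(corr)))
        delta = step * np.sign(corr[j])
        beta[j] += delta
        r -= delta * X[:, j]
    return beta

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
X /= np.linalg.norm(X, axis=0)           # standardize columns to unit norm
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ beta_true + 0.01 * rng.normal(size=200)
print(forward_stagewise(X, y).round(1))  # irrelevant coefficients stay near 0
```

The slow, incremental updates are what give stagewise methods their lasso-like regularization path; stopping early yields a sparser model.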
Estimation of scale parameters of logistic distribution by linear functions of sample quantiles
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
The large-sample estimation of the standard deviation of the logistic distribution employs the asymptotically best linear unbiased estimators based on sample quantiles. The sample quantiles are chosen from a pair of single spacings. Finally, a table of the variances and efficiencies of the estimator for 5 ≤ n ≤ 65 is provided, and a comparison is made with other linear estimators.
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related...
Estimation of waves and ship responses using onboard measurements
DEFF Research Database (Denmark)
Montazeri, Najmeh
This thesis focuses on estimation of waves and ship responses using ship-board measurements. This is useful for development of operational safety and performance efficiency in connection with the broader concept of onboard decision support systems. Estimation of sea state is studied using a set...
How irrigation affects soil erosion estimates of RUSLE2
RUSLE2 is a robust and computationally efficient conservation planning tool that estimates soil, climate, and land management effects on sheet and rill erosion and sediment delivery from hillslopes, and also estimates the size distribution and clay enrichment of sediment delivered to the channel sys...
Tail index and quantile estimation with very high frequency data
J. Daníelsson (Jón); C.G. de Vries (Casper)
1997-01-01
A precise estimation of the tail shape of forex returns is of critical importance for proper risk assessment. We improve upon the efficiency of conventional estimators that rely on a first order expansion of the tail shape, by using the second order expansion. Here we advocate a moments
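For context, the classic first-order approach can be illustrated with the Hill estimator of the tail index (the paper refines such estimators with a second-order expansion; this sketch is not the authors' moment estimator):

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest observations:
    1/alpha is the mean log-excess over the (k+1)-th order statistic."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order statistics
    gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))  # estimate of 1/alpha
    return 1.0 / gamma

rng = np.random.default_rng(5)
alpha_true = 3.0
# rng.pareto draws Lomax variates; adding 1 gives a classical Pareto with x_min = 1
x = rng.pareto(alpha_true, size=100_000) + 1.0
print(round(hill_tail_index(x, k=2000), 1))  # near 3.0
```

The choice of k trades bias against variance, which is exactly where second-order refinements of the tail expansion pay off.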
Estimation of food consumption
Energy Technology Data Exchange (ETDEWEB)
Callaway, J.M. Jr.
1992-04-01
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Bridged Race Population Estimates
U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...
Directory of Open Access Journals (Sweden)
Douanla Tayo Lionel
2015-08-01
Full Text Available This study aims at identifying the determinants of health expenditure efficiency over the period 2005-2011 using a Tobit panel data approach based on DEA efficiency scores. The study covered 150 countries: 45 high income, 40 upper middle income, 36 lower middle income and 29 low income. The estimated results show that carbon dioxide emissions, gross domestic product per capita, improvement in corruption, the age composition of the population, population density and government effectiveness are significant determinants of health expenditure efficiency. Thus, low income countries should promote green growth, and all income groups should fight intensively against poverty.
Performance Analysis of Software Effort Estimation Models Using Neural Networks
Directory of Open Access Journals (Sweden)
P.Latha
2013-08-01
Full Text Available Software effort estimation involves estimating the effort required to develop software. Cost and schedule overruns occur in software development because of wrong estimates made during the initial stage of development. Proper estimation is essential for successful completion of software development. Many estimation techniques are available, among which neural-network-based techniques play a prominent role. The back propagation network is the most widely used architecture. The ELMAN neural network, a recurrent network, can be used on par with the back propagation network. For a good predictor system, the difference between estimated effort and actual effort should be as low as possible. Data from historic NASA projects are used for training and testing. The experimental results confirm that the back propagation algorithm is more efficient than the Elman neural network.
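The "difference between estimated effort and actual effort" is usually summarized with metrics such as MMRE and PRED(25). A small self-contained sketch (the project data below are made up for illustration):

```python
import numpy as np

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error, a standard effort-estimation metric."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.mean(np.abs(actual - estimated) / actual)

def pred(actual, estimated, q=0.25):
    """Fraction of projects whose relative error is within q (PRED(25) by default)."""
    actual = np.asarray(actual, dtype=float)
    re = np.abs(actual - np.asarray(estimated, dtype=float)) / actual
    return np.mean(re <= q)

actual = [120, 80, 200, 45]        # person-hours, illustrative only
estimated = [100, 90, 210, 70]
print(round(mmre(actual, estimated), 3), pred(actual, estimated))  # → 0.224 0.75
```

Lower MMRE and higher PRED(25) indicate a better predictor, which is how competing networks such as back propagation and Elman are typically compared.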
On Frequency Domain Models for TDOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll
2015-01-01
... of a much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number of data points, and a known number of harmonics. The more general method only relies on the source signal being periodic and is, therefore, able to outperform the cross-correlation method in terms of estimation accuracy on both synthetic and real-world data. The simulation code is available online.
Estimation of OCDD degradation rate in soil
Institute of Scientific and Technical Information of China (English)
ZHAO Xing-ru; ZHENG Ming-hui; ZHANG Bing; QIAN Yong; XU Xiao-bai
2005-01-01
The current concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) were determined in soils contaminated with the Chinese technical product sodium pentachlorophenate (Na-PCP). The estimated half-life of octachlorodioxin (OCDD) was about 14 years in contaminated soils, based on the local historical record and a mass balance calculation over the past 43 years (1960-2003). The isomer profiles remained the same whether in paddy field soil or riverbank soil. The results indicated that congener-specific information was efficient in estimating the fate of PCDD/Fs in contaminated soils.
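A half-life translates into a first-order decay rate via k = ln 2 / t½. As a hedged arithmetic sketch using the figures quoted above (a 14-year half-life over the 43-year record):

```python
import math

def decay_rate_from_half_life(t_half):
    """First-order decay rate constant k from a half-life."""
    return math.log(2) / t_half

def remaining_fraction(k, t):
    """Fraction of the initial amount left after time t under C(t) = C0*exp(-k*t)."""
    return math.exp(-k * t)

k = decay_rate_from_half_life(14.0)           # OCDD half-life of ~14 years
print(round(remaining_fraction(k, 43.0), 3))  # → 0.119, i.e. ~12% remains after 43 years
```

This ignores ongoing inputs and transport between soils, which is why the study needed the historical record and a mass balance rather than the bare decay law.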
Statistical Model-Based Face Pose Estimation
Institute of Scientific and Technical Information of China (English)
GE Xinliang; YANG Jie; LI Feng; WANG Huahua
2007-01-01
A robust face pose estimation approach is proposed using a statistical face shape model, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function is constructed between face image and face pose by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.
Adaptive vehicle motion estimation and prediction
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
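A minimal constant-velocity Kalman filter illustrates the motion-estimation core that adaptive IMM schemes build on. This is a single fixed model, not the AIMM algorithm of the paper, and all parameter values are illustrative:

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state = [position, velocity].
dt, q, r = 0.1, 0.5, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # discretized process noise
                  [dt**2 / 2, dt]])
R = np.array([[r]])

x = np.zeros(2)
P = np.eye(2) * 10.0

rng = np.random.default_rng(6)
true_v = 5.0
for k in range(200):
    z = true_v * dt * (k + 1) + rng.normal(scale=1.0)  # noisy position measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(x.round(1))  # position ≈ 100, velocity ≈ 5
```

An interacting multiple model tracker runs several such filters with different motion models in parallel and mixes their estimates; the "adaptive" part of AIMM adjusts the models and switching probabilities online.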
Coal Moisture Estimation in Power Plant Mills
DEFF Research Database (Denmark)
Andersen, Palle; Bendtsen, Jan Dimon; Pedersen, Tom S.;
2009-01-01
Knowledge of the moisture content in raw coal fed to a power plant coal mill is important for efficient operation of the mill. The moisture is commonly measured approximately once a day using offline chemical analysis methods; however, it would be advantageous for the dynamic operation of the plant if an on-line estimate were available. In this paper we propose such an on-line estimator (an extended Kalman filter) that uses only existing measurements. The scheme is tested on actual coal mill data collected during a one-month operating period, and it is found that the daily measured moisture...
Single snapshot DOA estimation
Häcker, P.; Yang, B.
2010-01-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation...
Stochastic Frontier Estimation of a CES Cost Function: The Case of Higher Education in Britain.
Izadi, Hooshang; Johnes, Geraint; Oskrochi, Reza; Crouchley, Robert
2002-01-01
Examines the use of stochastic frontier estimation of constant elasticity of substitution (CES) cost function to measure differences in efficiency among British universities. (Contains 28 references.) (PKP)
Efficient simulation of tail probabilities of sums of correlated lognormals
DEFF Research Database (Denmark)
Asmussen, Søren; Blanchet, José; Juneja, Sandeep;
We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be eff...
Minority Serving College and University Cost Efficiencies
Directory of Open Access Journals (Sweden)
G. Thomas Sav
2012-01-01
Full Text Available Problem statement: Higher education minority enrollment growth has far outstripped white non-minority growth in the United States. Minority serving colleges and universities have disproportionately attended to that growth and will continue to play a critical role in providing minority educational opportunities in a knowledge-based and globally diverse economy. However, they will face new and challenging budgetary and managerial reforms induced by the global financial crisis. As a result, they will be pressured to operate in the future with greater cost efficiency. Approach: Panel data on minority serving colleges and universities were used along with stochastic frontier analysis to provide cost inefficiency estimates over a four-year academic period. Specification of an inefficiency component contained time-varying institutional characteristics and influences, including a public vs. private ownership control. Results: Minority college and university mean inefficiency was estimated at approximately 1.24, indicating operation about 24% above frontier cost. The study found that institutions achieved inefficiency reductions, or efficiency gains, in 2008-09 compared to 2005-06. The findings suggested that private institutions operated at greater inefficiencies relative to their publicly owned counterparts. However, the private sector laid claim to the most efficient institution, but also the most inefficient one. While the public minority serving colleges showed inefficiency deterioration over time, the findings point to private institution efficiency gains. Conclusion/Recommendations: A literature survey indicated that the study could be the first attempt at providing empirical estimates and subsequent insights into the operating cost efficiencies or inefficiencies of minority serving colleges and universities. The cost inefficiency findings suggested that these institutions did compare favorably in their managerial skills. However, as
Dielectric nanoparticles for the enhancement of OLED light extraction efficiency
Mann, Vidhi; Rastogi, Vipul
2017-03-01
This work reports the use of dielectric nanoparticles placed on the glass substrate to improve the light extraction efficiency of an organic light emitting diode (OLED). The nanoparticles act as a scattering medium for the light trapped in the waveguiding modes of the device. The scattering efficiency of the dielectric nanoparticles has been calculated using Mie theory. Finite difference time domain (FDTD) analysis and simulation estimate the effect of the dielectric nanoparticles on the light extraction efficiency of the OLED. The efficiency depends upon the diameter, interparticle separation and refractive index of the dielectric nanoparticles. It is shown that the dielectric nanoparticle layer can enhance the light extraction efficiency by a factor of 1.7.
Learning efficient correlated equilibria
Borowski, Holly P.
2014-12-15
The majority of the distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, no existing distributed learning algorithms converge to specific correlated equilibria. In this paper, we provide one such algorithm, which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning is incorporating a common random signal into the learning environment.
Efficiency of emergency exercises
Energy Technology Data Exchange (ETDEWEB)
Zander, N. [Bundesamt fuer Strahlenschutz (BfS), Oberschleissheim/Neuherberg (Germany); Sogalla, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheim (GRS) mbH, Koeln (Germany)
2011-12-15
In order to cope with beyond-design-basis accidents in German nuclear power plants that could lead to significant radiological consequences, both the utilities and the competent authorities maintain emergency organisations. The efficiency, capacity for teamwork and preparedness of such organisations should be tested by regular, efficient exercise activities. Such activities can suitably be based on scenarios that provide challenging tasks for all units of the respective emergency organisation. Thus, the demonstration and further development of the efficiency of the respective organisational structures, including their ability to collaborate, is promoted. (orig.)
The Efficient Windows Collaborative
Energy Technology Data Exchange (ETDEWEB)
Petermann, Nils
2006-03-31
The Efficient Windows Collaborative (EWC) is a coalition of manufacturers, component suppliers, government agencies, research institutions, and others who partner to expand the market for energy efficient window products. Funded through a cooperative agreement with the U.S. Department of Energy, the EWC provides education, communication and outreach in order to transform the residential window market to 70% energy efficient products by 2005. Implementation of the EWC is managed by the Alliance to Save Energy, with support from the University of Minnesota and Lawrence Berkeley National Laboratory.
Two-stage local M-estimation of additive models
Institute of Scientific and Technical Information of China (English)
JIANG JianCheng; LI JianTao
2008-01-01
This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages, and at the same time overcome the disadvantages, of local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
Analysis on Summer Precipitation Efficiency in Shenyang
Institute of Scientific and Technical Information of China (English)
(no author listed)
2011-01-01
[Objective] The research aimed to analyze summer precipitation efficiency in Shenyang. [Method] Using a cloud water resource estimation method based on the vertically accumulated liquid water content observed by a QFW-1 dual-channel microwave radiometer and on rain intensity data at 1-min intervals inverted from particle laser-based optical measurements (Parsivel), the precipitation efficiency in the Shenyang area during July-August 2007 was analyzed. [Result] When the rain inte...
Production and efficiency analysis with R
Behr, Andreas
2015-01-01
This textbook introduces essential topics and techniques in production and efficiency analysis and shows how to apply these methods using the statistical software R. Numerous small simulations lead to a deeper understanding of random processes assumed in the models and of the behavior of estimation techniques. Step-by-step programming provides an understanding of advanced approaches such as stochastic frontier analysis and stochastic data envelopment analysis. The text is intended for master students interested in empirical production and efficiency analysis. Readers are assumed to have a general background in production economics and econometrics, typically taught in introductory microeconomics and econometrics courses.
Galbraith, Craig S.; Merrill, Gregory B.
2015-01-01
We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…
Energy Technology Data Exchange (ETDEWEB)
NONE
2010-07-01
Transport is the sector with the highest final energy consumption and, without significant policy changes, is forecast to remain so. In 2008, the IEA published 25 energy efficiency recommendations, of which four are for the transport sector. The recommendations focus on road transport and include policies on improving tyre energy efficiency, fuel economy standards for both light-duty and heavy-duty vehicles, and eco-driving. Implementation of the recommendations has been weaker in the transport sector than in others. This paper updates the progress made in implementing the transport energy efficiency recommendations in IEA countries since March 2009. In the last year, many countries have moved from 'planning to implement' to 'implementation underway', but none have fully implemented all transport energy efficiency recommendations. The IEA therefore calls for full and immediate implementation of the recommendations.
Meneghelli, Barry J.; Notardonato, William; Fesmire, James E.
2016-01-01
The Cryogenics Test Laboratory, NASA Kennedy Space Center, works to provide practical solutions to low-temperature problems while focusing on long-term technology targets for the energy-efficient use of cryogenics on Earth and in space.
Improving combustion efficiency
Energy Technology Data Exchange (ETDEWEB)
Bulsari, A.; Wemberg, A.; Multas, A. [Nonlinear Solutions Oy (Finland)
2009-06-15
The paper describes how nonlinear models are used to improve the efficiency of coal combustion while keeping NOx and other emissions under desired limits in the Naantali 2 boiler of Fortum Power and Heat Oy. 16 refs., 6 figs.
Directory of Open Access Journals (Sweden)
Branka Gvozdenac-Urošević
2010-01-01
Full Text Available Improving energy efficiency can be a powerful tool for achieving sustainable economic development and, above all, for reducing energy consumption and environmental pollution at the national level. Unfortunately, energy efficiency is difficult to conceptualize and there is no single commonly accepted definition. Because of that, measuring achieved energy efficiency and its impact on a national or regional economy is very complicated. Gross Domestic Product (GDP) is often used to assess the financial effects of applied energy efficiency measures at the national and regional levels. Growth in energy consumption per capita leads to similar growth in GDP, but it is desirable for energy consumption to fall while GDP grows. The paper analyzes some standard indicators, and the analysis has been applied to a very large sample, ensuring reliable conclusions. National parameters for 128 countries in 2007 were analyzed, as were parameters for global regions and for Serbia in recent years.
Thermoelectric efficiency of molecular junctions
Perroni, C. A.; Ninno, D.; Cataudella, V.
2016-09-01
The focus of this review is on experimental set-ups and theoretical proposals aimed at enhancing the thermoelectric performance of molecular junctions. In addition to charge conductance, the thermoelectric parameter commonly measured in these systems is the thermopower, which is typically rather low. We review recent experimental outcomes for several junction configurations used to optimize the thermopower. On the other hand, theoretical calculations provide estimates of all the thermoelectric parameters in the linear and non-linear regimes, in particular of the thermoelectric figure of merit and efficiency, completing our knowledge of molecular thermoelectricity. For this reason, the review mainly focuses on theoretical studies analyzing the role not only of electronic, but also of vibrational degrees of freedom. Theoretical results on thermoelectric phenomena in the coherent regime are reviewed with a focus on interference effects, which play a significant role in enhancing the figure of merit. Moreover, we review theoretical studies including the effects of molecular many-body interactions, such as electron-vibration couplings, which typically tend to reduce the efficiency. Since a fine tuning of many parameters and coupling strengths is required to optimize the thermoelectric conversion in molecular junctions, newly proposed set-ups are discussed in the conclusions.
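The quantities mentioned can be made concrete: the figure of merit is ZT = S²σT/κ, and the maximum conversion efficiency follows from ZT and the reservoir temperatures. A hedged numerical sketch with made-up material parameters:

```python
def figure_of_merit(seebeck, sigma, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck**2 * sigma * T / kappa

def max_efficiency(ZT, T_hot, T_cold):
    """Maximum thermoelectric conversion efficiency for a given ZT:
    eta = eta_Carnot * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + T_cold/T_hot)."""
    eta_carnot = 1.0 - T_cold / T_hot
    m = (1.0 + ZT) ** 0.5
    return eta_carnot * (m - 1.0) / (m + T_cold / T_hot)

# Illustrative numbers only: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K)
ZT = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(round(ZT, 2), round(max_efficiency(ZT, 400.0, 300.0), 3))  # → 0.8 0.041
```

Even a respectable ZT near 1 converts only a few percent of the heat flow across a 100 K difference, which is why the review's emphasis is on strategies for raising ZT.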
Efficient incremental relaying
Fareed, Muhammad Mehboob
2013-07-01
We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.
MAP Estimation, Message Passing, and Perfect Graphs
Jebara, Tony S
2012-01-01
Efficiently finding the maximum a posteriori (MAP) configuration of a graphical model is an important problem which is often implemented using message passing algorithms. The optimality of such algorithms is only well established for singly-connected graphs and other limited settings. This article extends the set of graphs where MAP estimation is in P and where message passing recovers the exact solution to so-called perfect graphs. This result leverages recent progress in defining perfect graphs (the strong perfect graph theorem), linear programming relaxations of MAP estimation and recent convergent message passing schemes. The article converts graphical models into nand Markov random fields which are straightforward to relax into linear programs. Therein, integrality can be established in general by testing for graph perfection. This perfection test is performed efficiently using a polynomial time algorithm. Alternatively, known decomposition tools from perfect graph theory may be used to prove perfection ...
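The MAP-by-message-passing idea is easiest to see on a chain, where max-product is exact even without the perfect-graph machinery. A small sketch (three binary variables, random log-potentials, checked against brute force):

```python
import numpy as np
from itertools import product

# MAP by max-product (max-sum in the log domain) on a chain x1 - x2 - x3.
rng = np.random.default_rng(7)
unary = rng.normal(size=(3, 2))        # log phi_i(x_i)
pair = rng.normal(size=(2, 2, 2))      # log psi_i(x_i, x_{i+1}) for i = 0, 1

# forward pass: propagate max-messages along the chain
m01 = (unary[0][:, None] + pair[0]).max(axis=0)           # message x1 -> x2
m12 = ((unary[1] + m01)[:, None] + pair[1]).max(axis=0)   # message x2 -> x3

# backward pass: decode the maximizing configuration
x3 = int(np.argmax(unary[2] + m12))
x2 = int(np.argmax(unary[1] + m01 + pair[1][:, x3]))
x1 = int(np.argmax(unary[0] + pair[0][:, x2]))

def score(cfg):
    """Total log-potential of a configuration."""
    return (unary[range(3), cfg].sum()
            + pair[0][cfg[0], cfg[1]] + pair[1][cfg[1], cfg[2]])

best = max(product([0, 1], repeat=3), key=score)
print((x1, x2, x3) == tuple(best))  # True: message passing recovers the MAP
```

On loopy graphs this guarantee is lost in general; the article's contribution is exactly to characterize a larger graph class (perfect graphs) where exact MAP recovery remains tractable.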
Finding the ciliary beating pattern with optimal efficiency
Osterman, Natan
2011-01-01
We introduce a measure for energetic efficiency of biological cilia acting individually or collectively and numerically determine the optimal beating patterns according to this criterion. Maximizing the efficiency of a single cilium leads to curly, often symmetric and somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal coordination is essential for efficient pumping and the highest efficiency is achieved with antiplectic waves. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible op...
Efficient Windows Collaborative
Energy Technology Data Exchange (ETDEWEB)
Nils Petermann
2010-02-28
The project goals covered both the residential and commercial windows markets and involved a range of audiences such as window manufacturers, builders, homeowners, design professionals, utilities, and public agencies. Essential goals included: (1) Creation of 'Master Toolkits' of information that integrate diverse tools, rating systems, and incentive programs, customized for key audiences such as window manufacturers, design professionals, and utility programs. (2) Delivery of education and outreach programs to multiple audiences through conference presentations, publication of articles for builders and other industry professionals, and targeted dissemination of efficient window curricula to professionals and students. (3) Design and implementation of mechanisms to encourage and track sales of more efficient products through the existing Window Products Database as an incentive for manufacturers to improve products and participate in programs such as NFRC and ENERGY STAR. (4) Development of utility incentive programs to promote more efficient residential and commercial windows. Partnership with regional and local entities on the development of programs and customized information to move the market toward the highest performing products. An overarching project goal was to ensure that different audiences adopt and use the developed information, design and promotion tools and thus increase the market penetration of energy efficient fenestration products. In particular, a crucial success criterion was to move gas and electric utilities to increase the promotion of energy efficient windows through demand side management programs as an important step toward increasing the market share of energy efficient windows.
Single snapshot DOA estimation
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
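A single-snapshot baseline is the conventional (Bartlett) beamformer, which scans steering vectors against the one available snapshot. A hedged sketch for a uniform linear array; the parameters are illustrative and this is not one of the paper's more sophisticated estimators:

```python
import numpy as np

def steering(theta_deg, n_sensors, d=0.5):
    """ULA steering vector; element spacing d is in wavelengths."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

n = 16
theta_true = 12.0
rng = np.random.default_rng(8)
# one snapshot: a single source plus a little complex noise
x = steering(theta_true, n) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# conventional beamformer: scan candidate angles, pick the power peak
grid = np.arange(-90.0, 90.25, 0.25)
power = np.array([np.abs(np.vdot(steering(t, n), x))**2 for t in grid])
theta_hat = grid[np.argmax(power)]
print(theta_hat)  # close to 12.0
```

With one snapshot and a single strong source this works well; the hard cases the paper studies, closely spaced or correlated targets with large power differences, are precisely where such simple scans break down.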
Contingent kernel density estimation.
Directory of Open Access Journals (Sweden)
Scott Fortmann-Roe
Full Text Available Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce the resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of error. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, bias arises because the size of the error can vary among points: some subset of points can be known to have smaller error than another subset, or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to the changing structure and magnitude of the error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked through to demonstrate the utility of the method.
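A minimal sketch of the idea, under the assumption of one-dimensional Gaussian kernels: the kernel attached to each observation is scaled by that observation's own error magnitude (here, hypothetical half-widths of the source areas) rather than by a single global bandwidth. The function name and data are illustrative, not taken from the paper.

```python
import numpy as np

def contingent_kde(x_eval, points, widths):
    # Gaussian KDE with a per-observation bandwidth: each observation's
    # kernel is scaled by that point's own error magnitude
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))[:, None]
    z = (x_eval - points[None, :]) / widths[None, :]
    k = np.exp(-0.5 * z ** 2) / (np.sqrt(2.0 * np.pi) * widths[None, :])
    return k.mean(axis=1)

rng = np.random.default_rng(1)
pts = rng.normal(0.0, 1.0, 500)
# hypothetical per-point error scales, e.g. half-widths of the source areas
widths = rng.uniform(0.2, 0.6, 500)
xs = np.linspace(-6.0, 6.0, 601)
dens = contingent_kde(xs, pts, widths)
```

Because each kernel is itself a density, the mixture still integrates to one; only the local smoothing varies from point to point.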
Measuring cardiac efficiency using PET/MRI
Energy Technology Data Exchange (ETDEWEB)
Gullberg, Grand [Lawrence Berkeley National Laboratory (United States); Aparici, Carina Mari; Brooks, Gabriel [University of California San Francisco (United States); Liu, Jing; Guccione, Julius; Saloner, David; Seo, Adam Youngho; Ordovas, Karen Gomes [Lawrence Berkeley National Laboratory (United States)
2015-05-18
Heart failure (HF) is a complex syndrome that is projected by the American Heart Association to cost $160 billion by 2030. In HF, significant metabolic changes and structural remodeling lead to reduced cardiac efficiency. A normal heart is approximately 20-25% efficient, measured by the ratio of work to oxygen utilization (1 ml oxygen = 21 joules). The heart requires rapid production of ATP: there is complete turnover of ATP every 10 seconds, with 90% of ATP produced by mitochondrial oxidative metabolism from substrates of approximately 30% glucose and 65% fatty acids. In our preclinical PET/MRI studies in normal rats, we showed a negative correlation between work and the influx rate constant for 18FDG, confirming that glucose is not the preferred substrate at rest. However, even though fatty acid provides 9 kcal/gram compared to 4 kcal/gram for glucose, in HF the preferred energy source is glucose. PET/MRI offers the potential to study this maladaptive metabolic mechanism by measuring work in a region of myocardial tissue simultaneously with measures of oxygen utilization, glucose, and fatty acid metabolism, and to study cardiac efficiency in the etiology of and therapies for HF. MRI is used to measure strain, and a finite element mechanical model using pressure measurements is used to estimate myofiber stress. The integral of stress times strain provides a measure of work which, divided by energy utilization (estimated from the production of 11CO2 after intravenous injection of 11C-acetate), provides a measure of cardiac efficiency. Our project involves translating our preclinical research to the clinical application of measuring cardiac efficiency in patients. Using PET/MRI to develop technologies for studying myocardial efficiency in patients provides an opportunity to relate the cardiac work of specific tissue regions to metabolic substrates and to measure the heterogeneity of LV efficiency.
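The efficiency ratio described above can be illustrated with toy arithmetic; the work and oxygen figures below are assumed for illustration, and only the 21 J per ml O2 conversion comes from the text.

```python
# Toy figures, not patient data; only the 21 J per ml O2 conversion
# comes from the abstract above.
work_per_beat_j = 1.0                # assumed regional mechanical work per beat
o2_per_beat_ml = 0.24                # assumed oxygen consumption per beat
energy_per_beat_j = 21.0 * o2_per_beat_ml
efficiency = work_per_beat_j / energy_per_beat_j   # about 0.20, i.e. ~20%
```

An efficiency near 0.20 sits inside the 20-25% normal range quoted above; in HF, the same work would require more oxygen-derived energy, driving the ratio down.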
Entropy estimates for simple random fields
DEFF Research Database (Denmark)
Forchhammer, Søren; Justesen, Jørn
1995-01-01
We consider the problem of determining the maximum entropy of a discrete random field on a lattice subject to certain local constraints on symbol configurations. The results are expected to be of interest in the analysis of digitized images and two dimensional codes. We shall present some examples...... of binary and ternary fields with simple constraints. Exact results on the entropies are known only in a few cases, but we shall present close bounds and estimates that are computationally efficient...
Estimating the Upcrossings Index
Sebastião, João Renato; Ferreira, Helena; Pereira, Luísa
2012-01-01
For stationary sequences, under general local and asymptotic dependence restrictions, any limiting point process for time normalized upcrossings of high levels is a compound Poisson process, i.e., there is a clustering of high upcrossings, where the underlying Poisson points represent cluster positions, and the multiplicities correspond to cluster sizes. For such classes of stationary sequences there exists the upcrossings index $\\eta,$ $0\\leq \\eta\\leq 1,$ which is directly related to the extremal index $\\theta,$ $0\\leq \\theta\\leq 1,$ for suitable high levels. In this paper we consider the problem of estimating the upcrossings index $\\eta$ for a class of stationary sequences satisfying a mild oscillation restriction. For the proposed estimator, properties such as consistency and asymptotic normality are studied. Finally, the performance of the estimator is assessed through simulation studies for autoregressive processes and case studies in the fields of environment and finance.
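A naive runs-type sketch of the quantity being estimated, not the paper's estimator: treat upcrossings of a high level separated by more than r time steps as belonging to different clusters, and take the ratio of clusters to upcrossings as a finite-sample stand-in for the reciprocal mean cluster size. The AR(1) series, level, and run length r are all illustrative assumptions.

```python
import numpy as np

def upcrossings(x, u):
    # indices t with x[t] <= u < x[t+1]
    return np.flatnonzero((x[:-1] <= u) & (x[1:] > u))

def runs_eta(x, u, r):
    # clusters = upcrossings separated by more than r steps;
    # eta-hat = (#clusters) / (#upcrossings) = reciprocal mean cluster size
    idx = upcrossings(x, u)
    clusters = 1 + np.count_nonzero(np.diff(idx) > r)
    return clusters / len(idx)

# AR(1) sample path as a stand-in for a stationary sequence
rng = np.random.default_rng(2)
n, phi = 20000, 0.7
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
eta_hat = runs_eta(x, np.quantile(x, 0.98), r=10)
```

By construction the estimate lies in (0, 1]; the paper's contribution is an estimator with provable consistency and asymptotic normality under its dependence conditions, which this sketch does not claim.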
Increased Statistical Efficiency in a Lognormal Mean Model
Directory of Open Access Journals (Sweden)
Grant H. Skrepnek
2014-01-01
Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample’s mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample’s coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
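For context, the usual maximum likelihood baseline mentioned above can be sketched as follows; it exploits E[X] = exp(mu + sigma^2/2) for lognormal X. The paper's improved coefficient-of-variation-based estimator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 1.0, 0.8
x = rng.lognormal(mu, sigma, 5000)

log_x = np.log(x)
mu_hat, s2_hat = log_x.mean(), log_x.var()
# usual ML point estimate of the lognormal mean E[X] = exp(mu + sigma^2 / 2)
ml_mean = np.exp(mu_hat + 0.5 * s2_hat)
naive_mean = x.mean()                     # plain sample mean, for contrast
true_mean = np.exp(mu + 0.5 * sigma ** 2)
```

Both estimators target the same mean; the paper's point is that neither the sample mean nor this ML form is most efficient once the sample's coefficient of variation is also exploited.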
Energy-efficient Localization for Virtual Fencing
Jurdak, Raja; Corke, Peter; Dharman, Dhinesh; Salagnac, Guillaume; Crossman, Chris; Valencia, Philip; Bishop-Hurley, Greg
2010-01-01
This poster addresses the tradeoff between energy consumption and localization performance in a mobile sensor network application. It focuses on combining GPS location with more energy-efficient location sensors to bound position estimate uncertainty in order to prolong node lifetime. The focus is on an outdoor location monitoring application for tracking cattle using smart collars that contain wireless sensor nodes and GPS modules. We use empirically-derived models to...
Efficient view synthesis from uncalibrated stereo
Braspenning, Ralph; Op de Beeck, Marc
2006-02-01
For multiview auto-stereoscopic 3D displays, available stereo content needs to be converted to multiview content. In this paper we present a method to efficiently synthesize new views based on the two existing views from the stereo input. This method can be implemented in real-time and is also capable of handling uncalibrated stereo input. Good performance is shown compared to state-of-the-art disparity estimation algorithms and view rendering methods.
Directory of Open Access Journals (Sweden)
Linda J. Blumberg PhD
2016-04-01
Full Text Available Time lags in receiving data from long-standing, large federal surveys complicate real-time estimation of the coverage effects of full Affordable Care Act (ACA) implementation. Fast-turnaround household surveys fill some of the void in data on recent changes to insurance coverage, but they lack the historical data that allow analysts to account for trends that predate the ACA, economic fluctuations, and earlier public program expansions when predicting how many people would be uninsured without comprehensive health care reform. Using data from the Current Population Survey (CPS) from 2000 to 2012 and the Health Reform Monitoring Survey (HRMS) data for 2013 and 2015, this article develops an approach to estimate the number of people who would be uninsured in the absence of the ACA and isolates the change in coverage as of March 2015 that can be attributed to the ACA. We produce counterfactual forecasts of the number of uninsured absent the ACA for 9 age-income groups and compare these estimates with 2015 estimates based on HRMS relative coverage changes applied to CPS-based population estimates. As of March 2015, we find the ACA has reduced the number of uninsured adults by 18.1 million compared with the number who would have been uninsured at that time had the law not been implemented. That decline represents a 46% reduction in the number of nonelderly adults without insurance. The approach developed here can be applied to other federal data and timely surveys to provide a range of estimates of the overall effects of reform.
Ability Estimation for Conventional Tests.
Kim, Jwa K.; Nicewander, W. Alan
1993-01-01
Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…
Sampling and kriging spatial means: efficiency and conditions.
Wang, Jin-Feng; Li, Lian-Fa; Christakos, George
2009-01-01
Sampling and estimation of geographical attributes that vary across space (e.g., area temperature, urban pollution level, provincial cultivated land, regional population mortality and state agricultural production) are common yet important constituents of many real-world applications. Spatial attribute estimation and the associated accuracy depend on the available sampling design and statistical inference modelling. In the present work, our concern is areal attribute estimation, in which the spatial sampling and Kriging means are compared in terms of mean values, variances of mean values, comparative efficiencies and underlying conditions. Both the theoretical analysis and the empirical study show that the mean Kriging technique outperforms other commonly-used techniques. Estimation techniques that account for spatial correlation (dependence) are more efficient than those that do not, whereas the comparative efficiencies of the various methods change with surface features. The mean Kriging technique can be applied to other spatially distributed attributes, as well.
Sampling and Kriging Spatial Means: Efficiency and Conditions
Directory of Open Access Journals (Sweden)
George Christakos
2009-07-01
Full Text Available Sampling and estimation of geographical attributes that vary across space (e.g., area temperature, urban pollution level, provincial cultivated land, regional population mortality and state agricultural production) are common yet important constituents of many real-world applications. Spatial attribute estimation and the associated accuracy depend on the available sampling design and statistical inference modelling. In the present work, our concern is areal attribute estimation, in which the spatial sampling and Kriging means are compared in terms of mean values, variances of mean values, comparative efficiencies and underlying conditions. Both the theoretical analysis and the empirical study show that the mean Kriging technique outperforms other commonly-used techniques. Estimation techniques that account for spatial correlation (dependence) are more efficient than those that do not, whereas the comparative efficiencies of the various methods change with surface features. The mean Kriging technique can be applied to other spatially distributed attributes, as well.
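The "mean Kriging" idea can be sketched, under an assumed exponential covariance model, as the generalized-least-squares estimator of a constant mean: weights proportional to C^{-1}1, which by construction cannot have larger variance than the equally weighted sample mean. The covariance model and simulated field below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
coords = rng.uniform(0.0, 10.0, (n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
covm = np.exp(-d / 2.0) + 1e-8 * np.eye(n)  # assumed exponential covariance
L = np.linalg.cholesky(covm)
z = 5.0 + L @ rng.standard_normal(n)        # correlated field, true mean 5

ones = np.ones(n)
ci1 = np.linalg.solve(covm, ones)
w = ci1 / (ones @ ci1)                      # GLS / kriging-of-the-mean weights
gls_mean = w @ z                            # correlation-aware mean estimate
srs_mean = z.mean()                         # simple (equal-weight) mean
gls_var = 1.0 / (ones @ ci1)                # variance of the GLS mean
srs_var = ones @ covm @ ones / n ** 2       # variance of the simple mean
```

The variance inequality gls_var <= srs_var holds for any positive definite covariance (Gauss-Markov), which is the sense in which correlation-aware estimators are "more efficient" in the abstract above.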
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
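A sketch of the GEE iteration for the simplest case only (Gaussian outcome, identity link, exchangeable working correlation), fit to simulated clustered data. Real analyses would use a library implementation with robust standard errors, which this sketch omits; all names and simulation settings are illustrative.

```python
import numpy as np

def gee_gaussian_exchangeable(y, X, groups, iters=20):
    # GEE sketch for the Gaussian/identity special case: alternate a moment
    # update of the exchangeable correlation rho with a GLS step that uses
    # the block working covariance
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ids = np.unique(groups)
    rho = 0.0
    for _ in range(iters):
        resid = y - X @ beta
        sigma2 = resid @ resid / len(y)
        num, pairs = 0.0, 0
        for g in ids:                       # moment estimate of rho
            r = resid[groups == g]
            num += (r.sum() ** 2 - (r ** 2).sum()) / 2.0
            pairs += len(r) * (len(r) - 1) // 2
        rho = num / (pairs * sigma2)
        p = X.shape[1]
        XtWX, XtWy = np.zeros((p, p)), np.zeros(p)
        for g in ids:                       # GLS step, group by group
            sel = groups == g
            m = int(sel.sum())
            V = sigma2 * ((1.0 - rho) * np.eye(m) + rho * np.ones((m, m)))
            Vi = np.linalg.inv(V)
            XtWX += X[sel].T @ Vi @ X[sel]
            XtWy += X[sel].T @ Vi @ y[sel]
        beta = np.linalg.solve(XtWX, XtWy)
    return beta, rho

# simulated clustered data: random group intercepts induce rho = 0.5
rng = np.random.default_rng(7)
G, m = 200, 4
groups = np.repeat(np.arange(G), m)
x1 = rng.standard_normal(G * m)
X = np.column_stack([np.ones(G * m), x1])
y = 1.0 + 2.0 * x1 + np.repeat(rng.standard_normal(G), m) \
    + rng.standard_normal(G * m)
beta, rho = gee_gaussian_exchangeable(y, X, groups)
```

With equal random-intercept and error variances the true within-group correlation is 0.5, and the recovered beta is close to the generating coefficients (1, 2).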
Estimating exponential scheduling preferences
DEFF Research Database (Denmark)
Hjorth, Katrine; Börjesson, Maria; Engelson, Leonid
drivers commuting to work in the morning in central Stockholm. The survey contains observations of choices between car and public transport travel alternatives, which differ in terms of departure time, monetary cost, and the distribution of travel time. We develop a discrete choice model to describe......-affine specifications to two benchmarks: The generalised exponential-exponential specification and the conventional α-β-γ specification. As could be expected, exponential preferences are difficult to estimate: The estimated parameters of H and W have large standard errors, and some types of models exhibit severe...
Distribution load estimation - DLE
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A. [VTT Energy, Espoo (Finland)
1996-12-31
The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of the load models is limited to a certain network, because many local circumstances differ from utility to utility and from time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply, as it utilises information that is already available in SCADA systems.
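The conversion and reconciliation steps described above might look like the following sketch; the two-level hourly profile and the SCADA reading are invented for illustration and do not come from the abstract.

```python
import numpy as np

hours = np.arange(8760)
annual_kwh = 4000.0
# assumed two-level load model: larger hourly share between 07 and 22
share = np.where((hours % 24 >= 7) & (hours % 24 < 22), 2.0, 1.0)
share = share / share.sum()
hourly_kw = annual_kwh * share              # kWh per 1 h slot = average kW

# DLE-style reconciliation: rescale one day's model estimates so that
# they match an (assumed) SCADA feeder measurement for that day
day_est = hourly_kw[:24]
scada_day_kwh = 1.1 * day_est.sum()
day_adj = day_est * scada_day_kwh / day_est.sum()
```

The model profile preserves the annual total by construction, and the adjustment step pulls the estimates toward what the SCADA system actually measured.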
Henig Proper Efficient Points and Generalized Henig Proper Efficient Points
Institute of Scientific and Technical Information of China (English)
Jing Hui QIU
2009-01-01
Applying the theory of locally convex spaces to vector optimization, we investigate the relationship between Henig proper efficient points and generalized Henig proper efficient points. In particular, we obtain a necessary and sufficient condition for generalized Henig proper efficient points to be Henig proper efficient points. From this, we derive several convenient criteria for judging Henig proper efficient points.
Estimating solar irradiation in the Arctic
Directory of Open Access Journals (Sweden)
Babar Bilal
2016-01-01
Full Text Available Solar radiation data plays an important role in pre-feasibility studies of solar electricity and/or thermal system installations. Measured solar radiation data is scarcely available due to the high cost of installing and maintaining high quality solar radiation sensors (pyranometers). Indirectly measured radiation data received from geostationary satellites is unreliable at latitudes above 60 degrees due to the resulting flat viewing angle. In this paper, an empirical method to estimate solar radiation based on minimum climatological data is proposed. Eight sites in Norway are investigated, all of which lie above 60° N. The estimations by the model are compared to the ground measured values; a correlation coefficient of 0.88 was found, while the overall percentage error was −1.1%. The proposed model performs 0.2% better in diurnal and 10.8% better in annual estimations than previous models.
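The paper's exact model is not reproduced here; as a stand-in of the same empirical flavor, a classical Angström-Prescott regression relates the clearness index H/H0 to the relative sunshine duration S/S0 using minimal climatological data. All data values below are toy numbers.

```python
import numpy as np

S_frac = np.array([0.20, 0.35, 0.50, 0.65, 0.80])   # relative sunshine S/S0
H_frac = np.array([0.30, 0.41, 0.52, 0.62, 0.74])   # clearness index H/H0 (toy)
A = np.column_stack([np.ones_like(S_frac), S_frac])
coef, *_ = np.linalg.lstsq(A, H_frac, rcond=None)   # fit H/H0 = a + b * S/S0
a, b = coef
pred = a + b * 0.50                                 # estimate at S/S0 = 0.5
```

With coefficients fitted per site, such a model needs only sunshine-duration records, which is why this family of methods suits regions where pyranometer data are scarce.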
Modern statistical estimation via oracle inequalities
Candès, Emmanuel J.
A number of fundamental results in modern statistical theory involve thresholding estimators. This survey paper aims at reconstructing the history of how thresholding rules came to be popular in statistics and describing, in a not overly technical way, the domain of their application. Two notions play a fundamental role in our narrative: sparsity and oracle inequalities. Sparsity is a property of the object to estimate, which seems to be characteristic of many modern problems, in statistics as well as applied mathematics and theoretical computer science, to name a few. `Oracle inequalities' are a powerful decision-theoretic tool which has served to understand the optimality of thresholding rules, but which has many other potential applications, some of which we will discuss.Our story is also the story of the dialogue between statistics and applied harmonic analysis. Starting with the work of Wiener, we will see that certain representations emerge as being optimal for estimation. A leitmotif throughout our exposition is that efficient representations lead to efficient estimation.
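The thresholding rules at the center of this narrative are easy to state concretely. Below is a sketch of the soft-thresholding rule with the universal threshold sigma * sqrt(2 log n) applied to a sparse signal in white noise; the signal, noise level, and sparsity pattern are illustrative.

```python
import numpy as np

def soft_threshold(y, lam):
    # soft-thresholding: shrink toward zero, kill coefficients below lam
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(5)
n, sigma = 1024, 1.0
theta = np.zeros(n)
theta[:16] = 8.0                            # sparse object to estimate
y = theta + sigma * rng.standard_normal(n)  # noisy observations
lam = sigma * np.sqrt(2.0 * np.log(n))      # universal threshold
est = soft_threshold(y, lam)
mse_thr = np.mean((est - theta) ** 2)
mse_raw = np.mean((y - theta) ** 2)
```

On sparse objects the thresholded estimate beats the raw observations by a wide margin, which is the behavior the oracle inequalities discussed above make precise.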
A comprehensive estimation method for enterprise capability
Directory of Open Access Journals (Sweden)
Tetiana Kuzhda
2015-11-01
Full Text Available In today’s highly competitive business world, the need for efficient enterprise capability management is greater than ever. As more enterprises begin to compete on a global scale, the effective use of enterprise capability will become imperative for them to improve their business activities. The definition of socio-economic capability of the enterprise has been given and the main components of enterprise capability have been pointed out. A comprehensive method to estimate enterprise capability that takes into account both social and economic components has been offered. The methodical approach concerning integrated estimation of the enterprise capability has been developed. Its novelty lies in the inclusion of a summary measure of the social component of enterprise capability in the integrated index of enterprise capability. The practical significance of the methodological approach is that it allows assessing enterprise capability comprehensively by combining two kinds of estimates, social and economic, and converting them into a single integrated indicator. It provides a comprehensive approach to the socio-economic estimation of enterprise capability, sets a formal basis for making decisions and helps allocate enterprise resources reasonably. Practical implementation of this method will reflect the current condition and trends of the enterprise, and help to make forecasts and plans for its development and the efficient use of its capability.
Empirical likelihood estimation of discretely sampled processes of OU type
Institute of Scientific and Technical Information of China (English)
2009-01-01
This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some mild conditions. When the background driving Lévy process is of type A or B, we show that the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of proposed estimators.
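As a much simpler alternative to the empirical-likelihood machinery, and only for the Gaussian special case of an OU-type process: sampling at step dt yields an AR(1) recursion with coefficient exp(-lambda * dt), so the mean-reversion rate lambda can be moment-estimated from the lag-1 autocorrelation. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, dt, n = 0.5, 0.1, 50000
phi = np.exp(-lam * dt)                     # AR(1) coefficient of sampled OU
x = np.zeros(n)
for t in range(1, n):
    # stationary Gaussian OU sampled at step dt (unit stationary variance)
    x[t] = phi * x[t - 1] + np.sqrt(1.0 - phi ** 2) * rng.standard_normal()
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]     # lag-1 sample autocorrelation
lam_hat = -np.log(rho1) / dt                # moment estimate of lambda
```

This moment estimator is consistent in the Gaussian case but says nothing about the Lévy-driven type A/B processes or the efficiency results that are the paper's actual contribution.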
Anderson, John B
2017-01-01
Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.
Directory of Open Access Journals (Sweden)
Konika Gera
2015-05-01
Full Text Available Solar energy, the most common form of renewable energy, has failed to find widespread use in daily life because of its low efficiency and high maintenance costs. However, these shortcomings can be addressed by using an electrostatic mechanism: the dust particles are charged so that they are repelled by the solar panel itself and thereby removed. This mechanism is relatively cheap, and its power consumption sums to almost zero. Efficiency can be increased further by using perovskites that form an opaque layer over the solar panel. When both of these methods are used together, the efficiency increases drastically, and the approach can readily be employed at industrial scale with large solar panels.
Energy efficiency and behaviour
DEFF Research Database (Denmark)
Carstensen, Trine Agervig; Kunnasvirta, Annika; Kiviluoto, Katariina
The purpose of Work Package 5 Deliverable 5.1., “Case study reports on energy efficiency and behaviour” is to present examples of behavioral interventions to promote energy efficiency in cities. The case studies were collected in January – June 2014, and they represent behavioural interventions...... factors. The main addressees of D5.1. are city officials, NGO representatives, private sector actors and any other relevant actors who plan and realize behavioural energy efficiency interventions in European cities. The WP5 team will also further apply results from D5.1. with a more general model on how...... to conduct behavioural interventions, to be presented in Deliverable 5.5., the final report. This report will also provide valuable information for the WP6 general model for an Energy-Smart City. Altogether 38 behavioural interventions are analysed in this report. Each collected and analysed case study...
DEFF Research Database (Denmark)
Godsk, Mikkel
This paper presents the current approach to implementing educational technology with learning design at the Faculty of Science and Technology, Aarhus University, by introducing the concept of ‘efficient learning design’. The underlying hypothesis is that implementing learning design is more than...... engaging educators in the design process and developing teaching and learning, it is a shift in educational practice that potentially requires a stakeholder analysis and ultimately a business model for the deployment. What is most important is to balance the institutional, educator, and student...... perspectives and to consider all these in conjunction in order to obtain a sustainable, efficient learning design. The approach to deploying learning design in terms of the concept of efficient learning design, the catalyst for educational development, i.e. the learning design model and how it is being used...
Numerical Estimation in Preschoolers
Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco
2010-01-01
Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…
McDonald, Judith A.; Thornton, Robert J.
2011-01-01
Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…
Estimating Gear Teeth Stiffness
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2013-01-01
The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness; firstly the boundary condition through the gear rim size included in the stiffness calculation...
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
Landy, David; Silbert, Noah; Goldin, Aleah
2013-01-01
Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…
Energy efficiency; Energieffektivisering
Energy Technology Data Exchange (ETDEWEB)
2009-06-15
The Low Energy Panel will halve the consumption in buildings. The Panel has proposed a halving of consumption in the building sector by 2040 and a 20 percent reduction in industrial consumption by 2020. The Panel considers it possible to gradually reduce consumption in buildings from the current level of 80 TWh by 10 TWh in 2020, 25 TWh in 2030 and 40 TWh in 2040. According to the committee, such a halving can be reached through significant efforts relating to energy efficiency: major rehabilitations, energy efficiency measures in the existing building stock, and stricter requirements for new construction. For the industry field, the Panel recommends a political goal of at least a 20 percent reduction in specific energy consumption in industry and primary industry, beyond general technological development, by the end of 2020. This is equivalent to approximately 17 TWh based on the current level of activity. The Panel believes that a 5 percent reduction should be achieved by the end of 2012 by carrying out simple measures. The Low Energy Panel has since March 2009 considered possibilities to strengthen the authorities' work with energy efficiency in Norway. The broadly composed Panel puts forward proposals for a comprehensive approach to increased energy efficiency, particularly in the building and industry sectors. The Panel has looked into the potential for energy efficiency, barriers to energy efficiency, an assessment of strengths and weaknesses in the existing policy instruments, and the Panel members' recommendations. In addition, the report contains a review of theoretical principles for the effects of policy instruments, together with extensive background material. One of the committee members has chosen to enter special remarks on the main recommendations in the report. (AG)