Estimating total mortality and asymptotic length of Crangon crangon between 1955 and 2006
Hufnagl, M.; Temming, A.; Siegel, V.; Tulp, I.Y.M.; Bolle, L.J.
2010-01-01
Total mortality (Z, year⁻¹) of southern North Sea brown shrimp (Crangon crangon) was determined as Z = K, based on the von Bertalanffy length–growth constant (K, year⁻¹) and derived from length-based methods. Mortality estimates were based on length frequency distributions obtained from four
Estimating the NIH efficient frontier.
Directory of Open Access Journals (Sweden)
Dimitrios Bisias
Full Text Available BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
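The frontier computation itself is standard mean-variance optimization. A minimal sketch with invented numbers (three hypothetical institute groups; none of these figures come from the paper's YLL data):

```python
import numpy as np

# Illustrative mean "returns" (YLL reductions per dollar, say) and their
# covariance for three hypothetical funding groups; all values are made up.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])

def frontier_weights(mu, Sigma, target):
    """Minimum-variance allocation achieving a target expected return:
    the textbook Markowitz solution under the two equality constraints
    w'1 = 1 and w'mu = target."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B**2
    lam = (C - B * target) / D
    gam = (A * target - B) / D
    return lam * (inv @ ones) + gam * (inv @ mu)

w = frontier_weights(mu, Sigma, 0.10)
vol = np.sqrt(w @ Sigma @ w)   # portfolio "risk" at that return level
```

Sweeping `target` over a range of returns traces out the efficient frontier the paper estimates for NIH allocations.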
Efficient Estimation in Heteroscedastic Varying Coefficient Models
Directory of Open Access Journals (Sweden)
Chuanhua Wei
2015-07-01
Full Text Available This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator of the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Econometric Analysis on Efficiency of Estimator
M Khoshnevisan; Kaymram, F.; Singh, Housila P.; Singh, Rajesh; Smarandache, Florentin
2003-01-01
This paper investigates the efficiency of an alternative to ratio estimator under the super population model with uncorrelated errors and a gamma-distributed auxiliary variable. Comparisons with usual ratio and unbiased estimators are also made.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Flexible and efficient estimating equations for variogram estimation
Sun, Ying
2018-01-11
Variogram estimation plays a central role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
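As a rough illustration of the least-squares end of this spectrum, the sketch below computes Matheron's method-of-moments variogram on simulated data and fits an exponential model by ordinary least squares; the estimating-equation framework of the paper generalizes the weighting used here. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 1-D Gaussian process with exponential covariance, so the true
# variogram is gamma(h) = s2 * (1 - exp(-h / r)) with s2 = 2, r = 5.
n, s2, r = 200, 2.0, 5.0
x = np.sort(rng.uniform(0, 50, n))
H = np.abs(x[:, None] - x[None, :])
z = rng.multivariate_normal(np.zeros(n), s2 * np.exp(-H / r))

# Matheron's method-of-moments variogram estimator on distance bins
iu = np.triu_indices(n, 1)
d, sq = H[iu], (z[:, None] - z[None, :])[iu] ** 2
edges = np.linspace(0.5, 15.0, 15)
lags, gamma_hat = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (d >= lo) & (d < hi)
    if m.any():
        lags.append(d[m].mean())
        gamma_hat.append(0.5 * sq[m].mean())
lags, gamma_hat = np.array(lags), np.array(gamma_hat)

# Ordinary least-squares fit of the exponential model over a parameter grid
grid = [(a, b) for a in np.linspace(0.5, 4.0, 36)
               for b in np.linspace(1.0, 15.0, 57)]
s2_hat, r_hat = min(grid, key=lambda p: np.sum(
    (gamma_hat - p[0] * (1.0 - np.exp(-lags / p[1]))) ** 2))
```

Generalized least squares or the paper's estimating equations would replace the unweighted sum of squares with a weighted criterion.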
The Sharpe ratio of estimated efficient portfolios
Kourtis, Apostolos
2016-01-01
Investors often adopt mean-variance efficient portfolios for achieving superior risk-adjusted returns. However, such portfolios are sensitive to estimation errors, which affect portfolio performance. To understand the impact of estimation errors, I develop simple and intuitive formulas of the squared Sharpe ratio that investors should expect from estimated efficient portfolios. The new formulas show that the expected squared Sharpe ratio is a function of the length of the available data, the ...
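The effect of estimation error on realized Sharpe ratios is easy to reproduce by simulation. The sketch below (with invented asset moments, not the paper's formulas) estimates plug-in tangency weights from samples of different lengths and evaluates them against the true moments:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented "true" moments for 5 assets
p = 5
mu = np.linspace(0.02, 0.10, p)
A = rng.normal(size=(p, p))
Sigma = 0.02 * np.eye(p) + 0.01 * (A @ A.T) / p

w_true = np.linalg.solve(Sigma, mu)              # tangency direction
sr_true = (w_true @ mu) / np.sqrt(w_true @ Sigma @ w_true)  # max Sharpe

def realised_sharpe(T, reps=200):
    """Average true Sharpe ratio of the plug-in tangency portfolio
    estimated from T observations (scale of weights is irrelevant)."""
    out = []
    for _ in range(reps):
        R = rng.multivariate_normal(mu, Sigma, T)
        w = np.linalg.solve(np.cov(R.T), R.mean(0))
        out.append((w @ mu) / np.sqrt(w @ Sigma @ w))
    return np.mean(out)

sr_small, sr_large = realised_sharpe(60), realised_sharpe(6000)
```

By Cauchy-Schwarz no portfolio can beat `sr_true`, and the short-sample estimate falls further below it, which is the sensitivity the paper quantifies in closed form.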
MILITARY MISSION COMBAT EFFICIENCY ESTIMATION SYSTEM
Directory of Open Access Journals (Sweden)
Ighoyota B. AJENAGHUGHRURE
2017-04-01
Full Text Available Military infantry recruits, although trained, lack experience in real-time combat operations despite combat simulation training. Therefore, the choice of including them in military operations is a thorough and careful process. This has left top military commanders with the tough task of deciding the best blend of inexperienced and experienced infantry soldiers for any military operation, based on available information on enemy strength and capability. This research project delves into the design of a mission combat efficiency estimator (MCEE). It is a decision support system that aids top military commanders in estimating the best combination of soldiers suitable for different military operations, based on available information on the enemy's combat experience. Its advantages consist of reducing casualties and other risks that compromise overall operation success, and boosting the morale of soldiers in an operation with information such as an estimate of the combat efficiency of their enemies. The system was developed using Microsoft ASP.NET with a SQL Server backend. A case study conducted with the MCEE system reveals clearly that it is an efficient tool for military mission planning in terms of team selection. Hence, when the MCEE system is fully deployed it will aid military commanders in deciding team composition for any given operation based on enemy personnel information known beforehand. Further work on the MCEE will explore fire-power types and their impact on mission combat efficiency estimation.
Display advertising: Estimating conversion probability efficiently
Safari, Abdollah; Altman, Rachel MacKay; Loughin, Thomas M.
2017-01-01
The goal of online display advertising is to entice users to "convert" (i.e., take a pre-defined action such as making a purchase) after clicking on the ad. An important measure of the value of an ad is the probability of conversion. The focus of this paper is the development of a computationally efficient, accurate, and precise estimator of conversion probability. The challenges associated with this estimation problem are the delays in observing conversions and the size of the data set (both...
Estimating the technical efficiency of Cutflower farms
Directory of Open Access Journals (Sweden)
Kristine Joyce P. Betonio
2016-12-01
Full Text Available This study sought to estimate the technical efficiency of cutflower farms and determine the sources of inefficiency among the farmers. In order to do so, the study had two phases: Phase 1 measured the technical efficiency scores of cutflower farms using data envelopment analysis (DEA). Phase 2 determined the causes of technical inefficiency using Tobit regression analysis. A total of 120 cutflower farms located in Brgy. Kapatagan, Digos City, Philippines were considered as the decision-making units (DMUs) of the study. Only two varieties were considered in the analysis because the 120 farmers had only planted chrysanthemum (Dendranthema grandiflora) and baby’s breath (Gypsophila paniculata). Results revealed that four farms are fully efficient, exhibiting technical efficiency scores of 1.00 under both CRS and VRS assumptions: Farm 95, Farm 118, Farm 119 and Farm 120. Of the four, Farm 120 is benchmarked the most, with 82 peers. Tobit model estimation revealed five significant determinants (considered sources of technical inefficiency) of the cutflower farms of Brgy. Kapatagan, Digos City: years of experience in farming, number of relevant seminars and trainings, distance of farm to central market (bagsakan), membership in a cooperative, and access to credit.
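For reference, Phase 1's CRS scores can be computed with the standard input-oriented CCR envelopment linear program. A minimal sketch with a made-up single-input, single-output toy dataset (not the study's farm data), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Four hypothetical farms, one input (e.g. cost) and one output (e.g. stems);
# the numbers are invented for illustration only.
X = np.array([[2.0], [4.0], [3.0], [5.0]])   # inputs,  shape (n, m)
Y = np.array([[2.0], [3.0], [3.0], [3.0]])   # outputs, shape (n, s)
n = len(X)

def ccr_score(j0):
    """Input-oriented CRS (CCR) envelopment LP:
    min theta  s.t.  X' lam <= theta * x_j0,  Y' lam >= y_j0,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                       # variables [theta, lam]
    A_ub = np.vstack([np.c_[-X[[j0]].T, X.T],         # input constraints
                      np.c_[np.zeros((Y.shape[1], 1)), -Y.T]])  # outputs
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

scores = [ccr_score(j) for j in range(n)]   # here: [1.0, 0.75, 1.0, 0.6]
```

Farms on the frontier score 1.00; in the study, the sub-unity scores then feed the Phase 2 Tobit regression as the dependent variable.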
Efficient volumetric estimation from plenoptic data
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
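The core of the argument, deconvolution via FFTs, can be sketched in one dimension. The example below blurs a sparse synthetic "particle field" with a Gaussian PSF and recovers it with a regularised (Wiener-type) inverse filter; the paper's 3-D algorithm and plenoptic PSFs are of course far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse 1-D "particle field" blurred by a Gaussian PSF
n = 256
field = np.zeros(n)
field[rng.choice(n, 8, replace=False)] = 1.0

x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 3.0**2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)            # centre PSF at index 0 for circular conv

blurred = np.fft.irfft(np.fft.rfft(field) * np.fft.rfft(psf), n)
blurred += 0.001 * rng.standard_normal(n)   # measurement noise

# Wiener-type deconvolution: a regularised inverse filter in Fourier space
H = np.fft.rfft(psf)
wiener = np.conj(H) / (np.abs(H)**2 + 1e-4)
recovered = np.fft.irfft(np.fft.rfft(blurred) * wiener, n)
```

The whole reconstruction costs a few FFTs, which is the source of the efficiency gain over iterative tomographic schemes such as MART.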
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...... of an efficient and an inefficient estimator are compared graphically. The second part of the thesis concerns diffusions with finite-activity jumps, observed over an increasing interval with terminal sampling time going to infinity. Asymptotic distribution results are derived for consistent estimators of a general...
Efficient bootstrap estimates for tail statistics
Breivik, Øyvind; Aarnes, Ole Johan
2017-03-01
Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
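The subset idea can be demonstrated with the sample maximum as the tail statistic: the number of bootstrap draws landing in the top-k block is Binomial(n, k/n), and the resample maximum depends only on those draws. A sketch with synthetic Gumbel data (sample size, k and B are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
sample = rng.gumbel(size=n)
B = 2000

# Full non-parametric bootstrap of an in-sample tail statistic (the maximum)
full = np.array([rng.choice(sample, n).max() for _ in range(B)])

# Subset bootstrap: resample only the k largest entries.  The count of
# bootstrap draws that land in the top-k block is Binomial(n, k/n); the
# resample maximum is the maximum of those draws (the probability that
# the maximum falls outside the top-k block is ~exp(-k), negligible).
k = 50
top = np.sort(sample)[-k:]
sub = np.empty(B)
for b in range(B):
    m = rng.binomial(n, k / n)
    sub[b] = rng.choice(top, m).max() if m > 0 else top[0]
```

The two bootstrap distributions agree, while the subset scheme touches only k values per resample instead of n.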
How efficient is estimation with missing data?
DEFF Research Database (Denmark)
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs) where the efficiency of those are investigated using listwise deletion method as reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data percent...
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Directory of Open Access Journals (Sweden)
Bing Liu
2014-01-01
Full Text Available Increasing the computational efficiency of attitude estimation is a critical problem for modern spacecraft, especially those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint on the quaternion in the global estimator. The proposed attitude estimator operates in four stages. First, the local attitude estimation error system is described by a polytopic linear model. Then the local error attitude estimator is designed with constant coefficients based on the robust H2 filtering algorithm. Subsequently, the attitude predictions and the local error attitude estimations are calculated by a gyro-based model and the local error attitude estimator. Finally, the attitude estimations are updated by the predicted attitude with the local error attitude estimations. Since the local error attitude estimator has constant coefficients, it does not need to calculate a matrix inversion for the filter gain matrix or update the Jacobian matrices online to obtain the local error attitude estimations. As a result, the computational complexity of the proposed attitude estimator is reduced significantly. Simulation results demonstrate the efficiency of the proposed attitude estimation strategy.
Efficient estimation for high similarities using odd sketches
DEFF Research Database (Denmark)
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
. This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector...... comparison. We present a theoretical analysis of the quality of estimation to guarantee the reliability of Odd Sketch-based estimators. Our experiments confirm this efficiency, and demonstrate the efficiency of Odd Sketches in comparison with $b$-bit minwise hashing schemes on association rule learning...
Efficient estimation under privacy restrictions in the disclosure problem
Albers, Willem/Wim
1984-01-01
In the disclosure problem already collected data are disclosed only to such extent that the individual privacy is protected to at least a prescribed level. For this problem estimators are introduced which are both simple and efficient.
An Evaluation of Alternate Feed Efficiency Estimates in Beef Cattle
Boaitey, Albert; Goddard, Ellen; Mohapatra, Sandeep; Basarab, John A; Miller, Steve; Crowley, John
2013-01-01
In this paper the issue of nonlinearity and heterogeneity in the derivation of feed efficiency estimates for beef cattle, based on performance data for 6253 animals, is examined. Using parametric, non-parametric and integer programming approaches, we find evidence of nonlinearity between feed intake and measures of size and growth, and susceptibility of feed efficiency estimates to assumptions pertaining to heterogeneity between animals and within cohorts. Further, differences in feed cost imp...
Estimating Production Technical Efficiency of Irvingia Seed (Ogbono ...
African Journals Online (AJOL)
This study estimated the production technical efficiency of irvingia seed (Ogbono) farmers in Nsukka agricultural zone in Enugu State, Nigeria. This is against the backdrop of the importance of efficiency as a factor of productivity in a growing economy like Nigeria where resources are scarce and opportunities for new ...
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.
Efficient Estimation of Nonparametric Genetic Risk Function with Censored Data.
Wang, Yuanjia; Liang, Baosheng; Tong, Xingwei; Marder, Karen; Bressman, Susan; Orr-Urtreger, Avi; Giladi, Nir; Zeng, Donglin
2015-09-01
With an increasing number of causal genes discovered for complex human disorders, it is crucial to assess the genetic risk of disease onset for individuals who are carriers of these causal mutations and compare the distribution of age-at-onset with that in non-carriers. In many genetic epidemiological studies aiming at estimating causal gene effect on disease, the age-at-onset of disease is subject to censoring. In addition, some individuals' mutation carrier or non-carrier status can be unknown due to the high cost of in-person ascertainment to collect DNA samples or death in older individuals. Instead, the probability of these individuals' mutation status can be obtained from various sources. When mutation status is missing, the available data take the form of censored mixture data. Recently, various methods have been proposed for risk estimation from such data, but none is efficient for estimating a nonparametric distribution. We propose a fully efficient sieve maximum likelihood estimation method, in which we estimate the logarithm of the hazard ratio between genetic mutation groups using B-splines, while applying nonparametric maximum likelihood estimation for the reference baseline hazard function. Our estimator can be calculated via an expectation-maximization algorithm which is much faster than existing methods. We show that our estimator is consistent and semiparametrically efficient and establish its asymptotic distribution. Simulation studies demonstrate superior performance of the proposed method, which is applied to the estimation of the distribution of the age-at-onset of Parkinson's disease for carriers of mutations in the leucine-rich repeat kinase 2 gene.
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
) are presented and used to estimate the knee-point level and the compression ratio of the I/O function. A time-efficient paradigm based on the single-interval-up-down method (SIUD; Lecluyse and Meddis (2009)) was used. In contrast with previous studies, the present study used only on-frequency TMCs to derive...... to Jepsen and Dau (2011) IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
signals are often contaminated by different types of noise, which challenges the assumption of white Gaussian noise in most state-of-the-art methods. We establish filtering methods based on noise statistics to apply to nonparametric spectral and spatial parameter estimates of the harmonics. We design...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...
Efficient Bayesian Estimation and Combination of GARCH-Type Models
D. David (David); L.F. Hoogerheide (Lennart)
2010-01-01
This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism
Directory of Open Access Journals (Sweden)
Custer Adrian V
2005-08-01
Full Text Available Abstract Background: The time-varying flows of biomass and energy in tsetse (Glossina) can be examined through the construction of a dynamic mass-energy budget specific to these flies, but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because the approach ignores all the biological inefficiencies and costs such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results: This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed based on a detailed quantification of the composition of food and body tissue and on a description of the major metabolic pathways in tsetse simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to
Estimating the Efficiency of Phosphopeptide Identification by Tandem Mass Spectrometry
Hsu, Chuan-Chih; Xue, Liang; Arrington, Justine V.; Wang, Pengcheng; Paez Paez, Juan Sebastian; Zhou, Yuan; Zhu, Jian-Kang; Tao, W. Andy
2017-06-01
Mass spectrometry has played a significant role in the identification of unknown phosphoproteins and sites of phosphorylation in biological samples. Analyses of protein phosphorylation, particularly large scale phosphoproteomic experiments, have recently been enhanced by efficient enrichment, fast and accurate instrumentation, and better software, but challenges remain because of the low stoichiometry of phosphorylation and poor phosphopeptide ionization efficiency and fragmentation due to neutral loss. Phosphoproteomics has become an important dimension in systems biology studies, and it is essential to have efficient analytical tools to cover a broad range of signaling events. To evaluate current mass spectrometric performance, we present here a novel method to estimate the efficiency of phosphopeptide identification by tandem mass spectrometry. Phosphopeptides were directly isolated from whole plant cell extracts, dephosphorylated, and then incubated with one of three purified kinases—casein kinase II, mitogen-activated protein kinase 6, and SNF-related protein kinase 2.6—along with 16O4- and 18O4-ATP separately for in vitro kinase reactions. Phosphopeptides were enriched and analyzed by LC-MS. The phosphopeptide identification rate was estimated by comparing phosphopeptides identified by tandem mass spectrometry with phosphopeptide pairs generated by stable isotope labeled kinase reactions. Overall, we found that current high speed and high accuracy mass spectrometers can only identify 20%-40% of total phosphopeptides primarily due to relatively poor fragmentation, additional modifications, and low abundance, highlighting the urgent need for continuous efforts to improve phosphopeptide identification efficiency.
An Efficient Estimator for the Expected Value of Sample Information.
Menzies, Nicolas A
2016-04-01
Conventional estimators for the expected value of sample information (EVSI) are computationally expensive or limited to specific analytic scenarios. I describe a novel approach that allows efficient EVSI computation for a wide range of study designs and is applicable to models of arbitrary complexity. The posterior parameter distribution produced by a hypothetical study is estimated by reweighting existing draws from the prior distribution. EVSI can then be estimated using a conventional probabilistic sensitivity analysis, with no further model evaluations and with a simple sequence of calculations (Algorithm 1). A refinement to this approach (Algorithm 2) uses smoothing techniques to improve accuracy. Algorithm performance was compared with the conventional EVSI estimator (2-level Monte Carlo integration) and an alternative developed by Brennan and Kharroubi (BK), in a cost-effectiveness case study. Compared with the conventional estimator, Algorithm 2 exhibited a root mean square error (RMSE) 8%-17% lower, with far fewer model evaluations (3-4 orders of magnitude). Algorithm 1 produced results similar to those of the conventional estimator when study evidence was weak but underestimated EVSI when study evidence was strong. Compared with the BK estimator, the proposed algorithms reduced RMSE by 18%-38% in most analytic scenarios, with 40 times fewer model evaluations. Algorithm 1 performed poorly in the context of strong study evidence. All methods were sensitive to the number of samples in the outer loop of the simulation. The proposed algorithms remove two major challenges for estimating EVSI--the difficulty of estimating the posterior parameter distribution given hypothetical study data and the need for many model evaluations to obtain stable and unbiased results. These approaches make EVSI estimation feasible for a wide range of analytic scenarios. © The Author(s) 2015.
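The reweighting step described above is ordinary importance sampling: existing prior (PSA) draws are weighted by the likelihood of each simulated study result, so no further model evaluations are needed. A toy sketch in the spirit of Algorithm 1 (one-parameter net-benefit model; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy decision model: incremental net benefit (INB) of option B over A
# depends on one uncertain parameter theta.
N = 20_000
theta = rng.normal(0.1, 0.4, N)       # prior PSA draws
inb = 1000.0 * theta                  # INB per draw (linear toy model)

# Value of the optimal decision under current information
v_current = max(0.0, inb.mean())

# Hypothetical study: observe the mean of m measurements x_i ~ N(theta, 1)
m = 50
evsi_terms = []
for _ in range(500):                  # outer loop over simulated study results
    t = rng.choice(theta)             # parameter generating this study
    xbar = rng.normal(t, 1 / np.sqrt(m))
    # Reweight the existing prior draws by the data likelihood;
    # no new model evaluations are required.
    w = np.exp(-0.5 * m * (xbar - theta) ** 2)
    w /= w.sum()
    post_inb = w @ inb                # posterior-mean INB via reweighting
    evsi_terms.append(max(0.0, post_inb))

evsi = np.mean(evsi_terms) - v_current
```

Algorithm 2's refinement would smooth the weighted estimates; the degenerate-weight behaviour under very informative studies is the weakness of Algorithm 1 noted in the abstract.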
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to optimize jointly over the regression parameters and the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification, which we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no higher than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules that achieve speed by eliminating irrelevant features early.
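The joint optimization over regression coefficients and noise level alternates two easy steps: for fixed beta, the optimal sigma has a closed form clipped below at sigma0 (the smoothing that gives the method its name), and for fixed sigma, the beta-step is a Lasso solved here by coordinate descent. This numpy sketch uses illustrative values for lam and sigma0, not the paper's recommended settings.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_cd(X, y, alpha, beta, iters=100):
    # Coordinate descent on ||y - X b||^2 / (2n) + alpha * ||b||_1
    n, p = X.shape
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ beta
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * beta[j]                      # remove coord j
            beta[j] = soft(X[:, j] @ r / n, alpha) / col_sq[j]
            r -= X[:, j] * beta[j]                      # add it back
    return beta

rng = np.random.default_rng(1)
n, p = 100, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam, sigma0 = 0.1, 1e-3          # assumed tuning values for this sketch
sigma, beta = 1.0, np.zeros(p)
for _ in range(20):
    beta = lasso_cd(X, y, lam * sigma, beta)            # beta-step
    sigma = max(sigma0, np.linalg.norm(y - X @ beta) / np.sqrt(n))  # sigma-step

print(round(sigma, 2), sorted(np.nonzero(np.abs(beta) > 1e-8)[0][:3]))
```

The recovered sigma tracks the true noise level (0.5 here), which is exactly the quantity the Lasso's theoretical tuning rule needs but rarely has.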
Nomogram estimates boiler and fired-heater efficiencies
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V.
1984-06-01
A nomogram permits quick estimates of the efficiency of boilers and fired heaters based on the lower heating value of the fuel. It is valid for coals, oils, and natural gas. The paper presents a formula for the weight of air required to burn 1,000,000 Btu of fuel input, and also a formula for flue gas production per pound of fuel. To use the nomogram it is necessary to know the higher and lower heating values of the fuel, the amount of excess air, and the exit-gas and ambient-air temperatures.
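The stack-loss arithmetic behind such a chart can be sketched as follows. The air requirement uses the common rule of thumb of roughly 7.5 lb of combustion air per 10,000 Btu fired; the fuel mass per MMBtu and the flue-gas specific heat are likewise illustrative assumptions, not the nomogram's exact coefficients.

```python
# Rough boiler efficiency on a lower-heating-value basis: 100% minus the
# sensible-heat loss carried out of the stack by the flue gas.
def boiler_efficiency_lhv(excess_air_pct, t_exit_f, t_ambient_f,
                          fuel_lb_per_mmbtu=45.0,   # assumed fuel mass fired
                          cp_flue=0.26):            # assumed Btu/(lb.F)
    air = 745.0 * (1.0 + excess_air_pct / 100.0)    # lb air per MMBtu fired
    flue = air + fuel_lb_per_mmbtu                  # lb flue gas per MMBtu
    stack_loss = flue * cp_flue * (t_exit_f - t_ambient_f)  # Btu per MMBtu
    return 100.0 * (1.0 - stack_loss / 1.0e6)

# Example: 15% excess air, 380 F exit gas, 80 F ambient
print(round(boiler_efficiency_lhv(15.0, 380.0, 80.0), 1))
```

The two operating levers visible in the nomogram show up directly: efficiency falls as either excess air or exit-gas temperature rises.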
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-01-01
Full Text Available The estimation of direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. Many algorithms have been proposed to solve it, among which Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA estimation accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we derive the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability that may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
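The alternating-minimization idea, optimizing one DOA at a time while holding the others fixed, can be illustrated on the simpler deterministic (concentrated) ML criterion. This is a toy stand-in for the paper's EAM/IAM algorithms, not their implementation; array size, SNR, and grid resolution are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 200                          # sensors, snapshots
true_doas = np.deg2rad([-10.0, 20.0])

def steer(th):                         # ULA steering, half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.atleast_1d(th)))

S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Y = steer(true_doas) @ S + noise
R = Y @ Y.conj().T / N                 # sample covariance

def cost(doas):                        # concentrated deterministic ML criterion
    A = steer(np.array(doas))
    P = A @ np.linalg.pinv(A)          # projector onto the signal subspace
    return np.real(np.trace((np.eye(M) - P) @ R))

grid = np.deg2rad(np.linspace(-60, 60, 241))
est = [np.deg2rad(0.0), np.deg2rad(40.0)]   # rough initial guesses
for _ in range(5):                     # alternating minimization over each DOA
    for k in range(2):
        cand = [cost([g, est[1]]) if k == 0 else cost([est[0], g]) for g in grid]
        est[k] = grid[int(np.argmin(cand))]

print(np.rad2deg(np.sort(est)))        # estimates near the true DOAs
```

Each sweep is a cheap 1-D search, which is the structural trick AM exploits; the paper's contribution is making each such 1-D step cheap for the full SML criterion.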
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the LCC model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
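The leverage the discount rate has inside such an LCC calculation can be shown with a minimal net-present-value sketch; all dollar figures and the equipment lifetime below are made-up assumptions, not values from the report.

```python
# NPV of a stream of annual energy-cost savings at a given discount rate
def npv(savings_per_year, years, rate):
    return sum(savings_per_year / (1 + rate) ** t for t in range(1, years + 1))

annual_savings = 120.0   # assumed $/yr operating-cost saving from the standard
first_cost = 1000.0      # assumed incremental purchase price
lifetime = 15            # assumed equipment life, years

for r in (0.03, 0.07, 0.10):
    lcc_saving = npv(annual_savings, lifetime, r) - first_cost
    print(f"rate {r:.0%}: LCC saving {lcc_saving:+.2f}")
```

With these numbers the same efficiency level shows a net LCC saving at a 3% discount rate and a net cost at 10%, which is precisely why the choice of commercial discount rate matters for the rulemaking conclusion.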
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Directory of Open Access Journals (Sweden)
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. More research effort is therefore needed on the design and implementation of novel and robust synchronization algorithms for PLC that enable real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation in a Xilinx Kintex FPGA.
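The cross-correlation core of such a timing estimator can be sketched in a few lines: a Zadoff-Chu preamble has near-ideal autocorrelation, so sliding it over the received frame produces a sharp peak at the symbol start. Sequence length, root, delay, and noise level below are arbitrary assumptions, and the ideal channel is a simplification of the PLC channel models the paper uses.

```python
import numpy as np

def zadoff_chu(u, Nzc):
    # Odd-length Zadoff-Chu sequence with root u
    n = np.arange(Nzc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / Nzc)

rng = np.random.default_rng(3)
Nzc, delay = 63, 137
preamble = zadoff_chu(5, Nzc)

rx = np.zeros(400, dtype=complex)          # received frame (ideal channel)
rx[delay:delay + Nzc] = preamble
rx += 0.2 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))

# Slide the known preamble over the frame; np.correlate conjugates its
# second argument, so the peak magnitude marks the symbol start.
corr = np.abs(np.correlate(rx, preamble, mode="valid"))
print(int(np.argmax(corr)))                # estimated symbol start
```

In hardware this sliding correlation maps naturally onto a multiply-accumulate pipeline, which is what makes the FPGA implementation discussed in the paper compact.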
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. More research effort is therefore needed on the design and implementation of novel and robust synchronization algorithms for PLC that enable real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for the implementation in a Xilinx Kintex FPGA.
An efficient algebraic approach to observability analysis in state estimation
Energy Technology Data Exchange (ETDEWEB)
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
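The rank-based core of observability checking can be sketched for a toy 4-bus DC model: the network is observable when the measurement Jacobian, with the slack-bus column removed, has full column rank. The paper's row/column-transfer algebra reaches the same conclusions far more efficiently than the generic rank test used in this illustration.

```python
import numpy as np

# DC power-flow measurement rows for a 4-bus system: a flow meter on line
# (i, j) contributes +1 at bus i and -1 at bus j (unit reactances assumed).
def flow_row(i, j, n=4):
    row = np.zeros(n)
    row[i], row[j] = 1.0, -1.0
    return row

meas = [flow_row(0, 1), flow_row(1, 2)]      # meters on lines 0-1 and 1-2
H = np.array(meas)[:, 1:]                    # drop slack-bus (bus 0) column
rank_before = np.linalg.matrix_rank(H)
print(rank_before)                           # 2 < 3 states: not observable

meas.append(flow_row(2, 3))                  # pseudo-measurement on line 2-3
H = np.array(meas)[:, 1:]
rank_after = np.linalg.matrix_rank(H)
print(rank_after)                            # 3: observability restored
```

The rank deficiency before the third meter also identifies the observable island (buses 0-2) and shows which pseudo-measurement restores observability, the two tasks the paper's unified approach handles algebraically.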
Efficient Estimation of the Impact of Observing Systems using EFSO
Kalnay, E.; Chen, T. C.; Jung, J.; Hotta, D.
2016-12-01
Massive amounts of observations are assimilated every day into modern Numerical Weather Prediction (NWP) systems. This makes it difficult to estimate the impact of a new observing system with Observing System Experiments (OSEs), because so much information is already provided by existing observations. In addition, the large volume of data also prevents monitoring the impact of each assimilated observation with OSEs. We demonstrate in this study how effectively Ensemble Forecast Sensitivity to Observations (EFSO) can help to monitor and improve the impact of observations on the analyses and forecasts. In the first part, we show how to identify detrimental observations within each observing system using EFSO, a procedure termed Proactive Quality Control (PQC). Withdrawing these detrimental observations leads to improved analyses and subsequent 5-day forecasts, which also serves as a verification of EFSO. We show the feasibility of PQC for operational implementation. In the second part, for MODIS polar winds, one of the contributors of detrimental observations, we find that a positive u-component of the innovation is associated with detrimental impacts, whereas negative u-innovations are generally associated with beneficial impacts. Other biases associated with height and other variables were also found when the net impact is detrimental. By contrast, such biases do not appear in systems using a similar cloud-drift wind algorithm, such as GOES satellite winds. These findings provide guidance toward improving the system and give a clear example of efficiently monitoring observations and testing new observing systems using EFSO. The potential of using EFSO to efficiently improve both observations and analyses is clearly shown in this study.
Estimation of the Efficiency of Partnerships Between Large and Small Businesses
Directory of Open Access Journals (Sweden)
Олег Васильевич Чабанюк
2014-05-01
Full Text Available In this article, based on the definition of key factors and their components, an algorithm is developed of consistent, logically connected stages for the transition from a traditional enterprise to an innovation-type enterprise built on intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: based on expert determination of the importance of the model parameters that ensure the effectiveness of intrapreneurship, and using methods of qualimetric modeling of expert estimates, an "intrapreneurship efficiency" score is calculated. According to the author's projections, the optimum level of this indicator should exceed 0.5, though it should be noted that this level is typically achievable only in the second or third year of an intrapreneurial structure's existence. The proposed method was tested in practice and can be used for the formation of intrapreneurship in large and medium-sized enterprises as one of the methods of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
Efficient Spectral Power Estimation on an Arbitrary Frequency Scale
Directory of Open Access Journals (Sweden)
F. Zaplata
2015-04-01
Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not be suitable for every system. For example, audio and speech analysis needs a logarithmic frequency scale due to the characteristics of the human ear. Fast Fourier Transform algorithms cannot efficiently give the desired results in this case, and modified techniques have to be used. In the following text, a simple technique using the Goertzel algorithm that allows the evaluation of power spectra on an arbitrary frequency scale is introduced. Due to its simplicity, the algorithm suffers from imperfections, which are discussed and partially solved in this paper. The implementation in real systems and the impact of quantization errors proved to be critical and have to be dealt with in special cases. A simple method dealing with the quantization error is also introduced. Finally, the proposed method is compared to other methods in terms of its computational demands and potential speed.
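The key property being used is that the Goertzel recursion evaluates spectral power at any single frequency, whether or not it lies on the FFT bin grid. A minimal sketch, evaluating power on a logarithmic grid (sampling rate, test tone, and grid limits are arbitrary assumptions):

```python
import numpy as np

def goertzel_power(x, f, fs):
    # Goertzel recursion at an arbitrary analysis frequency f (Hz)
    w = 2 * np.pi * f / fs
    coeff = 2 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2   # squared magnitude

fs, n = 8000.0, 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440.0 * t)                # a 440 Hz test tone

# Evaluate power on a logarithmic frequency scale, as audio analysis needs
freqs = np.logspace(np.log10(100), np.log10(2000), 20)
powers = [goertzel_power(x, f, fs) for f in freqs]
print(freqs[int(np.argmax(powers))])             # grid point nearest the tone
```

Each frequency costs one O(n) pass, so for a handful of arbitrarily placed frequencies this beats computing a full FFT and then interpolating between its fixed bins.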
Efficient estimation of smooth distributions from coarsely grouped data.
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-07-15
Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm. Optimal values of the smoothing parameter are chosen by minimizing Akaike's Information Criterion. We demonstrate the performance of this method in a simulation study and provide several examples that illustrate the approach. Wide, open-ended intervals can be handled properly. The method can be extended to the estimation of rates when both the event counts and the exposures to risk are grouped.
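A compact sketch of the penalized composite link model: latent fine-grid rates are observed only through grouped Poisson means, and a second-difference penalty supplies the smoothness assumption. The fitting loop below is damped Fisher scoring, a simplified stand-in for the paper's iteratively reweighted least-squares; the penalty weight and toy data are illustrative assumptions.

```python
import numpy as np

def pclm(y, C, lam=10.0, iters=200, step=0.5):
    # Fine-grid rates gamma = exp(eta); grouped means mu = C @ gamma;
    # second-difference penalty lam * ||D eta||^2 enforces smoothness.
    m = C.shape[1]
    D = np.diff(np.eye(m), n=2, axis=0)
    P = lam * (D.T @ D)
    eta = np.full(m, np.log(y.sum() / m))
    for _ in range(iters):
        gamma = np.exp(eta)
        mu = C @ gamma
        Q = C * gamma                                   # d(mu)/d(eta)
        grad = Q.T @ (y / mu - 1.0) - P @ eta           # penalized score
        info = Q.T @ ((1.0 / mu)[:, None] * Q) + P      # expected information
        eta = eta + step * np.linalg.solve(info, grad)  # damped scoring step
    return np.exp(eta)

# Toy example: a smooth bump observed only as four coarse bins of width 5
m = 20
true = 50.0 * np.exp(-0.5 * ((np.arange(m) - 9.5) / 4.0) ** 2)
C = np.kron(np.eye(4), np.ones((1, 5)))                 # grouping matrix
y = C @ true
gamma_hat = pclm(y, C)
print(np.round(C @ gamma_hat, 1))                       # reproduces bin totals
```

The ungrouped estimate distributes each bin's count smoothly across its five fine cells while still matching the observed bin totals, which is the essence of the method.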
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). It puts forward the hypothesis that including multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based on a synthesis of approaches to the evaluation of projects implemented in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effects arising during project implementation. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed, based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and traces the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to now. It is shown that SAMs are widely used worldwide for a wide range of applications, primarily to assess the impact of various exogenous factors on a regional economy. In order to refine the estimates of multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activity (OKVED). This step allows the particular characteristics of the industry of the estimated investment project to be considered. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
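The multiplier logic underlying a social accounting matrix can be sketched with a toy Leontief input-output calculation; the coefficients below are illustrative, not the Sverdlovsk-region matrix.

```python
import numpy as np

# Toy 3-sector structure: A[i, j] is sector i's input needed per unit of
# sector j's output (illustrative coefficients only).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
leontief_inverse = np.linalg.inv(np.eye(3) - A)

# Multiplier effect of a project demanding 100 units from sector 0
demand_shock = np.array([100.0, 0.0, 0.0])
total_output = leontief_inverse @ demand_shock
print(round(total_output.sum(), 1))   # exceeds 100: the indirect effect
```

The gap between the direct demand (100) and the total induced output is exactly the indirect (multiplicative) effect that the article argues should enter PPP project appraisal.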
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore......, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...
On efficiency of some ratio estimators in double sampling design ...
African Journals Online (AJOL)
In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2010-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2009-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of analytic density under random censorship
Belitser, E.
1996-01-01
The nonparametric minimax estimation of an analytic density at a given point, under random censorship, is considered. Although the problem of estimating density is known to be irregular in a certain sense, we make some connections relating this problem to the problem of estimating smooth
Efficient estimation of the partly linear additive Cox model
Huang, Jian
1999-01-01
The partly linear additive Cox model is an extension of the (linear) Cox model and allows flexible modeling of covariate effects semiparametrically. We study asymptotic properties of the maximum partial likelihood estimator of this model with right-censored data using polynomial splines. We show that, with a range of choices of the smoothing parameter (the number of spline basis functions) required for estimation of the nonparametric components, the estimator of the finite-d...
Efficient collaborative sparse channel estimation in massive MIMO
Masood, Mudassir
2015-08-12
We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
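One standard way to exploit a shared sparse support across antennas is simultaneous orthogonal matching pursuit (SOMP), where correlations are aggregated over channels before selecting each atom. The sketch below is a generic SOMP, not necessarily the authors' coordinated algorithm; problem sizes and noise level are arbitrary assumptions.

```python
import numpy as np

def somp(Phi, Y, k):
    # Simultaneous OMP: all channels (columns of Y) share one sparse support
    residual, support = Y.copy(), []
    coeffs = None
    for _ in range(k):
        # Pick the atom with the largest aggregate correlation across channels
        scores = np.abs(Phi.conj().T @ residual).sum(axis=1)
        scores[support] = 0.0
        support.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coeffs
    return sorted(support), coeffs

rng = np.random.default_rng(4)
n, m, channels, k = 40, 128, 4, 3      # measurements, taps, antennas, sparsity
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
true_support = [7, 50, 99]
H = np.zeros((m, channels))
H[true_support, :] = rng.standard_normal((3, channels))   # common support
Y = Phi @ H + 0.01 * rng.standard_normal((n, channels))

support, _ = somp(Phi, Y, k)
print(support)                          # recovers the shared tap positions
```

Pooling correlations across the four channels makes the support decision far more reliable than running OMP per antenna, which mirrors the coordination-with-minimal-sharing idea in the abstract.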
Control grid motion estimation for efficient application of optical flow
Zwart, Christine M
2012-01-01
Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable compu
System of Indicators in Social and Economic Estimation of the Regional Energy Efficiency
Directory of Open Access Journals (Sweden)
Ivan P. Danilov
2012-10-01
Full Text Available The article offers a social and economic interpretation of energy efficiency and models a system of indicators for estimating the regional social and economic efficiency of energy resource use.
Estimating allocative efficiency in port authorities with demand uncertainty
Hidalgo, Soraya; Núñez-Sánchez, Ramón
2012-01-01
This paper aims to analyse the impact of port demand variability on the allocative efficiency of Spanish port authorities during the period 1986-2007. From a distance function model we can obtain a measure of allocative efficiency using two different approaches: error components approach and parametric approach. We model the variability of port demand from the cyclical component of traffic series by applying the Hodrick-Prescott filter. The results show that the inclusion of variability does ...
pathChirp: Efficient Available Bandwidth Estimation for Network Paths
Energy Technology Data Exchange (ETDEWEB)
Cottrell, Les
2003-04-30
This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp requires neither synchronized nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
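The exponential flight pattern of a chirp can be sketched as follows: shrinking the inter-send gap by a constant factor per packet makes each packet probe a geometrically increasing instantaneous rate, so one short train covers a wide rate range. Packet size, spread factor, and initial gap are arbitrary assumptions.

```python
# Exponential "chirp" probe spacing: the instantaneous probing rate grows by
# a factor gamma per packet, covering a wide rate range with few packets.
packet_size = 1500 * 8          # bits (assumed probe packet size)
gamma, first_gap = 1.2, 0.012   # spread factor and initial inter-send gap (s)

gaps = [first_gap / gamma ** i for i in range(10)]
rates = [packet_size / g / 1e6 for g in gaps]   # Mbps probed by each gap
print([round(r, 2) for r in rates])
```

The receiver looks for the point in the chirp where queuing delays start to grow (self-induced congestion); the rate probed at that point is the available-bandwidth estimate, using only interarrival times and no clock synchronization.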
Estimation procedure of the efficiency of the heat network segment
Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.
2017-07-01
An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach: the energy-efficiency function of a heat network segment is evaluated by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criteria dependences were derived for evaluating the efficiency of a given heat network segment and for finding the parameters for optimal control of heat supply to remote users. In general, the efficiency function of the heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can be solved as well: the required heating-agent flow rate and temperature may be found from a specified segment efficiency and ambient temperature, and requirements for heat insulation and pipe diameters may be formulated. The calculation results were obtained in a strictly analytical form, which allows the derived functional dependences to be examined for extremums (maximums) under given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the heating-agent flow rate and pipe temperatures can be changed, and for a network under design, where the material parameters of the network can still be modified. The procedure allows the diameter and length of the pipes, types of insulation, etc. to be refined. The length of the pipes may be considered as the independent parameter for calculations; optimization of this parameter is made in
Energy-Efficient Channel Estimation in MIMO Systems
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends creates additional cost in terms of hardware and energy consumption. For coherent detection, as well as for optimizations such as water-filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and the receiver, the channel estimation problem is more complicated and costly than in a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy, spent on channel estimation in MIMO systems. We present a novel method for minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes transmission of training symbols, storage of those symbols at the receiver, and channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
Efficient estimation of overflow probabilities in queues with breakdowns
Kroese, Dirk; Nicola, V.F.
1999-01-01
Efficient importance sampling methods are proposed for the simulation of a single server queue with server breakdowns. The server is assumed to alternate between the operational and failure states according to a continuous time Markov chain. Both continuous (fluid flow) and discrete (single
Agent-based Security and Efficiency Estimation in Airport Terminals
Janssen, S.A.M.
We investigate the use of an Agent-based framework to identify and quantify the relationship between security and efficiency within airport terminals. In this framework, we define a novel Security Risk Assessment methodology that explicitly models attacker and defender behavior in a security
Efficiency of wear and decalcification technique for estimating the ...
Indian Academy of Sciences (India)
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was ...
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
Efficient Eye Typing with 9-direction Gaze Estimation
Zhang, Chi; Yao, Rui; Cai, Jinpeng
2017-01-01
Vision based text entry systems aim to help disabled people achieve text communication using eye movement. Most previous methods have employed an existing eye tracker to predict gaze direction and design an input method based upon that. However, these methods can result in eye tracking quality becoming easily affected by various factors and lengthy amounts of time for calibration. Our paper presents a novel efficient gaze based text input method, which has the advantage of low cost and robust...
Efficient estimation of burst-mode LDA power spectra
DEFF Research Database (Denmark)
Velte, Clara Marika; George, William K
2010-01-01
The estimation of power spectra from LDA data provides signal processing challenges for fluid dynamicists for several reasons. Acquisition is dictated by randomly arriving particles which cause the signal to be highly intermittent. This both creates self-noise and causes the measured velocities to be biased due to the statistical dependence on the velocity and when the particle arrives. This leads to incorrect moments when the data are evaluated by arithmetically averaging. The signal can be interpreted correctly, however, by applying residence time weighting to all statistics, which eliminates... ...and correlations is included, as well as one regarding the statistical convergence of the spectral estimator for random sampling. Further, the basic representation of the burst-mode LDA signal has been revisited due to observations in recent years of particles not following the flow (e.g., particle clustering...
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases, during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Combining ability estimates of sulfate uptake efficiency in maize.
Motto, M; Saccomani, M; Cacco, G
1982-03-01
Plant root nutrient uptake efficiency may be expressed by the kinetic parameters Vmax and Km, as for normal enzymatic reactions. These parameters are apparently useful indices of the level of adaptation of genotypes to the nutrient conditions in the soil. Moreover, sulfate uptake capacity has been considered a valuable index for selecting superior hybrids characterized by both high grain yield and efficiency in nutrient uptake. Therefore, the purpose of this research was to determine combining ability for sulfate uptake in a diallel series of maize hybrids among five inbreds. Wide differences among the 20 single crosses were obtained for Vmax and Km. The general and specific combining ability mean squares were significant and important for each trait, indicating the presence of considerable amounts of both additive and nonadditive gene effects in the control of sulfate uptake. In addition, maternal and nonmaternal components of F1 reciprocal variation showed sizeable effects on all the traits considered. A relatively high correlation was also detected between Vmax and Km. However, both traits displayed enough variation to suggest that simultaneous improvement of Vmax and Km should be feasible. A further noteworthy finding of this study was the identification of one inbred line that was the best overall parent for improving both the affinity and velocity strategies of sulfate uptake.
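The kinetic parameters above are the Michaelis-Menten constants of the uptake reaction. As an illustration of how Vmax and Km can be recovered from uptake measurements, here is a sketch using the classical Lineweaver-Burk double-reciprocal fit; the abstract does not state the fitting procedure actually used, so both the method and the example numbers are assumptions.

```python
def lineweaver_burk(concentrations, rates):
    """Estimate uptake kinetics Vmax and Km from substrate concentrations S
    and uptake rates v via the Lineweaver-Burk linearisation
    1/v = (Km/Vmax)(1/S) + 1/Vmax, fitted by ordinary least squares."""
    xs = [1.0 / s for s in concentrations]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    vmax = 1.0 / intercept          # intercept is 1/Vmax
    km = slope * vmax               # slope is Km/Vmax
    return vmax, km
```

With noise-free data generated from Vmax = 10 and Km = 2, the fit recovers both parameters essentially exactly.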
Efficient Topology Estimation for Large Scale Optical Mapping
Elibol, Armagan; Garcia, Rafael
2013-01-01
Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
Efficient Estimation of Smooth Distributions From Coarsely Grouped Data
DEFF Research Database (Denmark)
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-01-01
Ungrouping binned data can be desirable for many reasons: bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and thus covers a lot of information... in the tail area. Age-group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes only that the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most... to the estimation of rates when both the event counts and the exposures to risk are grouped.
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Directory of Open Access Journals (Sweden)
Virginie F. Ruiz
2002-10-01
This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints, so the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented, and the results prove to be superior to those of the extended Kalman filter and of a class of nonlinear filters based on partitioning algorithms.
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records research involving the fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
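Among the algorithms the monograph covers, the three-step search (TSS) is the most classical. A minimal pure-Python sketch of TSS block matching follows; the frame layout, block size, and sum-of-absolute-differences (SAD) cost are illustrative assumptions, not the authors' implementation.

```python
def sad(ref, cur, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of the current
    frame at (bx, by) and the reference-frame block displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(n):
        for x in range(n):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")  # displaced block falls outside the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def three_step_search(ref, cur, bx, by, n=4, search_range=7):
    """Return the motion vector (dx, dy) minimising SAD using TSS: nine
    candidates around the current best, with the step size halved each round."""
    best = (0, 0)
    best_cost = sad(ref, cur, bx, by, 0, 0, n)
    step = (search_range + 1) // 2          # 4 -> 2 -> 1 for a range of 7
    while step >= 1:
        cx, cy = best
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cost = sad(ref, cur, bx, by, cx + dx, cy + dy, n)
                if cost < best_cost:
                    best, best_cost = (cx + dx, cy + dy), cost
        step //= 2
    return best
```

TSS visits at most 25 candidates instead of the 225 of a full search over a ±7 window, which is the kind of complexity reduction the VLSI architectures in the book exploit.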
Numerically efficient estimation of relaxation effects in magnetic particle imaging.
Rückert, Martin A; Vogel, Patrick; Jakob, Peter M; Behr, Volker C
2013-12-01
Current simulations of the signal in magnetic particle imaging (MPI) are based either on the Langevin function or on directly measuring the system function. The former completely ignores the influence of the finite relaxation times of magnetic particles, and the latter requires time-consuming reference scans with an existing MPI scanner; the resulting system function then applies only to a given tracer type and the properties of the applied scanning trajectory, requires separate reference scans for different trajectories, and does not allow simulating theoretical magnetic particle suspensions. The most accessible and accurate way to include relaxation effects in the signal simulation would be to use the Langevin equation. However, this is a very time-consuming approach because it calculates the stochastic dynamics of the individual particles and averages over large particle ensembles. In the current article, a numerically efficient way of approximating the averaged Langevin equation is proposed, which is much faster than the approach based on the Langevin equation because it directly calculates the averaged time evolution of the magnetization. The proposed simulation yields promising results: except for the case of small orthogonal offset fields, close agreement with the full but significantly slower simulation could be shown.
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling, Second Version
Kyungchul Song
2009-01-01
Nonrandom sampling schemes are often used in program evaluation settings to improve the quality of inference. This paper considers what we call treatment-based sampling, a type of standard stratified sampling where part of the strata are based on treatment status. This paper establishes semiparametric efficiency bounds for estimators of weighted average treatment effects and average treatment effects on the treated. This paper finds that adapting the efficient estimators of Hirano, Imbens, an...
Reinhard, S.; Lovell, C.A.K.; Thijssen, G.J.
2000-01-01
The objective of this paper is to estimate comprehensive environmental efficiency measures for Dutch dairy farms. The environmental efficiency scores are based on the nitrogen surplus, phosphate surplus and the total (direct and indirect) energy use of an unbalanced panel of dairy farms. We define
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Unlike existing research that assumes the availability of perfect channel state information (CSI) at the cooperative communication nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are considered, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) so that the destination can estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and by considering the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
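An EE metric of this kind is a ratio of delivered rate to total consumed power. A toy sketch of maximizing such a metric over a grid of transmit powers is shown below; the single idealised link and the gain, noise, and circuit-power numbers are made-up assumptions, and the paper's metric additionally folds the channel-estimation error into the SNR term and the pilot energy into the denominator.

```python
import math

def ee_metric(p, gain=1.0, noise=1.0, p_circuit=0.5, bandwidth=1.0):
    """Bits per joule for transmit power p on an idealised link:
    Shannon rate divided by transmit plus circuit power."""
    rate = bandwidth * math.log2(1 + p * gain / noise)
    return rate / (p + p_circuit)

def best_power(p_grid):
    """Pick the grid point with the highest energy efficiency."""
    return max(p_grid, key=ee_metric)
```

Because the rate grows only logarithmically while the consumed power grows linearly, the EE-optimal power is an interior point of the grid rather than the maximum power.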
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Directory of Open Access Journals (Sweden)
Archana V
2011-04-01
The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those estimators are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio- and product-type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real-life dataset. The ratio estimator using information on the population C.V. of the auxiliary variable emerges as the best estimator.
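To make the idea concrete, here is one generic ratio-type C.V. estimator that rescales the naive sample C.V. by the known population C.V. of an auxiliary variable. The paper's six estimators differ in the exact adjustment used, so this form is an assumption for illustration only.

```python
import statistics

def sample_cv(values):
    """Naive sample coefficient of variation: s / y-bar."""
    return statistics.stdev(values) / statistics.mean(values)

def ratio_type_cv(y_sample, x_sample, cv_x_pop):
    """Generic ratio-type estimator of the population C.V. of y: the naive
    estimate is rescaled by the known population C.V. of the auxiliary
    variable x over its sample counterpart, mirroring the classical
    ratio-estimator idea."""
    return sample_cv(y_sample) * (cv_x_pop / sample_cv(x_sample))
```

When the sample C.V. of the auxiliary variable happens to equal its population value, the adjustment factor is 1 and the ratio-type estimate coincides with the naive one.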
A novel estimating method for steering efficiency of the driver with electromyography signals
Liu, Yahui; Ji, Xuewu; Hayama, Ryouhei; Mizuno, Takahiro
2014-05-01
Existing research on steering efficiency mainly focuses on the mechanical efficiency of the steering system, with the aim of designing and optimizing the steering mechanism. In the development of assisted steering systems, and especially in the evaluation of their comfort, the steering efficiency of the driver's physiological output is usually not considered, because this physiological output is difficult to measure or estimate; the objective evaluation of steering comfort therefore cannot be conducted from a movement-efficiency perspective. To take a further step toward the objective evaluation of steering comfort, an estimation method for the steering efficiency of the driver was developed, based on the relationship between steering force and muscle activity. First, the steering forces in the steering wheel plane and the electromyography (EMG) signals of the primary muscles were measured. These primary muscles, whose functions in the steering maneuver were identified previously, are the muscles of the shoulder and upper arm that mainly produce the steering torque. Next, based on multiple regressions of the steering force and EMG signals, both the effective steering force and the total force capacity of the driver in the steering maneuver were calculated. Finally, the steering efficiency of the driver was estimated from the effective force and the total force capacity, which represent the driver's physiological output of the primary muscles. This research develops a novel method for estimating driver steering efficiency in terms of physiological output, including the estimation of both the steering force and the force capacity of the primary muscles from EMG signals, and it will help to evaluate steering comfort from an objective perspective.
Directory of Open Access Journals (Sweden)
Xiaoping Li
2013-01-01
We present an efficient algorithm based on the robust Chinese remainder theorem (CRT) to perform single-frequency determination from multiple undersampled waveforms. The optimal estimate of the common remainder in robust CRT, which plays an important role in the final frequency estimation, is first discussed. To avoid exhaustive search in the optimal estimation, we then provide an improved algorithm with the same performance but less computation. In addition, a sufficient and necessary condition for the robust estimation is proposed. Numerical examples are provided to verify the effectiveness of the proposed algorithm and related conclusions.
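The congruence structure that robust CRT exploits can be illustrated with a naive search: each undersampled waveform yields the frequency's residue modulo its (integer) sampling rate, and the frequency is the value consistent with all residues. The paper's contribution is precisely to avoid this exhaustive search with a closed-form robust estimate, so the sketch below shows only the underlying principle, with made-up moduli.

```python
def crt_frequency(remainders, moduli, f_max):
    """Naive search-based reconstruction: return the integer frequency below
    f_max whose residues modulo each sampling rate best match the measured
    (possibly noisy) remainders, using circular distance between residues."""
    best, best_err = None, float("inf")
    for f in range(f_max):
        err = 0.0
        for r, m in zip(remainders, moduli):
            d = abs((f % m) - r)
            err += min(d, m - d)   # residues live on a circle of size m
        if err < best_err:
            best, best_err = f, err
    return best
```

With pairwise coprime moduli 9, 10, 11, residues determine the frequency uniquely modulo their product 990, so a 457 Hz tone is recovered from the three aliased observations.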
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of the challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Yan, Ying; Zhou, Haibo; Cai, Jianwen
2017-09-01
The case-cohort study design is an effective way to reduce the cost of assembling and measuring expensive covariates in large cohort studies. Recently, several weighted estimators were proposed for the case-cohort design when multiple diseases are of interest. However, these existing weighted estimators do not make effective use of the covariate information available in the whole cohort. Furthermore, the auxiliary information for the expensive covariates, which may be available in such studies, cannot be incorporated directly. In this article, we propose a class of updated estimators. We show that, by making effective use of the whole-cohort information, the proposed updated estimators are guaranteed to be asymptotically more efficient than the existing weighted estimators. Furthermore, they can flexibly incorporate the auxiliary information whenever it is available. The advantages of the proposed updated estimators are demonstrated in simulation studies and a real data analysis. © 2017, The International Biometric Society.
2015-01-01
The recent availability of high-frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high-frequency data are generally affected by microstructure noise. We address this issue by using the Fourier estimator of instantaneous volatility introduced by Malliavin and Mancino (2002). We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis of high-frequency data (U.S. S&P 500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617
Estimation of energy efficiency of the process of osmotic dehydration of pork meat
Filipović, Vladimir; Ćurčić, Biljana; Nićetin, Milica; Knežević, Violeta; Lević, Ljubinko; Pezo, Lato
2014-01-01
Osmotic dehydration is a low-energy process, since water is removed from the raw material without a phase change. The goal of this research is to estimate the energy efficiency of the process of osmotic dehydration of pork meat at three different process temperatures, in three different osmotic solutions, and in co- and counter-current processes. In order to calculate the energy efficiency of the process of osmotic dehydration, convective drying was used as the base process for comparison. Levels of the sa...
Estimating the carbon sequestration efficiency of ocean fertilization in ocean models
DeVries, T. J.; Primeau, F. W.; Deutsch, C. A.
2012-12-01
Fertilization of marine biota by direct addition of limiting nutrients, such as iron, has been widely discussed as a possible means of enhancing the oceanic uptake of anthropogenic CO2. Several startup companies have even proposed to offer carbon credits in exchange for fertilizing patches of ocean. However, spatial variability in ocean circulation and air-sea gas exchange causes large regional differences in the efficiency with which carbon can be sequestered in the ocean in response to ocean fertilization. Because of the long timescales associated with carbon sequestration in the ocean, this efficiency cannot be derived from field studies but must be estimated using ocean models. However, due to the computational burden of simulating the oceanic uptake of CO2 in response to ocean fertilization, modeling studies have focused on estimating the carbon sequestration efficiency at only a handful of locations throughout the ocean. Here we present a new method for estimating the carbon sequestration efficiency of ocean fertilization in ocean models. By appropriately linearizing the CO2 system chemistry, we can use the adjoint ocean transport model to efficiently probe the spatial structure of the sequestration efficiency. We apply the method to a global data-constrained ocean circulation model to estimate global patterns of sequestration efficiency at a horizontal resolution of 2 degrees. This calculation produces maps showing where carbon sequestration by ocean fertilization will be most effective. We also show how to rapidly compute the sensitivity of the carbon sequestration efficiency to the spatial pattern of the production and remineralization anomalies produced by ocean fertilization, and we explore these sensitivities in the data-constrained ocean circulation model.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Directory of Open Access Journals (Sweden)
Nabeel Hussain
2014-04-01
This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. Maximum likelihood estimation was used to fit a Cobb-Douglas frontier production function. The analysis revealed an estimated mean technical efficiency of 77 percent, indicating that total output can be increased further with efficient use of resources and technology. The estimated gamma value was 0.93, which indicates that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, with age having a positive and education a negative relationship with garlic output. This study suggests that, in order to increase garlic production by taking advantage of farmers' high efficiency level, the government should invest in research and development to introduce good-quality seeds that raise garlic productivity, and should organize training programs to educate farmers about garlic production.
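A stochastic frontier fitted by maximum likelihood, as used in the study, takes some machinery to reproduce; corrected OLS (COLS) gives a much simpler deterministic-frontier approximation of technical efficiency scores for a one-input Cobb-Douglas model. The sketch below uses that simplification, and the data in the test are made up; it is not the study's estimator.

```python
import math

def cols_efficiency(x, y):
    """Corrected OLS (COLS) sketch for a one-input Cobb-Douglas frontier
    ln y = a + b ln x: fit OLS in logs, shift the intercept up to the best
    observation so the line envelops the data, and read off technical
    efficiency as exp(residual - max residual), in (0, 1]."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (w - my) for u, w in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a0 = my - b * mx
    resid = [w - (a0 + b * u) for u, w in zip(lx, ly)]
    shift = max(resid)                      # move the line to the frontier
    return [math.exp(r - shift) for r in resid]
```

By construction, the best-performing unit scores exactly 1 and every other unit scores strictly between 0 and 1, which mirrors how frontier-based efficiency scores are read.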
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first passage probabilities of high-dimensional nonlinear systems, based on asymptotic estimation of low probabilities, is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements..., the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and is exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
Sparse and Efficient Estimation for Partial Spline Models with Increasing Dimension
Zhang, Hao Helen; Shang, Zuofeng
2014-01-01
We consider model selection and estimation for partial spline models and propose a new regularization method in the context of smoothing splines. The regularization method has a simple yet elegant form, consisting of roughness penalty on the nonparametric component and shrinkage penalty on the parametric components, which can achieve function smoothing and sparse estimation simultaneously. We establish the convergence rate and oracle properties of the estimator under weak regularity conditions. Remarkably, the estimated parametric components are sparse and efficient, and the nonparametric component can be estimated with the optimal rate. The procedure also has attractive computational properties. Using the representer theory of smoothing splines, we reformulate the objective function as a LASSO-type problem, enabling us to use the LARS algorithm to compute the solution path. We then extend the procedure to situations when the number of predictors increases with the sample size and investigate its asymptotic properties in that context. Finite-sample performance is illustrated by simulations. PMID:25620808
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators
2012-09-27
... existing literature, we adopt the perspective that it is not enough for an estimator to be asymptotically efficient in the number of copies for fixed d. ...
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves, first, the development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theory. Simulations and two data examples illustrate both the robustness and the efficiency of the proposed methods.
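The shrinkage idea, combining a robust estimator with a more precise but model-dependent one via a data-driven weight, can be sketched schematically. The paper's actual weights come from asymptotic theory and penalized regression, so the weight formula below is only an illustrative empirical-Bayes-style stand-in.

```python
def shrinkage_combine(theta_robust, theta_model, var_robust):
    """Schematic shrinkage of a robust estimate toward a more precise
    model-based estimate: the weight on the robust estimate grows with the
    apparent bias of the model-based one relative to the sampling noise of
    the robust one."""
    d2 = (theta_robust - theta_model) ** 2
    w = d2 / (d2 + var_robust)      # ~0 when the estimates agree, ~1 otherwise
    return w * theta_robust + (1 - w) * theta_model
```

When the two estimates agree, the precise model-based value is returned; when they disagree strongly relative to the noise, the combination falls back on the robust value, which is the adaptive behaviour the abstract describes.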
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali
2016-11-08
Recent developments in sensors, global positioning system devices and smartphones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in memory, the high arrival speed, and dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
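The core of such an approach, maintaining density values at grid points that are updated incrementally as samples arrive and interpolating between them at query time, can be sketched as follows. The fixed grid and fixed Gaussian bandwidth are simplifications of my own; KDE-Track adapts both.

```python
import math

class GridKDE:
    """Minimal sketch of a grid-based streaming KDE: each arriving sample's
    Gaussian kernel is folded into fixed grid points as a running average,
    and queries interpolate linearly between neighbouring grid points."""

    def __init__(self, lo, hi, m=101, bandwidth=0.3):
        self.xs = [lo + i * (hi - lo) / (m - 1) for i in range(m)]
        self.d = [0.0] * m          # current density estimate at grid points
        self.h = bandwidth
        self.n = 0

    def update(self, sample):
        """Fold one new stream sample into the grid in O(m) time."""
        self.n += 1
        norm = self.h * math.sqrt(2 * math.pi)
        for i, x in enumerate(self.xs):
            k = math.exp(-0.5 * ((x - sample) / self.h) ** 2) / norm
            self.d[i] += (k - self.d[i]) / self.n   # incremental mean

    def density(self, x):
        """Linear interpolation between the two neighbouring grid points."""
        step = self.xs[1] - self.xs[0]
        i = min(max(int((x - self.xs[0]) / step), 0), len(self.xs) - 2)
        t = (x - self.xs[i]) / step
        return (1 - t) * self.d[i] + t * self.d[i + 1]
```

Each update costs O(m) regardless of how many samples have been seen, which is the constant-memory, linear-time property the abstract highlights.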
Directory of Open Access Journals (Sweden)
Roberto Gismondi
2014-01-01
In this context, assuming a sample survey framework and a model-based approach, attention has been focused on the main features of the optimal prediction strategy for a population mean, which requires knowledge of some model parameters and functions that are normally unknown. In particular, a wrong specification of the model individual variances may lead to a serious loss of efficiency of the estimates. For this reason, we have proposed some techniques for the estimation of model variances which, instead of being set equal to given a priori functions, can be estimated from historical data concerning past survey occasions. A time series of past observations is almost always available, especially in a longitudinal survey context. The usefulness of the proposed technique has been tested empirically on the quarterly wholesale trade survey carried out by ISTAT (the Italian National Statistical Institute) in the period 2005-2010. In this framework, the problem consists in minimising the magnitude of revisions, given by the differences between preliminary estimates (based on the sub-sample of quick respondents) and final estimates (which take late respondents into account as well). The main results show that estimating model variances from historical data leads to efficiency gains that cannot be neglected. This outcome was confirmed by a further exercise based on 1000 random replications of late responses.
Directory of Open Access Journals (Sweden)
Bilqis Bolanle Amole,
2016-01-01
Health care services in Nigerian teaching hospitals have been considered less than desirable. At the same time, studies that properly apply models to explain the factors influencing the efficiency of health care delivery are limited. This study therefore deployed Data Envelopment Analysis (DEA) to estimate health care efficiency in six public teaching hospitals located in southwest Nigeria. To do this, the study gathered secondary data from the annual statistical returns of six public teaching hospitals in southwest Nigeria, spanning five years (2010-2014). The data collected were analysed using descriptive and inferential statistical tools. The inferential tools included DEA, with the aid of DEAP software version 2.1, and a Tobit model, with the aid of STATA version 12.0. The results revealed that the teaching hospitals in southwest Nigeria were not fully efficient: the average scale inefficiency was estimated at approximately 18%. Results from the Tobit estimates showed that insufficient numbers of professional health workers (especially doctors, pharmacists, laboratory technicians and engineers) and insufficient bed space for patient use were responsible for the observed inefficiency in health care delivery in southwest Nigeria. This study has implications for decisions on effective monitoring of the entire health system towards enhancing the quality of health care service delivery, which would enhance health system efficiency.
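For the special case of one input and one output, the CCR (constant-returns-to-scale) DEA efficiency score reduces to comparing each unit's output/input ratio with the best observed ratio, which makes the idea easy to sketch. The hospital study uses multiple inputs and outputs and therefore needs the full linear program (as solved by DEAP); the numbers in the test are made up.

```python
def dea_ccr_single(inputs, outputs):
    """DEA efficiency under constant returns to scale for one input and one
    output: the CCR linear program collapses to each unit's output/input
    ratio divided by the best observed ratio, giving scores in (0, 1]."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A unit scoring 1 lies on the efficient frontier; a score of 0.5 means the unit produces half the output per unit of input that the best peer achieves.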
LocExpress: a web server for efficiently estimating expression of novel transcripts.
Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge
2016-12-22
The temporally and spatially specific expression pattern of a transcript across multiple tissues and cell types can provide key clues about its function. While several gene atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e. novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating the expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper around an RNA-Seq quantification algorithm, LocExpress reduces the time cost by restricting abundance estimation calls to the minimum spanning bundle region of the input transcripts. For a given novel gene model, this local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model that is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, in estimation accuracy for complex density structures in data streams, computing time and memory usage. KDE-Track is also shown to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
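The published KDE-Track algorithm is not reproduced here, but the core idea in the abstract (a kernel model maintained on a grid, incrementally updated per arriving point and queried by interpolation) can be sketched as follows; the grid size, bandwidth and sample values are illustrative assumptions:

```python
import math

class GridKDE:
    """Incrementally updated KDE evaluated on a fixed grid, with linear
    interpolation between grid points (a simplified sketch of the
    grid-plus-interpolation idea, not the published KDE-Track algorithm)."""

    def __init__(self, lo, hi, m, bandwidth):
        self.grid = [lo + i * (hi - lo) / (m - 1) for i in range(m)]
        self.f = [0.0] * m
        self.n = 0
        self.h = bandwidth

    def _kernel(self, u):
        return math.exp(-0.5 * (u / self.h) ** 2) / (self.h * math.sqrt(2 * math.pi))

    def update(self, x):
        """Running-average update: O(m) per point, independent of n."""
        self.n += 1
        for i, g in enumerate(self.grid):
            self.f[i] += (self._kernel(g - x) - self.f[i]) / self.n

    def density(self, x):
        """Linear interpolation between the two bracketing grid nodes."""
        step = self.grid[1] - self.grid[0]
        i = min(max(int((x - self.grid[0]) / step), 0), len(self.grid) - 2)
        t = (x - self.grid[i]) / step
        return (1 - t) * self.f[i] + t * self.f[i + 1]

kde = GridKDE(-5.0, 5.0, 101, bandwidth=0.5)
for x in [-0.2, 0.1, 0.0, 0.3, -0.1, 0.2, 0.05, -0.05]:
    kde.update(x)
```

Because each arriving point touches only the grid (not all past samples), the per-point cost stays constant over the stream, which is the linear-complexity property the abstract claims for the full method.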
Program Potential: Estimates of Federal Energy Cost Savings from Energy Efficient Procurement
Energy Technology Data Exchange (ETDEWEB)
Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-09-17
In 2011, energy used by federal buildings cost approximately $7 billion. Reducing federal energy use could help address several important national policy goals, including: (1) increased energy security; (2) lowered emissions of greenhouse gases and other air pollutants; (3) increased return on taxpayer dollars; and (4) increased private sector innovation in energy efficient technologies. This report estimates the impact of efficient product procurement on reducing the amount of wasted energy (and, therefore, wasted money) associated with federal buildings, as well as on reducing the needless greenhouse gas emissions associated with these buildings.
Efficient optimal joint channel estimation and data detection for massive MIMO systems
Alshamary, Haider Ali Jasim
2016-08-15
In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm over suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm that efficiently achieves GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Directory of Open Access Journals (Sweden)
Cristina Mihaela VLAD
2013-12-01
Full Text Available At present there is concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now oriented towards the development of economic models integrated into software interfaces that can improve technical and financial management. The objective of this paper is therefore to present an estimation and evaluation model designed to increase a farmer's ability to measure the costs of production activities by utilizing informatic systems.
Gutierrez, Mauricio; Brown, Kenneth
2015-03-01
Classical simulations of noisy stabilizer circuits are often used to estimate the threshold of a quantum error-correcting code (QECC). It is common to model the noise as a depolarizing Pauli channel. However, it is not clear how sensitive a code's threshold is to the noise model, and whether or not a depolarizing channel is a good approximation for realistic errors. We have shown that, at the physical single-qubit level, efficient and more accurate approximations can be obtained. We now examine the feasibility of employing these approximations to obtain better estimates of a QECC's threshold. We calculate the level-1 pseudo-threshold for the Steane [[7,1,3]] code.
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used, based on estimating equations that use second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation.
Double-Layer Compressive Sensing Based Efficient DOA Estimation in WSAN with Block Data Loss.
Sun, Peng; Wu, Liantao; Yu, Kai; Shao, Huajie; Wang, Zhi
2017-07-22
Accurate information acquisition is of vital importance for wireless sensor array network (WSAN) direction of arrival (DOA) estimation. However, due to the lossy nature of low-power wireless links, data loss, especially block data loss induced by adopting a large packet size, has a catastrophic effect on DOA estimation performance in WSAN. In this paper, we propose a double-layer compressive sensing (CS) framework to eliminate the hazards of block data loss and achieve highly accurate and efficient DOA estimation. In addition to modeling the random packet loss during transmission as a passive CS process, an active CS procedure is introduced at each array sensor to further enhance the robustness of transmission. Furthermore, to avoid the error propagation from signal recovery to DOA estimation in conventional methods, we propose a direct DOA estimation technique under the double-layer CS framework. Leveraging a joint frequency and spatial domain sparse representation of the sensor array data, the fusion center (FC) can directly obtain the DOA estimation results from the received data packets, skipping the signal recovery phase. Extensive simulations demonstrate that the double-layer CS framework eliminates the adverse effects induced by block data loss and yields superior DOA estimation performance in WSAN.
Energy Technology Data Exchange (ETDEWEB)
Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)
2001-03-01
This paper estimates the gasoline price elasticity of demand for automobile fuel efficiency in Korea, to examine indirectly whether the government policy of raising fuel prices is effective in inducing lower fuel consumption, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One advantage of this technique is that data for a single year, without variation in the price of gasoline, are sufficient for implementing the study. Moreover, the technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of the results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.
B. Bayram; GÜLER, O.; M. Yanar; O. Akbulut
2006-01-01
Body measurement, milk yield and body weight data were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relations between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between EFE and body measurements, indicating that taller, longer, deeper and especially heavier cows were not efficient...
On the estimation stability of efficiency and economies of scale in microfinance institutions
Bolli, Thomas; Anh Vo Thi
2012-01-01
This paper uses a panel data set of microfinance institutions (MFI) across the world to compare several identification strategies for cost efficiency and economies of scale. Concretely, we contrast non-parametric Data Envelopment Analysis (DEA) with Stochastic Frontier Analysis (SFA) and a distribution-free identification based on time-invariant heterogeneity estimates. Furthermore, we analyze differences in production functions across regions and investigate the relevance of accounting...
Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling
Kyungchul Song
2009-01-01
Nonrandom sampling schemes are often used in program evaluation settings to improve the quality of inference. This paper considers what we call treatment-based sampling, a type of standard stratified sampling in which some of the strata are based on treatments. The paper first establishes semiparametric efficiency bounds for estimators of weighted average treatment effects and average treatment effects on the treated. In doing so, it illuminates the role of information about the aggregate...
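The paper's semiparametric efficiency bounds are beyond a short example, but the basic weighted estimator such bounds benchmark can be illustrated. The sketch below is a plain inverse-probability-weighted average treatment effect estimate on simulated data with a known treatment probability (all numbers hypothetical; this is background, not the paper's estimator):

```python
import random

random.seed(0)

def ipw_ate(data):
    """Horvitz-Thompson style ATE: E[Y(1)] - E[Y(0)], weighting each
    observation by the inverse of its known treatment probability."""
    n = len(data)
    t1 = sum(y * t / p for y, t, p in data) / n
    t0 = sum(y * (1 - t) / (1 - p) for y, t, p in data) / n
    return t1 - t0

# Simulated study: covariate-dependent treatment probability,
# true treatment effect = 2.0.
data = []
for _ in range(20000):
    x = random.random()
    p = 0.25 + 0.5 * x            # known propensity, bounded in [0.25, 0.75]
    t = 1 if random.random() < p else 0
    y = 1.0 + x + 2.0 * t + random.gauss(0.0, 0.5)
    data.append((y, t, p))
```

An unweighted difference of means would be biased here because treated units have systematically higher x; the weighting removes that bias, and efficiency bounds of the kind the paper derives describe the smallest attainable variance for such estimators.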
Stephens, Alisa J.; Tchetgen Tchetgen, Eric J.; De Gruttola, Victor
2014-01-01
Semiparametric methods have been developed to increase efficiency of inferences in randomized trials by incorporating baseline covariates. Locally efficient estimators of marginal treatment effects, which achieve minimum variance under an assumed model, are available for settings in which outcomes are independent. The value of the pursuit of locally efficient estimators in other settings, such as when outcomes are multivariate, is often debated. We derive and evaluate semiparametric locally efficient estimators of marginal mean treatment effects when outcomes are correlated; such outcomes occur in randomized studies with clustered or repeated-measures responses. The resulting estimating equations modify existing generalized estimating equations (GEE) by identifying the efficient score under a mean model for marginal effects when data contain baseline covariates. Locally efficient estimators are implemented for longitudinal data with continuous outcomes and clustered data with binary outcomes. Methods are illustrated through application to AIDS Clinical Trial Group Study 398, a longitudinal randomized clinical trial that compared the effects of various protease inhibitors in HIV-positive subjects who had experienced antiretroviral therapy failure. In addition, extensive simulation studies characterize settings in which locally efficient estimators result in efficiency gains over suboptimal estimators and assess their feasibility in practice. Keywords: Clinical trials; Correlated outcomes; Covariate adjustment; Semiparametric efficiency. PMID: 24566369
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
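The weighted-sum limitation described above is easy to demonstrate numerically: a non-dominated point lying above the convex hull of the objective values is never selected by any weighted-sum scalarization. A minimal sketch with hypothetical bi-objective (minimization) points:

```python
def pareto_set(points):
    """Non-dominated points for bi-objective minimization."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def weighted_sum_choices(points, steps=101):
    """Solutions picked by minimizing w*f1 + (1-w)*f2 over a weight grid."""
    chosen = set()
    for k in range(steps):
        w = k / (steps - 1)
        chosen.add(min(points, key=lambda p: w * p[0] + (1 - w) * p[1]))
    return chosen

# (6, 6) is Pareto efficient but unsupported: it lies above the segment
# joining (0, 10) and (10, 0), so no weight vector ever selects it.
pts = [(0.0, 10.0), (10.0, 0.0), (6.0, 6.0), (8.0, 9.0)]
```

This is exactly the kind of unsupported Pareto-efficient solution that the weighted-sum approach misses and that the authors' heuristic is designed to recover (their actual algorithm operates on matrix permutations, not point sets).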
Shen, Biyao; Zeng, Lijiang; Li, Lifeng
2016-10-20
We present an in situ duty cycle control method that relies on monitoring the TM/TE diffraction efficiency ratio of the -1st transmitted order during photoresist development. Owing to the anisotropic structure of a binary grating, at an appropriately chosen angle of incidence, diffraction efficiencies in TE and TM polarizations vary with groove depth proportionately, while they vary with duty cycle differently. Thus, measuring the TM/TE diffraction efficiency ratio can help estimate the duty cycle during development while eliminating the effect of photoresist thickness uncertainty. We experimentally verified the feasibility of this idea by fabricating photoresist gratings with different photoresist thicknesses. The experimental results were in good agreement with theoretical predictions.
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for hidden thalamocortical properties. For efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model constrains its hardware implementation in a parallel structure, a cost-efficient model is proposed that reduces the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While the method is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, it can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing-angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods in both white and colored noise conditions.
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP and field capacity (FC, is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM and indirect estimation methods (pedotransfer functions - PTFs. The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Estimating returns to scale and scale efficiency for energy consuming appliances
Energy Technology Data Exchange (ETDEWEB)
Blum, Helcio [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group; Okwelum, Edson O. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group
2018-01-18
Energy-consuming appliances accounted for over 40% of energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforward to determine. The efficient allocation of capital and energy to provide an energy service has previously been approached, and solved with Data Envelopment Analysis (DEA), under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the set of economically efficient models of an appliance available in the market when constant returns to scale do not hold. We expand on that approach to estimate returns to scale for energy-using appliances. We further calculate DEA scale-efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions about returns to scale. We then apply this approach to evaluate dishwashers available in the U.S. market. Our results show that (a) for dishwashers, scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method helps increase consumers' choice of appliances.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. The SPIRITS tool developed by JRC is used as part of the crop monitoring workflow (vegetation anomaly detection, analysis of vegetation indexes and products) and for yield forecasting. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and products modelled with the WOFOST model is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A, "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
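At its simplest, the yield-forecasting step described above amounts to regressing regional yield on a seasonal biophysical predictor. The sketch below uses hypothetical FAPAR and yield numbers (not the Ukrainian data) with ordinary least squares:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical seasonal FAPAR averages vs. winter wheat yield (t/ha),
# one pair per past season.
fapar = [0.42, 0.48, 0.51, 0.55, 0.60, 0.63]
yield_t = [2.9, 3.3, 3.4, 3.7, 4.1, 4.2]
a, b = fit_line(fapar, yield_t)
forecast = a + b * 0.58          # forecast for a new season's FAPAR
```

The comparative question the study asks is then which predictor (LAI, FAPAR, FCOVER or NDVI) yields the smallest forecast error in such a regression, made well before harvest.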
Balanced Exploration and Exploitation Model search for efficient epipolar geometry estimation.
Goshen, Liran; Shimshoni, Ilan
2008-07-01
The estimation of the epipolar geometry is especially difficult when the putative correspondences include a low percentage of inlier correspondences and/or a large subset of the inliers is consistent with a degenerate configuration of the epipolar geometry that is totally incorrect. This work presents the Balanced Exploration and Exploitation Model Search (BEEM) algorithm, which works very well especially for these difficult scenes and handles the two problems in a unified manner. Its main features are: (1) balanced use of three search techniques: global random exploration, local exploration near the current best solution, and local exploitation to improve the quality of the model; (2) exploitation of available prior information to accelerate the search process; (3) use of the best model found to guide the search, escape from degenerate models and define an efficient stopping criterion; (4) a simple and efficient method to estimate the epipolar geometry from two SIFT correspondences; and (5) use of the locality-sensitive hashing (LSH) approximate nearest neighbour algorithm for fast generation of putative correspondences. When tested on real images, with or without degenerate configurations, the resulting algorithm yields high-quality estimates and achieves significant speedups compared with state-of-the-art algorithms.
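BEEM builds on the random-sampling paradigm of RANSAC; the generic loop that it balances and guides can be sketched for a toy line-fitting problem (the epipolar-geometry estimator itself is far more involved; the data, thresholds and iteration count here are illustrative):

```python
import random

def ransac_line(points, iters=200, tol=0.1, rng=random.Random(1)):
    """Generic RANSAC: repeatedly fit a line to a random minimal sample
    (2 points) and keep the model with the most inliers."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # vertical sample; skip
        b = (y2 - y1) / (x2 - x1)
        a = y1 - b * x1
        inliers = [(x, y) for x, y in points if abs(y - (a + b * x)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

rng = random.Random(0)
# 80% inliers on y = 1 + 2x with small noise, 20% gross outliers.
pts = [(i / 20, 1 + 2 * (i / 20) + rng.gauss(0, 0.02)) for i in range(40)]
pts += [(rng.uniform(0, 2), rng.uniform(-5, 5)) for _ in range(10)]
model, inliers = ransac_line(pts)
```

BEEM's contributions sit on top of this skeleton: smarter sampling from prior match quality, local refinement of the best model, and a stopping rule, rather than the fixed uniform sampling shown here.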
Han, Wenhua; Xu, Jun; Wang, Ping; Tian, Guiyun
2014-06-12
In this paper, an efficient managing particle swarm optimization (EMPSO) algorithm for high-dimensional problems is proposed to estimate defect profiles from magnetic flux leakage (MFL) signals. In the proposed EMPSO, a particle-pair model is built to strengthen the exchange of information among particles. For more efficient searching across different problem landscapes, a velocity updating scheme including three velocity updating models is also proposed. In addition, automatic particle selection for re-initialization is implemented to improve the chances of finding the optimal solution. The optimization results on six benchmark functions show that EMPSO performs well when optimizing 100-D problems. The defect simulation results demonstrate that the inversion technique based on EMPSO outperforms one based on the self-learning particle swarm optimizer (SLPSO), and the estimated profiles remain close to the desired profiles in the presence of low noise in the MFL signal. The results estimated from real MFL signals by the EMPSO-based inversion technique also indicate that the algorithm is capable of providing an accurate solution for the defect profile with real signals. Both the simulation and experiment results show that the computing time of the EMPSO-based inversion technique is 20%-30% lower than that of the SLPSO-based technique.
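EMPSO's particle-pair model and multi-model velocity updates are not reproduced here; the sketch below is the baseline global-best PSO that such variants extend, shown minimizing a standard benchmark (sphere) function with conventional parameter choices:

```python
import random

def pso(f, dim, n=30, iters=300, rng=random.Random(0)):
    """Minimal global-best PSO (the baseline; EMPSO adds a particle-pair
    model, multiple velocity-update rules and re-initialization)."""
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

sphere = lambda x: sum(t * t for t in x)
best, val = pso(sphere, dim=5)
```

In an inversion setting like the paper's, `f` would instead measure the mismatch between the MFL signal predicted from a candidate defect profile and the observed signal.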
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)
2009-02-23
Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as Barabasi-Albert trees grow.
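Network efficiency in this literature is commonly the global efficiency: the average of inverse shortest-path distances over ordered node pairs. A BFS-based sketch under that assumed definition, checked on a small star tree where the value can be computed by hand:

```python
from collections import deque

def efficiency(adj):
    """Global efficiency: mean of 1/d(i, j) over ordered pairs (i != j),
    with shortest-path distances d obtained by BFS (unweighted graph)."""
    n = len(adj)
    total = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

# Star tree: hub 0 connected to leaves 1..3. Hub-leaf pairs are at
# distance 1 (6 ordered pairs), leaf-leaf pairs at distance 2 (6 pairs),
# so E = (6*1 + 6*0.5) / 12 = 0.75.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

The analytical expressions in the paper replace this exhaustive pairwise computation, which is what makes statements about the asymptotic behaviour of growing trees tractable.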
An efficient hidden variable approach to minimal-case camera motion estimation.
Hartley, Richard; Li, Hongdong
2012-12-01
In this paper, we present an efficient new approach for solving two-view minimal-case problems in camera motion estimation, most notably the so-called five-point relative orientation problem and the six-point focal-length problem. Our approach is based on the hidden variable technique used in solving multivariate polynomial systems. The resulting algorithm is conceptually simple: it involves a relaxation that replaces monomials in all but one of the variables, reducing the problem to the solution of sets of linear equations, together with a polynomial eigenvalue problem (polyeig). To efficiently find the polynomial eigenvalues, we make novel use of several numeric techniques, including quotient-free Gaussian elimination, Levinson-Durbin iteration, and a dedicated root-polishing procedure. We have tested the approach on different minimal cases and extensions, with satisfactory results. Both the executables and source code of the proposed algorithms are freely downloadable.
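The polyeig step mentioned above can be illustrated for the quadratic case: a quadratic polynomial eigenvalue problem reduces to a standard eigenproblem by companion linearization when the leading coefficient is invertible. This is a simplification for illustration (the authors apply dedicated numeric techniques on top of the basic reduction):

```python
import numpy as np

def quad_polyeig(A0, A1, A2):
    """Eigenvalues of (A0 + lam*A1 + lam^2*A2) v = 0 via companion
    linearization, assuming the leading coefficient A2 is invertible."""
    n = A0.shape[0]
    B0 = np.linalg.solve(A2, A0)        # A2^{-1} A0
    B1 = np.linalg.solve(A2, A1)        # A2^{-1} A1
    C = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-B0, -B1]])          # 2n x 2n companion matrix
    return np.linalg.eigvals(C)

# Scalar sanity check: lam^2 - 3*lam + 2 = 0 has roots 1 and 2.
A0 = np.array([[2.0]])
A1 = np.array([[-3.0]])
A2 = np.array([[1.0]])
vals = np.sort(quad_polyeig(A0, A1, A2).real)
```

For an n-by-n quadratic problem this yields 2n eigenvalues; in the minimal-case solvers, the real eigenvalues correspond to candidate solutions that are then polished and verified.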
A geostatistical approach to estimate mining efficiency indicators with flexible meshes
Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2014-05-01
Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations by treating the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable mineral grades to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells) and hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest; lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from providing an assembly of traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, strategic efficiency indicator maps have been developed under the specific criteria imposed by exploitation standards, to incorporate new extraction wells in new areas that would allow production to be maintained or improved. Results show that the uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in the formation thickness, which calls for flexible meshes and non-stationary stochastic processes.
Energy efficiency estimation of a steam powered LNG tanker using normal operating data
Directory of Open Access Journals (Sweden)
Sinha Rajendra Prasad
2016-01-01
Full Text Available A ship's energy efficiency performance is generally estimated by conducting special sea trials of a few hours under very controlled environmental conditions: calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship's Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, and comparing energy performance with the benchmark indicator is therefore not truly valid. In such situations a higher fuel consumption reading from the ship's fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low-load operations or incorrect trim [1]. Thus a better and more accurate approach to determining the energy efficiency of the ship attributable only to the main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship's fuel consumption [2]. In this paper the author identifies the parameters of a suitable filter to be applied to the daily report data of a typical LNG tanker of 33000 kW shaft power, to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship, which is compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is
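The filtering approach described can be sketched as a simple rule-based filter over daily report records, applied before computing a fuel-efficiency indicator. The field names and thresholds below are hypothetical, not the author's identified filter parameters:

```python
def filter_reports(reports, max_wind=4, min_speed=17.0, min_load=0.8):
    """Keep only daily reports taken close to benchmark conditions
    (hypothetical thresholds: Beaufort <= 4, near-service speed,
    high engine load)."""
    return [r for r in reports
            if r["wind_bf"] <= max_wind
            and r["speed_kn"] >= min_speed
            and r["load"] >= min_load]

def fuel_per_mile(reports):
    """Tonnes of fuel per nautical mile over the retained reports."""
    fuel = sum(r["fuel_t"] for r in reports)
    dist = sum(r["dist_nm"] for r in reports)
    return fuel / dist

# Hypothetical daily reports: two near-benchmark days, one heavy-weather
# day and one low-load day that should be filtered out.
reports = [
    {"wind_bf": 3, "speed_kn": 19.0, "load": 0.85, "fuel_t": 150.0, "dist_nm": 456.0},
    {"wind_bf": 7, "speed_kn": 15.0, "load": 0.90, "fuel_t": 170.0, "dist_nm": 360.0},
    {"wind_bf": 2, "speed_kn": 18.5, "load": 0.82, "fuel_t": 148.0, "dist_nm": 444.0},
    {"wind_bf": 4, "speed_kn": 12.0, "load": 0.40, "fuel_t": 90.0, "dist_nm": 288.0},
]
clean = filter_reports(reports)
```

The indicator computed on the filtered subset can then be tracked against the sea-trial benchmark, attributing any residual drift to hull fouling or machinery condition rather than to weather or operating mode.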
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient for a well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated at large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
SEBAL Model Using to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop
Zeyliger, Anatoly; Ermolaeva, Olga
2013-04-01
The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. A way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined the crop-water production function as the relation between yield and the total amount of water applied, whereas others have defined it as the relation between yield and seasonal evapotranspiration (ET). In the case of high irrigation water use efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then - assuming no significant change of soil moisture storage from the beginning of the growing season to its end - the volume of water applied may be roughly equal to ET. In the other case, of low irrigation water use efficiency, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season soil moisture being greater than start-of-season soil moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, some results of a case study on estimation of biomass and leaf area index (LAI) for irrigated alfalfa by the SEBAL algorithm are discussed. The field study was conducted with the aim of comparing ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia. The study was conducted during the vegetation period of 2012, from April till September. All the operations, from importing the data to calculation of the output data, were carried out by the eLEAF company and uploaded in Fieldlook web
Computationally efficient permutation-based confidence interval estimation for tail-area FDR
Directory of Open Access Journals (Sweden)
Joshua eMillstein
2013-09-01
Full Text Available Challenges of satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regard to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach to relate sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24-hour REM sleep (FDR = 0.15 (0.08, 0.26)). Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in wnt signaling, and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility of choosing a significance threshold based on tolerance for false discoveries and precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to define significance, such as the minimum estimated FDR, an option that is especially useful for weak effects, often observed in studies of complex diseases.
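The core tail-area estimator can be sketched as the ratio of the average permutation-null count to the observed count beyond a threshold. The `pi0 = 1` choice below is a conservative assumption; the paper's pi0 estimator and overdispersed-binomial CI are omitted from this sketch:

```python
def tail_area_fdr(observed, permuted, threshold, pi0=1.0):
    """Permutation estimate of tail-area FDR at a significance threshold.

    observed: test statistics from the real data.
    permuted: one list of statistics per permutation (the null reference).
    pi0: proportion of true nulls, fixed conservatively at 1.0 here.
    """
    r = sum(1 for s in observed if s >= threshold)   # observed positives
    if r == 0:
        return 0.0
    # average count of permutation (null) statistics beyond the threshold
    v = sum(sum(1 for s in p if s >= threshold) for p in permuted) / len(permuted)
    return min(1.0, pi0 * v / r)

obs = [5.0, 4.0, 0.1, 0.2]
perms = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.1, 0.2, 0.3]]
```

At a stringent threshold the null exceedance count is zero and the estimated FDR vanishes; at a lax threshold the null counts swamp the observed counts and the estimate saturates at 1.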
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Directory of Open Access Journals (Sweden)
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and confident estimates of the forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. The precision of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, was assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5-6.0 times as many observations for plot sizes of 700-1900 m2 to achieve the same precision as an inventory supported by ALS data.
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing artefacts that might arise by combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al., 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al., 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2 = 0.82, RMSE = 29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al., in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically
The use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm
Directory of Open Access Journals (Sweden)
Elsje L. Sisworo
2004-01-01
Full Text Available Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to the major nutrients N and P. This study was conducted to identify the most active root-zone of oil palm and to apply urea fertilizer at that zone in such soils to obtain high N-efficiency. A carrier-free KH2(32P)O4 solution was used to determine the active root-zone of oil palm by applying 32P around the plant in twenty holes. After the most active root-zone had been determined, urea was applied at this zone in one, two, and three splits, respectively. To estimate the N-fertilizer efficiency of urea, 15N-labelled ammonium sulphate was used, added at the same amount of 16 g 15N plant-1. This study showed that the most active root-zone was found at a 1.5 m distance from the plant-stem and at a 5 cm soil depth. For urea, the highest N-efficiency was obtained by applying it in two splits. The use of 32P was able to distinguish several root zones: 1.5 m - 2.5 m from the plant-stem at 5 cm and 15 cm soil depths. Urea placed at the most active root-zone, at a 1.5 m distance from the plant-stem and a 5 cm depth, in one, two, and three splits showed different N-efficiencies; the highest N-efficiency of urea was obtained when applying it in two splits at the most active root-zone.
Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki
2017-12-01
The purpose of this study was to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992). Concordance rates were higher with software assistance for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system gave reliably accurate bone age estimates and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.
An estimation of the column efficiency made by analyzing tailing peak profiles.
Miyabe, Kanji; Matsumoto, Yuko; Niwa, Yusuke; Ando, Nobuho; Guiochon, Georges
2009-11-20
It has been shown previously that most columns are not radially homogeneous but exhibit radial distributions of the mobile phase flow velocity and of the local efficiency. Both distributions are best approximated by a fourth-order polynomial, with the velocity in the column center being maximum for most packed columns and minimum for monolithic columns. These distributions may be an important source of tailing of elution peaks. The numerical calculation of elution peaks shows how peak tailing is related to the characteristics of these two distributions. An approach is proposed that permits estimation of the true efficiency and of the degree of column radial heterogeneity by inverting this calculation and using the experimentally measured tailing profiles of the elution peaks. The method was applied to two concrete cases of previously reported tailing peak profiles, which were analyzed with this new inverse approach. The results obtained prove its validity and demonstrate that this numerical method is effective for deriving the true column efficiency from experimental tailing profiles.
Directory of Open Access Journals (Sweden)
Liu Jianhua
2010-05-01
Full Text Available Background: DNA replication is a fundamental biological process during the S phase of cell division. It is initiated from several hundred origins along the whole chromosome, with different firing efficiencies (or frequencies of usage). Direct measurement of origin firing efficiency by techniques such as DNA combing is time-consuming and lacks the ability to measure all origins. Recent genome-wide studies of DNA replication approximated origin firing efficiency by indirectly measuring other quantities related to replication. However, these approximation methods do not reflect properties of origin firing and may lead to inappropriate estimations. Results: In this paper, we develop a probabilistic model - the Spanned Firing Time Model (SFTM) - to characterize the DNA replication process. The proposed model reflects current understanding of DNA replication: origins in an individual cell may initiate replication randomly within a time window, but the population average exhibits a temporal program, with some origins replicated early and others late. By estimating DNA origin firing time and fork moving velocity from genome-wide time-course S-phase copy number variation data, we could estimate the firing efficiency of all origins. The estimated firing efficiency correlates well with previous studies in fission and budding yeasts. Conclusions: The new probabilistic model enables sensitive identification of origins as well as genome-wide estimation of origin firing efficiency. We have successfully estimated the firing efficiencies of all origins in S. cerevisiae, S. pombe and human chromosomes 21 and 22.
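A Monte Carlo sketch of the firing-efficiency notion underlying the model: an origin fires actively if its (random) firing time precedes the arrival of a replication fork from a neighbouring origin, otherwise it is passively replicated. The positions, firing-time windows and fork velocity below are illustrative assumptions, not SFTM's estimation procedure:

```python
import random

def simulate_firing_efficiency(pos, windows, v=1.0, cells=20000, seed=1):
    """Fraction of simulated cells in which each origin fires actively.

    pos: origin positions along the chromosome.
    windows: (start, end) of each origin's uniform firing-time window.
    v: fork velocity. An origin is passively replicated if a fork from
    another origin reaches it before its own potential firing time.
    """
    random.seed(seed)
    n = len(pos)
    fired = [0] * n
    for _ in range(cells):
        t = [random.uniform(a, b) for a, b in windows]  # potential firing times
        for i in range(n):
            # earliest arrival of a fork launched from any other origin
            passive = min(t[j] + abs(pos[i] - pos[j]) / v
                          for j in range(n) if j != i)
            if t[i] < passive:
                fired[i] += 1
    return [f / cells for f in fired]

# origin 0 always fires before any fork can reach it; origin 1 is
# occasionally passively replicated by the fork from origin 0
eff = simulate_firing_efficiency([0.0, 10.0], [(0.0, 5.0), (8.0, 12.0)])
```

The population-level "efficiency" emerges even though each individual cell fires at a random time within its window, which is the qualitative behaviour the model captures.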
Hardie, L C; Armentano, L E; Shaver, R D; VandeHaar, M J; Spurlock, D M; Yao, C; Bertics, S J; Contreras-Govea, F E; Weigel, K A
2015-04-01
Prior to genomic selection on a trait, a reference population needs to be established to link marker genotypes with phenotypes. For costly and difficult-to-measure traits, international collaboration and sharing of data between disciplines may be necessary. Our aim was to characterize the combining of data from nutrition studies carried out under similar climate and management conditions to estimate genetic parameters for feed efficiency. Furthermore, we postulated that data from the experimental cohorts within these studies can be used to estimate the net energy of lactation (NE(L)) densities of diets, which can provide estimates of energy intakes for use in calculating the feed efficiency metric residual feed intake (RFI) and potentially reduce the effect of variation in energy density of diets. Individual feed intakes and corresponding production and body measurements were obtained from 13 Midwestern nutrition experiments. Two measures of RFI were considered, RFI(Mcal) and RFI(kg), which involved the regression of NE(L) intake (Mcal/d) or dry matter intake (DMI; kg/d) on 3 expenditures: milk energy, energy gained or lost in body weight change, and energy for maintenance. In total, 677 records from 600 lactating cows between 50 and 275 d in milk were used. Cows were divided into 46 cohorts based on dietary or nondietary treatments as dictated by the nutrition experiments. The realized NE(L) densities of the diets (Mcal/kg of DMI) were estimated for each cohort by totaling the average daily energy used in the 3 expenditures for cohort members and dividing by the cohort's total average daily DMI. The NE(L) intake for each cow was then calculated by multiplying her DMI by her cohort's realized energy density. Mean energy density was 1.58 Mcal/kg. Heritability estimates for RFI(kg) and RFI(Mcal) in a single-trait animal model did not differ, at 0.04 for both measures. Information about realized energy density could be useful in standardizing intake data from
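The two computations described above - the cohort's realized NE(L) density and RFI as a regression residual - can be sketched as follows. The variable names, the plain OLS fit (the paper's model also includes cohort and design effects) and the synthetic values are assumptions for illustration:

```python
import numpy as np

def residual_feed_intake(dmi, milk_e, body_e, maint):
    """RFI(kg): residuals from regressing dry matter intake (kg/d) on the
    three energy expenditures (milk energy, body-energy change, maintenance)."""
    X = np.column_stack([np.ones_like(dmi), milk_e, body_e, maint])
    beta, *_ = np.linalg.lstsq(X, dmi, rcond=None)
    return dmi - X @ beta

def realized_energy_density(cohort):
    """Cohort NE(L) density (Mcal/kg DMI): total daily energy in the three
    expenditures divided by the cohort's total daily DMI."""
    energy = sum(c["milk_e"] + c["body_e"] + c["maint"] for c in cohort)
    return energy / sum(c["dmi"] for c in cohort)

# synthetic cows whose intake is an exact linear function of expenditures,
# so the RFI residuals should vanish
rng = np.random.default_rng(0)
milk = rng.uniform(15.0, 25.0, 50)
body = rng.uniform(-3.0, 3.0, 50)
maint = rng.uniform(9.0, 11.0, 50)
dmi = 2.0 + 0.5 * milk + 0.3 * body + 0.1 * maint
rfi = residual_feed_intake(dmi, milk, body, maint)

cohort = [{"milk_e": 20.0, "body_e": 2.0, "maint": 10.0, "dmi": 20.0},
          {"milk_e": 12.0, "body_e": 0.0, "maint": 10.0, "dmi": 20.0}]
density = realized_energy_density(cohort)
```

A cow's NE(L) intake would then be her DMI multiplied by `density` for her cohort, as the abstract describes.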
Xu, Huihui; Jiang, Mingyan
2015-07-01
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
The set of chemical substances in commerce that may have significant global warming potential (GWP) is not well defined. Although over 200 chemicals with high GWP are currently reported by the Intergovernmental Panel on Climate Change, the World Meteorological Organization, or the Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimating radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on the basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted ways. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running time). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insight into the pick-up demand, which can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
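A minimal sketch of the resampling-point idea: keep PDF values at fixed grid points, update them online with exponential forgetting as samples arrive, and answer queries by linear interpolation. The fixed grid, Gaussian kernel and decay constant are assumptions of this sketch; KDE-Track's adaptive resampling and exact update rule are omitted:

```python
import math, bisect

class OnlineKDE:
    """Grid-based online kernel density estimator (KDE-Track-style sketch)."""

    def __init__(self, grid, bandwidth=0.3, decay=0.999):
        self.grid = list(grid)          # fixed, sorted resampling points
        self.vals = [0.0] * len(grid)   # accumulated kernel mass per point
        self.h = bandwidth
        self.decay = decay              # exponential forgetting factor
        self.norm = 0.0                 # effective (decayed) sample count

    def update(self, x):
        # forget old data slightly, then add the new sample's kernel
        self.norm = self.decay * self.norm + 1.0
        c = self.h * math.sqrt(2.0 * math.pi)
        for i, g in enumerate(self.grid):
            k = math.exp(-0.5 * ((g - x) / self.h) ** 2) / c
            self.vals[i] = self.decay * self.vals[i] + k

    def pdf(self, x):
        # linear interpolation between the two nearest resampling points
        if x <= self.grid[0]:
            return self.vals[0] / self.norm
        if x >= self.grid[-1]:
            return self.vals[-1] / self.norm
        j = bisect.bisect_right(self.grid, x)
        x0, x1 = self.grid[j - 1], self.grid[j]
        w = (x - x0) / (x1 - x0)
        return ((1 - w) * self.vals[j - 1] + w * self.vals[j]) / self.norm

import random
random.seed(1)
est = OnlineKDE(grid=[i / 10 for i in range(-40, 41)], bandwidth=0.3)
for _ in range(2000):
    est.update(random.gauss(0.0, 1.0))
```

After streaming standard-normal samples, `est.pdf(0.0)` should sit near the smoothed peak density while the tails stay near zero, and the model keeps adapting as further samples arrive.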
Thekkoot, D M; Kemp, R A; Rothschild, M F; Plastow, G S; Dekkers, J C M
2016-11-01
Increased milk production due to high litter size, coupled with low feed intake, results in excessive mobilization of sow body reserves during lactation, which can have detrimental effects on future reproductive performance. A possibility to prevent this is to improve sow lactation performance genetically, along with other traits of interest. The aim of this study was to estimate breed-specific genetic parameters (by parity, between parities, and across parities) for traits associated with lactation and reproduction in Yorkshire and Landrace sows. Performance data were available for 2,107 sows with 1 to 3 parities (3,424 farrowings in total). Sow back fat, loin depth and BW at farrowing, sow feed intake (SFI), and body weight loss (BWL) during lactation showed moderate heritabilities (0.21 to 0.37) in both breeds, whereas back fat loss (BFL), loin depth loss (LDL), and litter weight gain (LWG) showed low heritabilities (0.12 to 0.18). Among the efficiency traits, sow lactation efficiency showed extremely low heritability (near zero) in Yorkshire sows but a slightly higher (0.05) estimate in Landrace sows, whereas sow residual feed intake (SRFI) and energy balance traits showed moderate heritabilities in both breeds. Genetic correlations indicated that SFI during lactation had strong negative genetic correlations with body resource mobilization traits (BWL, BFL, and LDL; -0.35 to -0.70), and tissue mobilization traits in turn had strong positive genetic correlations with LWG (+0.24 to +0.54; P < 0.05). However, SFI did not have a significant genetic correlation with LWG. These genetic correlations suggest that SFI during lactation is predominantly used for reducing sow body tissue losses, rather than for milk production. Estimates of genetic correlations for the same trait measured in parities 1 and 2 ranged from 0.64 to 0.98, which suggests that first and later parities should be treated as genetically different for some traits. Genetic correlations estimated between
Buttazzoni, L; Mao, I L
1989-03-01
Net efficiencies of converting intake energy into energy for maintenance, milk production, and body weight change in a lactation were estimated for each of 79 Holstein cows by a two-stage multiple regression model. Cows were from 16 paternal half-sib families, each of which had members in at least two of the six herds. Each cow was recorded for milk yield, net energy intake, and three efficiency traits. These were analyzed in a multitrait model containing the same 14 fixed subclasses of herd by season by parity and a random factor of sires for each of the five traits. Restricted maximum likelihood estimates of sire and residual (co)variance components were obtained by an expectation-maximization algorithm with canonical transformations. Between milk yield and net energy intake, net energy efficiencies for milk yield, maintenance, and body weight change, the estimated phenotypic correlations were .36, -.02, .08, and -.06, while the genetic correlations were .92, .56, .02, and -.32, respectively. Both genetic and phenotypic correlations were zero between net energy efficiency of maintenance and that of milk yield, and .17 between net energy efficiency of body weight change and that of milk yield. The estimated genetic correlation between net efficiency for lactation and milk yield is approximately 60% of that between gross efficiency and milk yield. With a heritability of .32 equivalent .49, net energy efficiency for milk yield may be worth consideration for genetic selection in certain dairy cattle populations.
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyse and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
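The moth-flame update can be sketched in a few lines: moths spiral logarithmically around "flames" (the best solutions found so far), and the flame count shrinks over the iterations. This is a simplified, hedged rendering of the generic MFO on a sphere test function (spiral constant b = 1, elitist flame pool), not the transit O-D aggregation model or parameters used in the study:

```python
import math, random

def mfo(obj, dim, lb, ub, n=30, iters=200, seed=3):
    """Minimize obj over [lb, ub]^dim with a simplified moth-flame optimizer."""
    random.seed(seed)
    moths = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    flames = sorted([m[:] for m in moths], key=obj)   # elitist best-so-far pool
    best = flames[0][:]
    for it in range(iters):
        n_flames = round(n - it * (n - 1) / iters)    # flame count shrinks to 1
        a = -1.0 - it / iters                         # convergence constant -1 -> -2
        for i, m in enumerate(moths):
            f = flames[min(i, n_flames - 1)]          # flame assigned to this moth
            for d in range(dim):
                dist = abs(f[d] - m[d])
                t = (a - 1.0) * random.random() + 1.0  # t drawn from [a, 1]
                # logarithmic spiral flight around the flame
                m[d] = dist * math.exp(t) * math.cos(2.0 * math.pi * t) + f[d]
                m[d] = min(max(m[d], lb), ub)          # clip to the bounds
        flames = sorted(flames + [m[:] for m in moths], key=obj)[:n]
        if obj(flames[0]) < obj(best):
            best = flames[0][:]
    return best, obj(best)

best, val = mfo(lambda x: sum(v * v for v in x), dim=2, lb=-5.0, ub=5.0)
```

In the paper's setting `obj` would instead score a candidate aggregation of smart-card stops (e.g. by the sum of intra-cluster distances), with the same spiral search driving the optimization.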
Improved barometric and loading efficiency estimates using packers in monitoring wells
Cook, Scott B.; Timms, Wendy A.; Kelly, Bryce F. J.; Barbour, S. Lee
2017-08-01
Measurement of barometric efficiency (BE) from open monitoring wells or loading efficiency (LE) from formation pore pressures provides valuable information about the hydraulic properties and confinement of a formation. Drained compressibility (α) can be calculated from LE (or BE) in confined and semi-confined formations and used to calculate specific storage (Ss). Ss and α are important for predicting the effects of groundwater extraction and therefore for sustainable extraction management. However, in low hydraulic conductivity (K) formations or large-diameter monitoring wells, time lags caused by well storage may be so long that BE cannot be properly assessed in open monitoring wells in confined or unconfined settings. This study demonstrates the use of packers to reduce monitoring-well time lags and enable reliable assessments of LE. In one example from a confined, high-K formation, estimates of BE in the open monitoring well were in good agreement with shut-in LE estimates. In a second example, from a low-K confining clay layer, BE could not be adequately assessed in the open monitoring well due to time lag. Sealing the monitoring well with a packer reduced the time lag sufficiently that a reliable assessment of LE could be made from a 24-day monitoring period. The shut-in response confirmed confined conditions at the well screen and provided confidence in the assessment of hydraulic parameters. A short (time-lag-dependent) period of high-frequency shut-in monitoring can therefore enhance understanding of hydrogeological systems and potentially provide hydraulic parameters to improve conceptual/numerical groundwater models.
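As a minimal illustration of the slope-based assessment, BE can be estimated by regressing water-level changes on barometric-head changes. The numbers and the simple first-differencing scheme below are illustrative assumptions, not the paper's method; real assessments must also handle Earth tides, recharge trends and well-storage time lag:

```python
import math

def barometric_efficiency(water_level, baro_head):
    """Slope estimate of BE from paired water-level and barometric-head series
    (both as heads in the same units, e.g. metres of water).

    First differences are used so slow trends largely cancel. Sign convention:
    the water level in an open well falls as barometric pressure rises, so the
    fitted slope is negated to report BE as a positive fraction.
    """
    dw = [b - a for a, b in zip(water_level, water_level[1:])]
    db = [b - a for a, b in zip(baro_head, baro_head[1:])]
    num = sum(x * y for x, y in zip(db, dw))
    den = sum(x * x for x in db)
    return -num / den

# synthetic record: a barometric oscillation to which the water level
# responds with BE = 0.4 (values illustrative, not from the paper)
baro = [9.8 + 0.05 * math.sin(i / 3.0) for i in range(50)]
level = [20.0 - 0.4 * (baro[i] - 9.8) for i in range(50)]
be = barometric_efficiency(level, baro)
```

For a confined formation monitored through an open well, LE is then commonly taken as 1 - BE, which is the quantity the shut-in (packered) monitoring measures directly.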
Hagen, David R; Tidor, Bruce
2015-02-01
A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology - the set of interactions - of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.
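The linearized-evidence idea can be illustrated on a deliberately simple linear-Gaussian case, where the Laplace approximation that stands in for the paper's FIM-based linearization is exact. The candidate "topologies" below are toy polynomial feature sets, not the kinase/phosphatase models of the paper, and all parameter values are assumptions:

```python
import numpy as np

def log_evidence(X, y, sigma2=0.05, alpha=1.0):
    """Log marginal likelihood of a linear 'topology' y = X w + noise,
    noise ~ N(0, sigma2), prior w ~ N(0, alpha^-1 I)."""
    n, d = X.shape
    A = X.T @ X / sigma2 + alpha * np.eye(d)       # posterior precision (data FIM + prior)
    w = np.linalg.solve(A, X.T @ y / sigma2)       # MAP parameters
    ll = (-0.5 * np.sum((y - X @ w) ** 2) / sigma2
          - 0.5 * n * np.log(2.0 * np.pi * sigma2))
    lp = -0.5 * alpha * w @ w + 0.5 * d * np.log(alpha / (2.0 * np.pi))
    occam = -0.5 * np.linalg.slogdet(A / (2.0 * np.pi))[1]  # Occam volume factor
    return ll + lp + occam

def topology_posterior(designs, y):
    """Normalized posterior over candidate topologies (equal topology priors)."""
    le = np.array([log_evidence(X, y) for X in designs])
    p = np.exp(le - le.max())
    return p / p.sum()

t = np.linspace(0.05, 1.0, 30)
y = 2.0 * t + 3.0 * t ** 2                 # noiseless data generated by topology B
XA = t[:, None]                            # topology A: y = w1*t
XB = np.column_stack([t, t ** 2])          # topology B: y = w1*t + w2*t^2
p = topology_posterior([XA, XB], y)
```

Because the evidence integrates over parameters, the comparison rewards fit while automatically penalizing extra parameters, avoiding the pure-likelihood bias toward larger topologies noted in the abstract.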
MADRE, JL; YAMAMOTO, T; NAKAGAWA, N; KITAMURA, R
2004-01-01
Hazard-based duration models have been applied in the transportation research field to represent choices or events along the time dimension. A simulation analysis is carried out in this study to examine the efficiency of non-parametric estimation of the baseline hazard function in comparison with parametric estimation when the distribution is correctly assumed.
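The non-parametric baseline in such comparisons is typically a step-function estimate of the cumulative hazard. A sketch of the classical Nelson-Aalen estimator (durations and censoring flags below are made up; ties are handled naively, one event at a time):

```python
import numpy as np

def nelson_aalen(times, events):
    """Non-parametric Nelson-Aalen estimate of the cumulative hazard H(t).
    times: observed durations; events: 1 = event observed, 0 = censored."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk = len(t)
    H, steps = 0.0, []
    for ti, ei in zip(t, e):
        if ei:
            H += 1.0 / n_at_risk   # increment by d_i / n_i at each event
            steps.append((ti, H))
        n_at_risk -= 1
    return steps

# Four durations, one censored at t = 3
print(nelson_aalen([2, 3, 3, 5], [1, 1, 0, 1]))
```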
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
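The smoothing-and-thresholding update can be illustrated on a toy piecewise-constant image. The sketch below is my own simplification, not the paper's Matlab code: it uses a crude box filter in place of Gaussian smoothing and alternates a smoothed data-term assignment with a region-mean update, i.e. a bare-bones multi-label Chan-Vese step without a length penalty:

```python
import numpy as np

def box_smooth(a, r=1):
    """Box-filter smoothing with edge padding (stand-in for Gaussian smoothing)."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + a.shape[0], r + dx : r + dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def multilabel_segment(image, means, n_iter=5):
    """Toy multi-label step: smooth each label's data term (I - c_l)^2,
    assign every pixel to the argmin label, then update the region means."""
    labels = np.zeros(image.shape, dtype=int)
    for _ in range(n_iter):
        cost = np.stack([box_smooth((image - c) ** 2) for c in means])
        labels = cost.argmin(axis=0)
        means = [image[labels == l].mean() if (labels == l).any() else c
                 for l, c in enumerate(means)]
    return labels, means

# Two-region synthetic image with noise
rng = np.random.default_rng(0)
img = np.zeros((20, 20))
img[:, 10:] = 1.0
img += rng.normal(0.0, 0.1, (20, 20))
labels, means = multilabel_segment(img, [0.2, 0.8])
```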
Dmitruk, I.; Shynkarenko, Ye; Dmytruk, A.; Aleksiuk, D.; Kadan, V.; Korenyuk, P.; Zubrilin, N.; Blonskiy, I.
2016-12-01
We report experience of assembling an optical Kerr gate setup at the Femtosecond Laser Center for collective use at the Institute of Physics of the National Academy of Sciences of Ukraine. This offers an inexpensive solution to the problem of time-resolved luminescence spectroscopy. Practical aspects of its design and alignment are discussed and its main characteristics are evaluated. Theoretical analysis and numerical estimates are performed to evaluate the efficiency and the response time of an optical Kerr gate setup for fluorescence spectroscopy with subpicosecond time resolution. The theoretically calculated efficiency is compared with the experimentally measured one of ~12% for Crown 5 glass and ~2% for fused silica. Other characteristics of the Kerr gate are analyzed and ways to improve them are discussed. A method of compensation for the refractive index dispersion in a Kerr gate medium is suggested. Examples of the application of the optical Kerr gate setup for measurements of the time-resolved luminescence of Astra Phloxine and Coumarin 30 dyes and both linear and nonlinear chirp parameters of a supercontinuum are presented.
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry is useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
Directory of Open Access Journals (Sweden)
David Simoncini
Full Text Available Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach which shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low energy all-atom models. This iteratively-refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enabled EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. [Formula: see text] uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the [Formula: see text] AbInitioRelax protocol.
Towards the Estimation of an Efficient Benchmark Portfolio: The Case of Croatian Emerging Market
Directory of Open Access Journals (Sweden)
Dolinar Denis
2017-04-01
Full Text Available The fact that cap-weighted indices provide an inefficient risk-return trade-off is well known today. Various research approaches have evolved suggesting alternatives to cap-weighting in an effort to come up with a more efficient market index benchmark. In this paper we use such an approach and focus on the Croatian capital market. We apply the statistical shrinkage method suggested by Ledoit and Wolf (2004) to estimate the covariance matrix and follow the work of Amenc et al. (2011) to obtain estimates of expected returns that rely on a risk-return trade-off. Empirical findings for the proposed portfolio optimization include out-of-sample and robustness testing. This way we compare the performance of the capital-weighted benchmark to the alternative and ensure that consistency is achieved in different volatility environments. Research findings do not seem to support relevant research results for the developed markets but rather complement earlier research (Zoričić et al., 2014).
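The shrinkage estimator at the heart of the Ledoit-Wolf approach blends the noisy sample covariance with a structured target. A simplified sketch with a fixed shrinkage intensity (Ledoit and Wolf derive an optimal, data-driven intensity; the scaled-identity target used here is one of several possible choices):

```python
import numpy as np

def shrink_covariance(X, delta):
    """Shrink the sample covariance toward a scaled identity target:
    S* = (1 - delta) * S + delta * mu * I, with mu = trace(S) / p.
    delta is fixed here; Ledoit-Wolf estimate it from the data."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return (1 - delta) * S + delta * mu * np.eye(S.shape[0])

# 10 return observations on 20 assets: the sample covariance is singular,
# the shrunk estimate is positive definite and hence invertible
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 20))
S_star = shrink_covariance(X, delta=0.2)
```

The point for portfolio construction is that S_star can be inverted in a minimum-variance optimization even when observations are scarcer than assets.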
Manipulating decay time for efficient large-mammal density estimation: gorillas and dung height.
Kuehl, Hjalmar S; Todd, Angelique; Boesch, Christophe; Walsh, Peter D
2007-12-01
Large-mammal surveys often rely on indirect signs such as dung or nests. Sign density is usually translated into animal density using sign production and decay rates. In principle, such auxiliary variable estimates should be made in a spatially unbiased manner. However, traditional decay rate estimation methods entail following many signs from production to disappearance, which, in large study areas, requires extensive travel effort. Consequently, decay rate estimates have tended to be made instead at some convenient but unrepresentative location. In this study we evaluated how much bias might be induced by extrapolating decay rates from unrepresentative locations, how much effort would be required to implement current methods in a spatially unbiased manner, and what alternate approaches might be used to improve precision. To evaluate the extent of bias induced by unrepresentative sampling, we collected data on gorilla dung at several central African sites. Variation in gorilla dung decay rate was enormous, varying by up to an order of magnitude within and between survey zones. We then estimated what the effort-precision relationship would be for a previously suggested "retrospective" decay rate (RDR) method, if it were implemented in a spatially unbiased manner. We also evaluated precision for a marked sign count (MSC) approach that does not use a decay rate. Because they require repeat visits to remote locations, both RDR and MSC require enormous effort levels in order to gain precise density estimates. Finally, we examined an objective criterion for decay (i.e., dung height). This showed great potential for improving RDR efficiency because choosing a high threshold height for decay reduces decay time and, consequently, the number of visits that need to be made to remote areas. The ability to adjust decay time using an objective decay criterion also opens up the potential for a "prospective" decay rate (PDR) approach. Further research is necessary to evaluate
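The density conversion that such surveys rely on is simple arithmetic, which is why errors in decay time propagate one-for-one into animal density. A worked example with hypothetical numbers (not taken from the study):

```python
def animal_density(sign_density, production_rate, mean_decay_days):
    """animals/km^2 = (signs/km^2) / (signs per animal per day * days a sign persists)."""
    return sign_density / (production_rate * mean_decay_days)

# 900 dung piles/km^2, 5 piles per gorilla per day, 60-day mean decay time
print(animal_density(900, 5, 60))   # 3.0 gorillas/km^2
# A 20% overestimate of decay time biases the density estimate down accordingly
print(animal_density(900, 5, 72))   # 2.5 gorillas/km^2
```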
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis.
Markiewicz, P J; Thielemans, K; Schott, J M; Atkinson, D; Arridge, S R; Hutton, B F; Ourselin, S
2016-07-07
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of (18)F-florbetapir using the Siemens Biograph mMR scanner.
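The bootstrap realisations described above amount to resampling the recorded events with replacement and recomputing the statistic of interest. A much-simplified CPU sketch (the region-tagged event list and the count-ratio statistic are stand-ins of my own; the software does this on the GPU for full list-mode sinograms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for list-mode data: each event carries a region tag
# (0 = reference region, 1 = target region)
events = rng.choice([0, 1], size=20000, p=[0.4, 0.6])

def ratio_statistic(ev):
    """Crude SUVR-like statistic: target counts over reference counts."""
    return np.count_nonzero(ev == 1) / np.count_nonzero(ev == 0)

# Bootstrap: resample events with replacement, recompute the statistic
boot = np.array([
    ratio_statistic(rng.choice(events, size=events.size, replace=True))
    for _ in range(200)
])
print(ratio_statistic(events), boot.std())  # point estimate and its uncertainty
```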
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...... and by reduced outputs, we estimate hyperbolic distance functions that account for reduced technical efficiency both in terms of increased inputs and reduced outputs. We estimate these hyperbolic distance functions as “efficiency effect frontiers” with the Translog functional form and a dynamic specification...... of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment...
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better at predicting the spatial distribution of an invasive species. For the latter, our approach performs as well as the classical models at predicting the severity of a root disease, as the probit link is shown to be appropriate. Although, for brevity, this article is written for binomial SGLMMs, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs such as Poisson and Gamma SGLMMs are provided. © 2015, The International Biometric Society.
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
Directory of Open Access Journals (Sweden)
Kazuki Maruta
2016-07-01
Full Text Available Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of rigorous first-eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door to practical millimeter wave communication with enhanced system capacity.
An Efficient Method for Estimating the Hydrodynamic Radius of Disordered Protein Conformations.
Nygaard, Mads; Kragelund, Birthe B; Papaleo, Elena; Lindorff-Larsen, Kresten
2017-08-08
Intrinsically disordered proteins play important roles throughout biology, yet our understanding of the relationship between their sequences, structural properties, and functions remains incomplete. The dynamic nature of these proteins, however, makes them difficult to characterize structurally. Many disordered proteins can attain both compact and expanded conformations, and the level of expansion may be regulated and important for function. Experimentally, the level of compaction and shape is often determined either by small-angle x-ray scattering experiments or pulsed-field-gradient NMR diffusion measurements, which provide ensemble-averaged estimates of the radius of gyration and hydrodynamic radius, respectively. Often, these experiments are interpreted using molecular simulations or are used to validate them. We here provide, to our knowledge, a new and efficient method to calculate the hydrodynamic radius of a disordered protein chain from a model of its structural ensemble. In particular, starting from basic concepts in polymer physics, we derive a relationship between the radius of gyration of a structure and its hydrodynamic ratio, which in turn can be used, for example, to compare a simulated ensemble of conformations to NMR diffusion measurements. The relationship may also be valuable when using NMR diffusion measurements to restrain molecular simulations. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
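For a single conformation, the classical Kirkwood approximation gives the hydrodynamic radius directly from inter-bead distances, and the radius of gyration follows from its definition. The sketch below shows these two standard quantities only; the paper's contribution, an Rg-to-Rh relationship derived from polymer physics, is not reproduced here:

```python
import numpy as np

def kirkwood_rh(coords):
    """Kirkwood approximation: 1/Rh is the average of the inverse
    distances over all distinct bead pairs."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    inv = 1.0 / d[np.triu_indices(len(coords), k=1)]
    return 1.0 / inv.mean()

def radius_of_gyration(coords):
    """Root-mean-square distance of beads from their centroid."""
    c = coords - coords.mean(axis=0)
    return np.sqrt((c ** 2).sum(axis=1).mean())

# Sanity check on two beads 2 units apart
coords = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(kirkwood_rh(coords), radius_of_gyration(coords))  # 2.0 1.0
```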
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Directory of Open Access Journals (Sweden)
Marcelo E. Cassiolato
2002-06-01
Full Text Available Water-soluble plant organic compounds have been proposed to be efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts to neutralize soil acidity. Plant samples were dried at 65°C for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionized water, shake for 8 h at 175 rpm and filter. Three laboratory methods were evaluated: sigma (Ca+Mg+K) of the plant extracts; electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related with soil reaction. Increasing sigma (Ca+Mg+K), electrical conductivity, and the volume of NaOH solution spent to neutralize the H+ ions of the plant extracts were correlated with the effect of the plant extracts on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts to neutralize soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.
Rosati, A; Dejong, T M
2003-06-01
It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, 'daily' photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthesis was estimated from the photosynthetic responses to photosynthetically active radiation (PAR) and from the incident PAR measured on individual leaves during clear and overcast days. Plants were grown with either abundant or scarce N fertilization. Both net and gross daily photosynthesis of leaves were linearly related to daily incident PAR exposure of individual leaves, which implies constant PhRUE over a day throughout the canopy. The slope of these relationships (i.e. PhRUE) increased with N fertilization. When the relationship was calculated for hourly instead of daily periods, the regressions were curvilinear, implying that PhRUE changed with time of the day and incident radiation. Thus, linearity (i.e. constant PhRUE) was achieved only when data were integrated over the entire day. Using average PAR in place of instantaneous incident PAR increased the slope of the relationship between daily photosynthesis and incident PAR of individual leaves, and the regression became curvilinear. The slope of the relationship between daily gross photosynthesis and incident PAR of individual leaves increased for an overcast compared with a clear day, but the slope remained constant for net photosynthesis. This suggests that net PhRUE of all leaves (and thus of the whole canopy) may be constant when integrated over a day, not only when the incident PAR changes with depth in the canopy, but also when it varies on the same leaf owing to changes in daily incident PAR above the canopy. The
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
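The objective being minimized is the Poisson negative log-likelihood rather than a sum of squared residuals. The sketch below uses a synthetic lifetime-style histogram and, in place of the paper's Levenberg-Marquardt extension, profiles out the amplitude analytically and scans the decay time on a grid; only the objective function matches the method described:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.25, 10.0, 0.5)     # bin centres of a decay histogram
true_A, true_tau = 50.0, 2.0
counts = rng.poisson(true_A * np.exp(-t / true_tau))

def poisson_nll(A, tau):
    """Poisson negative log-likelihood (constant terms dropped) -
    the MLE objective, in place of the least-squares measure."""
    mu = A * np.exp(-t / tau)
    return np.sum(mu - counts * np.log(mu))

# For a fixed tau, the MLE amplitude is total counts / total model shape;
# scan tau on a grid instead of iterating L-M steps
taus = np.linspace(0.5, 5.0, 1000)
nll = [poisson_nll(counts.sum() / np.exp(-t / tau).sum(), tau) for tau in taus]
tau_hat = taus[int(np.argmin(nll))]
A_hat = counts.sum() / np.exp(-t / tau_hat).sum()
print(A_hat, tau_hat)
```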
Estimating Forward Pricing Function: How Efficient is Indian Stock Index Futures Market?
Prasad Bhattacharaya; Harminder Singh
2006-01-01
This paper uses Indian stock futures data to explore unbiased expectations and efficient market hypothesis. Having experienced voluminous transactions within a short time span after its establishment, the Indian stock futures market provides an unparalleled case for exploring these issues involving expectation and efficiency. Besides analyzing market efficiency between cash and futures prices using cointegration and error correction frameworks, the efficiency hypothesis is also investigated a...
Context-Aware Hierarchy k-Depth Estimation and Energy-Efficient Clustering in Ad-hoc Network
Mun, Chang-Min; Kim, Young-Hwan; Lee, Kang-Whan
Ad-hoc networks need efficient node management because the wireless network has energy constraints. Previously proposed hierarchical routing protocols reduce energy consumption and prolong the network lifetime. However, conventional works are deficient regarding the energy-efficient depth of clusters in relation to the overhead. In this paper, we propose a novel top-down clustered-hierarchy method, CACHE (Context-Aware Clustering Hierarchy and Energy-Efficient). The proposed analysis can estimate the optimum k-depth of the hierarchy architecture in clustering protocols.
The efficiency of modified jackknife and ridge type regression estimators: a comparison
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2008-09-01
Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
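All of the estimators above modify the ordinary ridge solution, which stabilizes least squares under multicollinearity. A sketch of that building block on a deliberately collinear design (the GRR, JRR, and MJR corrections themselves are not reproduced here):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Nearly collinear design: x2 is x1 plus a tiny perturbation
rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 1e-4 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=100)

b_ols = ridge(X, y, 0.0)    # wild, high-variance coefficients
b_ridge = ridge(X, y, 1.0)  # shrunken, stable coefficients
print(np.linalg.norm(b_ols), np.linalg.norm(b_ridge))
```

Note how the identifiable quantity (the coefficient sum) is preserved while the individual, ill-determined coefficients are tamed.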
Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency.
Ragheb, Hossein; Thacker, Neil A; Bromiley, Paul A; Tautz, Diethard; Schunke, Anja C
2013-04-02
The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by 'semi-landmarks' alongside well-defined landmarks into the analyses is still missing. Also, there has not been an integration of a statistical treatment of measurement error in the current approaches. We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias, induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte-Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure ('ghost points') can then be used in any further downstream statistical analysis. Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points.
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
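The adaptive Lasso referenced above re-weights the L1 penalty with a pilot estimate, which is what yields the oracle property. A generic cross-sectional sketch via coordinate descent (the paper applies this equation by equation in a VAR; the data, tuning parameter, and helper names below are illustrative):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * max(abs(x) - t, 0.0)

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=200):
    """Two-step adaptive lasso: an OLS pilot fit gives penalty weights
    w_j = 1 / |b_j|^gamma, then weighted-L1 coordinate descent."""
    n, p = X.shape
    b_pilot = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(b_pilot) ** gamma + 1e-8)
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            b[j] = soft(X[:, j] @ r, lam * w[j]) / col_ss[j]
    return b

# Sparse truth: only two active coefficients out of five
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
beta = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta + 0.3 * rng.normal(size=200)
b = adaptive_lasso(X, y, lam=2.0)
print(b)
```

The large penalty weights on the pilot-estimated near-zero coefficients are what drive them to exactly zero while leaving the active coefficients nearly unbiased.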
Chiu, Jill M Y; Degger, Natalie; Leung, Jonathan Y S; Po, Beverly H K; Zheng, Gene J; Richardson, Bruce J; Lau, T C; Wu, Rudolf S S
2016-11-15
The wide occurrence of endocrine disrupting chemicals (EDCs) and heavy metals in coastal waters has drawn global concern, and thus their removal efficiencies in sewage treatment processes should be estimated. However, low concentrations coupled with high temporal fluctuations of these pollutants present a monitoring challenge. Using semi-permeable membrane devices (SPMDs) and Artificial Mussels (AMs), this study investigates a novel approach to evaluating the removal efficiency of five EDCs and six heavy metals in primary treatment, secondary treatment and chemically enhanced primary treatment (CEPT) processes. In general, the small difference between maximum and minimum values of individual EDCs and heavy metals measured from influents/effluents of the same sewage treatment plant suggests that passive sampling devices can smooth and integrate temporal fluctuations, and therefore have the potential to serve as cost-effective monitoring devices for the estimation of the removal efficiencies of EDCs and heavy metals in sewage treatment works. Copyright © 2016 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Cizelj, L.
1994-10-01
In this report, an original probabilistic model aimed to assess the efficiency of particular maintenance strategy in terms of tube failure probability is proposed. The model concentrates on axial through wall cracks in the residual stress dominated tube expansion transition zone. It is based on the recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry and crack propagation data. Special attention has been paid to model the uncertainties connected to non-destructive examination technique (e.g., measurement errors, non-detection probability). First and second order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that those methods are applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original linear elastic fracture mechanics based crack propagation model has been developed. It accounts for the residual and operating stresses together. Also, the model accounts for scatter in residual and operational stresses due to the random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs load data, the non-destructive examination records of the crack propagation have been employed to estimate the velocities at the crack tips. (orig./GL)
Cost Efficiency Estimates for a Sample of Crop and Beef Farms
Langemeier, Michael R.; Jones, Rodney D.
2005-01-01
This paper examines the impact of specialization on the cost efficiency of a sample of crop and beef farms in Kansas. The economic total expense ratio was used to measure cost efficiency. The relationship between the economic total expense ratio and specialization was not significant.
To Estimation of Efficient Usage of Organic Fuel in the Cycle of Steam Power Installations
Directory of Open Access Journals (Sweden)
A. Nesenchuk
2013-01-01
Full Text Available This article reviews trends in the development of power engineering worldwide and carries out a thermodynamic analysis of the efficient use of different types of fuel. The results show that low-calorie fuel is, from the thermodynamic point of view, more efficient to use at steam power stations than high-energy fuel.
An Integrated Approach for Estimating the Energy Efficiency of Seventeen Countries
Directory of Open Access Journals (Sweden)
Chia-Nan Wang
2017-10-01
Full Text Available Increased energy efficiency is one of the most effective ways to achieve climate change mitigation. This study aims to evaluate the energy efficiency of seventeen countries. The evaluation is based on an integrated method that combines the super slack-based measure (super SBM) model and the Malmquist productivity index (MPI) to investigate the energy efficiency of seventeen countries during the period of 2010-2015. The results show that the United States, Colombia, Japan, China, and Saudi Arabia perform the best in energy efficiency, whereas Brazil, Russia, Indonesia, and India perform the worst during the entire sample period. The energy efficiency of these countries derived mainly from technological improvement. The study provides suggestions for the governments of the seventeen countries to control energy consumption and contribute to environmental protection.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
and feature detection is clearly biased, the estimator is strictly unbiased. The proportionator is compared to the commonly applied sampling technique (systematic uniform random sampling in 2D space or so-called meander sampling) using three biological examples: estimating total number of granule cells in rat...
Robust and Efficient Adaptive Estimation of Binary-Choice Regression Models
Cizek, P.
2007-01-01
The binary-choice regression models such as probit and logit are used to describe the effect of explanatory variables on a binary response variable. Typically estimated by the maximum likelihood method, estimates are very sensitive to deviations from a model, such as heteroscedasticity and data
The efficient and unbiased estimation of nuclear size variability using the 'selector'
DEFF Research Database (Denmark)
McMillan, A M; Sørensen, Flemming Brandt
1992-01-01
The selector was used to make an unbiased estimation of nuclear size variability in one benign naevocellular skin tumour and one cutaneous malignant melanoma. The results showed that the estimates obtained using the selector were comparable to those obtained using the more time consuming Cavalieri...
Westine, Carl D.
2016-01-01
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…
Validation of an efficient visual method for estimating leaf area index ...
African Journals Online (AJOL)
This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper speech-music separation using Blind Source Separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (a mixture of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, indicate better performance and less processing time.
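The score function that drives the natural-gradient update is the negative log-density derivative, psi(x) = -f'(x)/f(x); with a Gaussian-kernel density estimate both f and f' have closed forms. A minimal sketch of this idea (illustrative only, with an assumed bandwidth; not the authors' Gaussian-mixture implementation):

```python
import numpy as np

def kde_score(samples, x, h=0.2):
    """Score function psi(x) = -f'(x)/f(x) from a Gaussian-kernel
    density estimate built on the observed samples."""
    d = x[:, None] - samples[None, :]          # pairwise differences
    k = np.exp(-0.5 * (d / h) ** 2)            # Gaussian kernels (unnormalized)
    f = k.sum(axis=1)                          # density, up to a constant
    fprime = (-d / h ** 2 * k).sum(axis=1)     # derivative of the density
    return -fprime / f                         # the constant cancels in the ratio

# sanity check: for a standard normal source the true score is psi(x) = x
rng = np.random.default_rng(4)
samples = rng.standard_normal(20000)
x = np.linspace(-1.5, 1.5, 7)
print(kde_score(samples, x))  # approximately equal to x (small smoothing bias)
```

The kernel bandwidth h trades bias against variance; the smoothing slightly inflates the apparent source variance, which is why the estimate is a touch shallower than the true score.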
An Efficient Operator for the Change Point Estimation in Partial Spline Model.
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-05-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, a minimum operator over the smoothing parameter has been widely used, but we show that the minimum operator causes a large mean squared error (MSE) in the change point estimates. In this paper, we propose a summation operator over the smoothing parameter, and our simulation study shows that the summation operator gives a smaller MSE for estimated change points than the minimum one. We also apply the proposed approach to experimental data on blood flow during photodynamic cancer therapy.
Updated estimation of energy efficiencies of U.S. petroleum refineries.
Energy Technology Data Exchange (ETDEWEB)
Palou-Rivera, I.; Wang, M. Q. (Energy Systems)
2010-12-08
Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels including gasoline and diesel. Energy use in petroleum refineries is a key component of well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources
Directory of Open Access Journals (Sweden)
Dayan A. Guimarães
2014-10-01
Full Text Available We propose a simple algorithm for improving the MDL (minimum description length estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results unveil that the so-called norm-based improved MDL (iMDL algorithm can achieve performances that are better than those achieved by the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion estimator and with a recently-proposed estimator based on the random matrix theory (RMT. It is shown that our algorithm can also outperform the AIC and the RMT-based estimator in some situations.
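The MDL estimate that serves as the input to the proposed algorithm is the classic Wax-Kailath criterion: for each candidate k it compares the geometric and arithmetic means of the smallest eigenvalues of the sample covariance matrix, plus a complexity penalty. A minimal sketch of that baseline (a textbook illustration with an assumed half-wavelength ULA, not the authors' iMDL algorithm):

```python
import numpy as np

def mdl_num_sources(X):
    """Wax-Kailath MDL estimate of the number of sources.
    X: p x N array of sensor snapshots (p sensors, N samples)."""
    p, N = X.shape
    R = X @ X.conj().T / N                      # sample covariance matrix
    ev = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    mdl = np.empty(p)
    for k in range(p):
        tail = ev[k:]                           # the p - k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))     # geometric mean
        ari = np.mean(tail)                     # arithmetic mean
        mdl[k] = -N * (p - k) * np.log(geo / ari) \
                 + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))

# two equal-power sources impinging on a 6-sensor half-wavelength ULA
rng = np.random.default_rng(0)
p, N = 6, 2000
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(angles)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
X = A @ S + noise
print(mdl_num_sources(X))  # -> 2
```

At low SNR or with few snapshots the smallest "signal" eigenvalue sinks into the noise cluster and plain MDL underestimates k, which is the failure mode the norm-based post-processing in the paper targets.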
Directory of Open Access Journals (Sweden)
M. Sakthivel
2017-12-01
Full Text Available The genetic parameters of growth traits in the New Zealand White rabbits kept at Sheep Breeding and Research Station, Sandynallah, The Nilgiris, India were estimated by partitioning the variance and covariance components. The (co)variance components of body weights at weaning (W42), post-weaning (W70) and marketing (W135) age and growth efficiency traits viz., average daily gain (ADG), relative growth rate (RGR) and Kleiber ratio (KR) estimated on a daily basis at different age intervals (42 to 70 d; 70 to 135 d; and 42 to 135 d) from weaning to marketing were estimated by restricted maximum likelihood, fitting 6 animal models with various combinations of direct and maternal effects. Data were collected over a period of 15 yr (1998 to 2012). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for W42, W70 and W135 were 0.42±0.07, 0.40±0.08 and 0.27±0.07, respectively. Heritability estimates of growth efficiency traits were moderate to high (0.18 to 0.42). Of the total phenotypic variation, maternal genetic effect contributed 14 to 32% for early body weight traits (W42 and W70) and ADG1. The contribution of maternal permanent environmental effect varied from 6 to 18% for W42 and for all the growth efficiency traits except for KR2. Maternal permanent environmental effect on most of the growth efficiency traits was a carryover effect of maternal care during weaning. Direct-maternal genetic correlations, for the traits in which maternal genetic effect was significant, were moderate to high in magnitude and negative in direction. Maternal effect declined as the age of the animal increased. The estimates of total heritability and maternal across-year repeatability for growth traits were moderate and an optimum rate of genetic progress seems possible in the herd by mass selection. The genetic and phenotypic correlations among body weights
National Research Council Canada - National Science Library
И. М. Шаповалова
2014-01-01
... the financial mechanism of state regulation of socio-economic development is very important, as the efficiency of functioning of the system is a base for acceptance administrative decisions, directed...
Estimation of economic efficiency from restrictions elimination of speed movement of trains
Directory of Open Access Journals (Sweden)
S.Y. Baydak
2012-08-01
Full Text Available A technique is presented that allows one to obtain, at the level of engineering calculations, preliminary estimates of the economic efficiency gained by eliminating restrictions on train speed.
On the Estimation Stability of Efficiency and Economies of Scale in Microfinance Institutions
Bolli Thomas; Vo Thi Anh
2012-01-01
This paper uses a panel data set of microfinance institutions (MFI) across the world to compare parametric and non-parametric identification strategies of cost efficiency and economies of scale. The results suggest that efficiency rankings of MFIs are robust across methodologies but reveal substantial unobserved heterogeneity across countries. We further find substantial economies of scale for a pure financial production process. However, accounting for the multi-dimensional production process...
Li, Yunji; Li, Peng; Chen, Wen
2017-09-01
An energy-efficient data transmission scheme for remote state estimation is proposed and experimentally evaluated in this paper. The transmission strategy is derived by proving an upper bound on the system performance. Stability of the remote estimator is proved under the condition that some of the observation measurements are lost with a random probability. An experimental platform of two coupled water tanks with a wireless sensor node is established to evaluate and verify the proposed transmission scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Bias and Efficiency Tradeoffs in the Selection of Storm Suites Used to Estimate Flood Risk
Directory of Open Access Journals (Sweden)
Jordan R. Fischbach
2016-02-01
Full Text Available Modern joint probability methods for estimating storm surge or flood statistics are based on statistical aggregation of many hydrodynamic simulations that can be computationally expensive. Flood risk assessments that consider changing future conditions due to sea level rise or other drivers often require each storm to be run under a range of uncertain scenarios. Evaluating different flood risk mitigation measures, such as levees and floodwalls, in these future scenarios can further increase the computational cost. This study uses the Coastal Louisiana Risk Assessment model (CLARA to examine tradeoffs between the accuracy of estimated flood depth exceedances and the number and type of storms used to produce the estimates. Inclusion of lower-intensity, higher-frequency storms significantly reduces bias relative to storm suites with a similar number of storms but only containing high-intensity, lower-frequency storms, even when estimating exceedances at very low-frequency return periods.
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...
Directory of Open Access Journals (Sweden)
Markku Renfors
2005-04-01
Full Text Available Line-of-sight signal delay estimation is a crucial element of any mobile positioning system. Correctly estimating the delay of the first arriving path is a challenging topic in severe propagation environments, such as closely spaced multipaths in a multiuser scenario. Previous studies showed that there are many linear and nonlinear techniques able to resolve closely spaced multipaths when the system is not bandlimited. However, using root raised cosine (RRC) pulse shaping introduces additional errors in the delay estimation process compared to the case with rectangular pulse shaping, due to the inherent bandwidth limitation. In this paper, we introduce a novel technique for asynchronous WCDMA multipath delay estimation based on deconvolution with a suitable pulse shape, followed by the Teager-Kaiser operator. The deconvolution stage is employed to reduce the effect of the bandlimiting pulse shape.
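The Teager-Kaiser operator applied after deconvolution is simple to state: psi[n] = x[n]^2 - x[n-1]*x[n+1]. Its appeal for delay estimation is that it responds sharply to local amplitude and frequency changes. A minimal sketch of the operator alone (not the full WCDMA estimator of the paper):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure sinusoid A*cos(w*n + phi) the operator returns the constant
# A^2 * sin(w)^2, independent of phase; deviations from that constant flag
# transients such as the leading edge of an arriving path.
n = np.arange(200)
A, w = 2.0, 0.3
psi = teager_kaiser(A * np.cos(w * n + 0.7))
print(np.allclose(psi, A ** 2 * np.sin(w) ** 2))  # -> True
```

The closed-form output on a sinusoid follows from the product-to-sum identity cos(t-w)cos(t+w) = cos(t)^2 - sin(w)^2.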
Directory of Open Access Journals (Sweden)
Pin-Chih Wang
2014-09-01
Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, Netherlands, United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.
Directory of Open Access Journals (Sweden)
Dina Miftahutdinova
2015-02-01
Full Text Available Purpose: to evaluate the efficiency of the author's training program during the preparatory period for members of the Ukrainian women's rowing team preparing for the Olympic Games in London. Materials and Methods: 10 highly qualified sportswomen from the Ukrainian rowing team participated in the research. Standard tests and a Concept-2 rowing ergometer were used to assess general and special physical preparedness. Results: by the end of the preparatory period, significant improvements in the general and special physical fitness of the surveyed athletes were observed, and their deviation from model performance dropped to 5-7%. Conclusions: the high efficiency of the author's training program for the sportswomen of the Ukrainian rowing team is confirmed; they became Olympic champions in London.
Energy Technology Data Exchange (ETDEWEB)
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.
2010-04-14
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and 12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all modes with ℓ ≤ 4. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Hui, Tin-Yu J; Burt, Austin
2015-05-01
The effective population size N_e is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating N_e have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying N_e is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of N_e to several million, hence allowing the estimation of larger N_e. Finally, we demonstrate how this algorithm can cope with nonconstant N_e scenarios and be used as a likelihood-ratio test for the equality of N_e throughout the sampling horizon. An R package "NB" is available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.
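The underlying idea, inferring N_e from how far allele frequencies drift between two sampled generations, is easiest to see in the classical moment-based F-statistic estimator (Waples 1989), the simpler baseline that likelihood methods such as this one improve on. A minimal sketch (the numbers are illustrative, not from the paper):

```python
import numpy as np

def ne_temporal(x, y, s0, st, t):
    """Moment-based temporal estimator of effective population size
    (Waples-style F-statistic, sampling at generations 0 and t).
    x, y : per-locus allele frequencies at the two time points
    s0, st: diploid sample sizes at the two time points
    t    : generations between samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    fc = np.mean((x - y) ** 2 / ((x + y) / 2 - x * y))  # standardized variance
    drift = fc - 1 / (2 * s0) - 1 / (2 * st)            # subtract sampling noise
    return t / (2 * drift)

# deterministic single-locus example: frequency moves 0.5 -> 0.6 over
# t = 10 generations, with 50 diploids sampled at each time point
print(ne_temporal([0.5], [0.6], 50, 50, 10))  # -> approximately 250
```

The correction terms 1/(2*s0) and 1/(2*st) remove the variance contributed by finite sampling; when observed drift barely exceeds that noise floor, the denominator approaches zero and the estimate blows up, which is exactly the regime where likelihood methods pay off.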
EFFICIENT BLOCK MATCHING ALGORITHMS FOR MOTION ESTIMATION IN H.264/AVC
Directory of Open Access Journals (Sweden)
P. Muralidhar
2015-02-01
Full Text Available In Scalable Video Coding (SVC), motion estimation and inter-layer prediction play an important role in the elimination of temporal and spatial redundancies between consecutive layers. This paper evaluates the performance of widely accepted block matching algorithms used in various video compression standards, with emphasis on the performance of the algorithms for a didactic scalable video codec. Many different implementations of fast motion estimation algorithms have been proposed to reduce motion estimation complexity. The block matching algorithms have been analyzed with emphasis on Peak Signal to Noise Ratio (PSNR) and computations using MATLAB. In addition to the above comparisons, a survey has been done on spiral search motion estimation algorithms for video coding. A New Modified Spiral Search (NMSS) motion estimation algorithm has been proposed with lower computational complexity. The proposed algorithm achieves a 72% reduction in computation with a minimal (<1 dB) reduction in PSNR. A brief introduction to the entire flow of video compression in H.264/SVC is also presented in this paper.
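The baseline all fast searches are measured against is exhaustive full-search block matching: minimize the sum of absolute differences (SAD) over every candidate displacement in a search window. A minimal sketch of that baseline (illustrative, not the proposed NMSS algorithm, whose point is precisely to visit far fewer candidates):

```python
import numpy as np

def full_search(ref, cur, top, left, bsize=8, srange=4):
    """Exhaustive block matching: motion vector (dy, dx) minimizing the
    SAD between a block of the current frame and the reference frame."""
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# synthetic frame pair: the current frame is the reference shifted by (2, -3)
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
print(full_search(ref, cur, 16, 16))  # -> (-2, 3): the block came from there
```

Full search costs (2*srange + 1)^2 SAD evaluations per block; spiral and three-step variants cut that cost by ordering or pruning candidates, trading a small PSNR loss for speed, as the abstract quantifies.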
Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation
Directory of Open Access Journals (Sweden)
M. Agatonović
2012-12-01
Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for the high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
Rhee, Seung-Whee
2017-09-01
In order to separate aluminum from the base-cap of spent fluorescent lamps (SFL), the separation efficiency of a hammer crusher unit is estimated by introducing a binary separation theory. The base-cap of an SFL is composed of glass fragments, binder, ferrous metal, copper and aluminum. The hammer crusher unit used to recover aluminum from the base-cap consists of three stages: hammer crusher, magnetic separator and vibrating screen. The optimal rotating speed and operating time of the hammer crusher unit are determined at each stage. At the optimal conditions, the aluminum yield and the separation efficiency of the hammer crusher unit are estimated by applying a sequential binary separation theory at each stage. The separation efficiency of the hammer crusher unit is also compared with that of a roll crusher system to show the performance of aluminum recovery from the base-cap of SFL. Since the separation efficiency can be increased to 99% at stage 3, the experimental results show that aluminum can be sufficiently recovered from the base-cap by the hammer crusher unit. Copyright © 2017. Published by Elsevier Ltd.
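Binary separation theory scores each stage by how completely it splits the feed into a target and a non-target stream; one common index is the Newton efficiency, the recovery of the target component minus the carry-over of everything else. A hypothetical sketch (the stage numbers below are invented for illustration and are not the paper's data):

```python
def newton_efficiency(target_in, target_out, other_in, other_out):
    """Newton separation efficiency for a binary split:
    recovery of the target minus carry-over of the non-target."""
    return target_out / target_in - other_out / other_in

# hypothetical 3-stage unit recovering aluminum from base-cap debris:
# (Al fed, Al recovered, non-Al fed, non-Al carried over), per stage
stages = [(100.0, 90.0, 400.0, 40.0),   # stage 1: hammer crusher
          (90.0, 86.0, 40.0, 4.0),      # stage 2: magnetic separator
          (86.0, 85.0, 4.0, 0.4)]       # stage 3: vibrating screen
for i, s in enumerate(stages, 1):
    print(f"stage {i}: eta = {newton_efficiency(*s):.3f}")
```

Applying the index sequentially, with each stage fed by the previous stage's product stream, mirrors the paper's stage-by-stage estimation of the overall unit.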
Efficient focusing scheme for transverse velocity estimation using cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
The blood velocity can be estimated by cross-correlation of received RF signals, but only the velocity component along the beam direction is found. A previous paper showed that the complete velocity vector can be estimated, if received signals are focused along lines parallel to the direction...... of the flow. Here a weakly focused transmit field was used along with a simple delay-sum beamformer. A modified method for performing the focusing by employing a special calculation of the delays is introduced, so that a focused emission can be used. The velocity estimation was studied through extensive...... simulations with Field II. A 64-element, 5 MHz linear array was used. A parabolic velocity profile with a peak velocity of 0.5 m/s was considered for different angles between the flow and the ultrasound beam and for different emit foci. At 60 degrees the relative standard deviation was 0.58 % for a transmit
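At the core of the method is estimating the inter-emission time shift of the received signals by cross-correlation; the velocity then follows from the shift, the speed of sound, and the pulse repetition frequency. A minimal one-dimensional sketch with assumed parameter values (illustrative; the paper's contribution is the focusing scheme around this step):

```python
import numpy as np

# assumed system parameters (illustrative, not from the paper)
fs, fprf, c = 40e6, 5e3, 1540.0   # RF sampling rate, pulse rep. freq., sound speed
true_lag = 5                      # samples the echoes shift between two emissions

rng = np.random.default_rng(2)
s = rng.standard_normal(600)               # scatterer signal ("speckle")
line1, line2 = s[0:400], s[true_lag:400 + true_lag]  # two received lines

# cross-correlate over a small lag window and pick the peak
lags = np.arange(-10, 11)
xc = [np.dot(line1[10:-10], line2[10 - l:390 - l]) for l in lags]
est_lag = int(lags[int(np.argmax(xc))])

v = est_lag * c * fprf / (2 * fs)  # velocity along the beam from the time shift
print(est_lag)                     # -> 5
print(f"v = {v:.3f} m/s")          # -> v = 0.481 m/s
```

Sub-sample interpolation of the correlation peak (e.g. parabolic fit) is normally added on top of this, since the velocity resolution of the integer-lag estimate is quantized by fs.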
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with spatial aliasing at hand is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is highly compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires an extremely reduced computational burden while it shows a similar accuracy to the standard MUSIC.
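For reference, the full spectral search that the proposed technique compresses is the standard MUSIC pseudospectrum: project candidate steering vectors onto the noise subspace and look for nulls. A textbook sketch for a half-wavelength ULA (the baseline, not the reduced-complexity method):

```python
import numpy as np

def music_spectrum(X, n_sources, grid_deg):
    """Standard MUSIC pseudospectrum for a half-wavelength-spaced ULA.
    X: p x N snapshots; grid_deg: candidate DOAs in degrees."""
    p, N = X.shape
    R = X @ X.conj().T / N                 # sample covariance
    _, V = np.linalg.eigh(R)               # eigenvectors (ascending eigenvalues)
    En = V[:, :p - n_sources]              # noise subspace
    theta = np.deg2rad(grid_deg)
    A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(theta)))
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom                     # peaks at the source DOAs

# one source at 20 degrees on an 8-element ULA
rng = np.random.default_rng(3)
p, N, doa = 8, 1000, 20.0
a = np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(doa)))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = np.outer(a, s) + 0.05 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))

grid = np.arange(-90.0, 90.0, 0.1)
peak = grid[int(np.argmax(music_spectrum(X, 1, grid)))]
print(peak)  # peak near 20 degrees
```

The cost is dominated by evaluating the denominator on the whole angular grid, which is exactly what restricting the search to a limited sector (as in the paper) avoids.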
Punjani, Ali; Brubaker, Marcus A; Fleet, David J
2017-04-01
Discovering the 3D atomic-resolution structure of molecules such as proteins and viruses is one of the foremost research problems in biology and medicine. Electron Cryomicroscopy (cryo-EM) is a promising vision-based technique for structure estimation which attempts to reconstruct 3D atomic structures from a large set of 2D transmission electron microscope images. This paper presents a new Bayesian framework for cryo-EM structure estimation that builds on modern stochastic optimization techniques to allow one to scale to very large datasets. We also introduce a novel Monte-Carlo technique that reduces the cost of evaluating the objective function during optimization by over five orders of magnitude. The net result is an approach capable of estimating 3D molecular structure from large-scale datasets in about a day on a single CPU workstation.
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency selective fading because of the increase of the symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI) which destroys the orthogonality of the sub-carriers and degrades the system performance severely. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative inter-carrier interference (ICI) estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN). The effect of inter-carrier interference (ICI) and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
Efficient spectral estimation by MUSIC and ESPRIT with application to sparse FFT
Directory of Open Access Journals (Sweden)
Daniel Potts
2016-02-01
Full Text Available In spectral estimation, one has to determine all parameters of an exponential sum from finitely many (noisy) sampled data of this exponential sum. Frequently used methods for spectral estimation are MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques). For a trigonometric polynomial of large sparsity, we present a new sparse fast Fourier transform by shifted sampling and using MUSIC or ESPRIT, where the ESPRIT-based method has lower computational cost. Later this technique is extended to a new reconstruction of a multivariate trigonometric polynomial of large sparsity for given (noisy) values sampled on a reconstructing rank-1 lattice. Numerical experiments illustrate the high performance of these procedures.
Satyavada, Harish; Baldi, S.
2018-01-01
The operating principle of condensing boilers is based on exploiting heat from flue gases to pre-heat cold water at the inlet of the boiler: by condensing into liquid form, flue gases recover their latent heat of vaporization, leading to 10–12% increased efficiency with respect to traditional
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina
2012-01-01
by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...
C.S.L. de Graaf (Kees); B.D. Kandhai; P.M.A. Sloot
2017-01-01
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method
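As background to the quantity being computed, here is a minimal Monte Carlo CVA sketch under simplifying assumptions (unilateral CVA, flat hazard rate and flat discount curve, exposure paths supplied by the caller; all names and parameter values are illustrative, not the paper's method):

```python
import numpy as np

def cva(exposure_paths, times, r=0.4, h=0.02, rf=0.03):
    """Unilateral CVA ≈ (1 - R) · Σ_i DF(t_i) · EE(t_i) · PD(t_{i-1}, t_i).
    exposure_paths: (n_paths, n_times) simulated exposures;
    r: recovery rate, h: flat hazard rate, rf: flat risk-free rate."""
    times = np.asarray(times, dtype=float)
    # Expected positive exposure profile from the Monte Carlo paths
    ee = np.mean(np.maximum(exposure_paths, 0.0), axis=0)
    df = np.exp(-rf * times)                      # discount factors
    surv = np.exp(-h * times)                     # flat-hazard survival curve
    pd_inc = np.concatenate([[1.0 - surv[0]], surv[:-1] - surv[1:]])
    return (1.0 - r) * np.sum(df * ee * pd_inc)
```

In practice the exposure paths come from pricing the portfolio along simulated market scenarios, which is where the computational burden the abstract mentions arises.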
Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.
Simpson, William A.
In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…
Roes, A.L.|info:eu-repo/dai/nl/303022388; Patel, M.K.|info:eu-repo/dai/nl/18988097X
2008-01-01
With growing concern on the consequences of climate change and the depletion of fossil fuels, the importance of energy efficiency is globally recognized. In March 2007, the European Council set two key targets to reduce adverse effects of the use of fossil fuels: 1) A reduction of at least 20% in
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
Estimating crop yield using a satellite-based light use efficiency model
DEFF Research Database (Denmark)
Yuan, Wenping; Chen, Yang; Xia, Jiangzhou
2016-01-01
Satellite-based techniques that provide temporally and spatially continuous information over vegetated surfaces have become increasingly important in monitoring the global agriculture yield. In this study, we examine the performance of a light use efficiency model (EC-LUE) for simulating the gross...
Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations
Choudhury, Bhaskar J.; Quick, B. E.
2003-01-01
Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (which is defined as the ratio of the rate of carbon accumulation and transpiration), and water use efficiency (defined as the ratio of the rate of carbon accumulation and total evaporation), and evaluation of modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration, and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
Energy Technology Data Exchange (ETDEWEB)
Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welf, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)
2015-06-15
Purpose: pCASL has been recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainty in the tagging efficiency of pCASL remains an issue. This study aimed to estimate tagging efficiency using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resultant CBF values with those calibrated using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole-brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. The DeltaM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable DeltaM value. Taking the DeltaM calculated with the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimate of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
Ytreberg, F Marty; Zuckerman, Daniel M
2004-11-15
A promising method for calculating free energy differences ΔF is to generate nonequilibrium data via "fast-growth" simulations or experiments, and then use Jarzynski's equality. However, a difficulty with using Jarzynski's equality is that ΔF estimates converge very slowly and unreliably due to the nonlinear nature of the calculation, thus requiring large, costly data sets. The purpose of the work presented here is to determine the best estimate of ΔF given a (finite) set of work values previously generated by simulation or experiment. Exploiting statistical properties of Jarzynski's equality, we present two fully automated analyses of nonequilibrium data from a toy model and various simulated molecular systems. Both schemes remove at least several kBT of bias from ΔF estimates, compared to direct application of Jarzynski's equality, for modest-sized data sets (100 work values) in all tested systems. Results from one of the new methods suggest that good estimates of ΔF can be obtained using 5-40-fold less data than was previously possible. Extending previous work, the new results exploit the systematic behavior of bias due to finite sample size. A key innovation is better use of the more statistically reliable information available from the raw data.
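The direct estimator whose finite-sample bias the paper addresses can be sketched in a few lines (the log-sum-exp stabilization is an implementation detail, not part of the paper's bias-correction schemes):

```python
import numpy as np

def jarzynski(work, kT=1.0):
    """Direct Jarzynski estimate  ΔF = -kT ln⟨exp(-W/kT)⟩  over N work values.
    Uses the log-sum-exp trick for numerical stability. For finite N this
    estimate is biased high, which the paper's methods aim to correct."""
    w = -np.asarray(work) / kT
    m = w.max()
    return -kT * (m + np.log(np.mean(np.exp(w - m))))
```

A useful sanity check: for Gaussian-distributed work with mean μ and variance σ², the exact free energy difference is ΔF = μ − σ²/(2kT), so the bias of the estimator at small N can be measured directly.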
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2017-01-01
It is well known that pure-tone audiometry does not sufficiently describe individual hearing loss (HL) and that additional measures beyond pure-tone sensitivity might improve the diagnostics of hearing deficits. Specifically, forward masking experiments to estimate basilarmembrane (BM) input...
Relative efficiency of non-parametric error rate estimators in multi ...
African Journals Online (AJOL)
parametric error rate estimators in 2-, 3- and 5-group linear discriminant analysis. The simulation design took into account the number of variables (4, 6, 10, 18) together with the size sample n so that: n/p = 1.5, 2.5 and 5. Three values of the ...
Jiang, George J.; Sluis, Pieter J. van der
1999-01-01
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
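The first-order expansion at the heart of the method can be illustrated numerically; the matrices below are toy stand-ins, not survey covariances:

```python
import numpy as np

# Expand the precision matrix around a model covariance A when C = A + B
# and B is small and estimated directly (e.g. with shape noise turned off):
#   (A + B)^(-1) ≈ A^(-1) - A^(-1) B A^(-1)   (first order in B)
p = 5
A = np.diag(np.linspace(1.0, 2.0, p))      # analytic, noise-free part
B_hat = 0.02 * np.eye(p)                   # direct estimate of the extra term

Ainv = np.linalg.inv(A)
Psi_first_order = Ainv - Ainv @ B_hat @ Ainv
Psi_exact = np.linalg.inv(A + B_hat)
max_err = np.abs(Psi_first_order - Psi_exact).max()
```

The residual error is second order in B, which is why the expansion needs far fewer simulations than a full sample-covariance estimate.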
A Methodology for the Estimation of the Wind Generator Economic Efficiency
Zaleskis, G.
2017-12-01
Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
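The constant-property model's efficiency is a closed-form expression; the sketch below uses the standard formula with ZT taken at the mean operating temperature, as the abstract recommends:

```python
from math import sqrt

def efficiency_constant_property(zt, t_hot, t_cold):
    """Maximum conversion efficiency in the constant-property model
    (standard result), with ZT evaluated at the mean temperature
    T̄ = (Th + Tc)/2 of the projected operating range."""
    carnot = (t_hot - t_cold) / t_hot
    m = sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)
```

For example, ZT = 1 between 300 K and 500 K gives roughly 8% conversion efficiency, about a fifth of the Carnot limit for that temperature difference.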
Directory of Open Access Journals (Sweden)
Sergey Kharitonov
2015-06-01
Full Text Available Optimum usage of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is thus impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation in relation to the transport subsystem of airport infrastructure. The work also considers how algorithmic computational mechanisms could improve the tools of public administration of transport subsystems.
DEFF Research Database (Denmark)
Løhndorf, Petar Durdevic; Pedersen, Simon; Yang, Zhenyu
2016-01-01
to reach the desired oil production capacity, consequently the discharged amount of oil increases.This leads to oceanic pollution, which has been linked to various negative effects in the marine life. The current legislation requires a maximum oil discharge of 30 parts per million (PPM). The oil in water...... a novel control technology which is based on online and dynamic OiW measurements. This article evaluates some currently available on- line measuring technologies to measure OiW, and the possibility to use these techniques for hydrocyclone efficiency evaluation, model development and as a feedback...
Directory of Open Access Journals (Sweden)
Borsukiewicz-Gozdur Aleksandra
2007-01-01
Full Text Available In the work presented are the results of investigations regarding the effectiveness of operation of a power plant fed by geothermal water with flow rates of 100, 150, and 200 m3/h and temperatures of 70, 80, and 90 °C, i.e., geothermal water with the parameters available in some towns of the West Pomeranian region, as well as in Stargard Szczecinski (86.4 °C), Poland. The results of calculations concern a geothermal power plant system with the possibility of utilizing heat for technological purposes. Possibilities of application of different working fluids are analyzed with respect to the most efficient utilization of geothermal energy.
Directory of Open Access Journals (Sweden)
Pioz Maryline
2011-04-01
Full Text Available Abstract Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SARerr model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SARerr model, which led to a trend SARerr model. Overall, BT spread from north-eastern to south-western France. The average trend SARerr-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SARerr-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as the trend SARerr models are powerful tools to provide information on direction and speed of disease diffusion when the only data available are date and location of cases.
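The TSA step can be sketched as fitting a polynomial trend surface to case-onset dates and converting the local time gradient into a front speed (a quadratic surface and noiseless synthetic data are illustrative simplifications; the paper additionally corrects for spatial autocorrelation with the SARerr model):

```python
import numpy as np

def tsa_velocity(x, y, t):
    """Fit a quadratic trend surface t(x, y) to onset times, then return
    the local front speed 1/|grad t| at each location (km/day when x, y
    are in km and t in days). Zero gradient means an undefined speed."""
    X = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    dtdx = beta[1] + beta[3] * y + 2.0 * beta[4] * x
    dtdy = beta[2] + beta[3] * x + 2.0 * beta[5] * y
    return 1.0 / np.hypot(dtdx, dtdy)
```

On a front advancing uniformly along one axis, the fitted gradient is constant and the recovered speed matches the true propagation rate.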
Efficient time of arrival estimation in the presence of multipath propagation.
Villemin, Guilhem; Fossati, Caroline; Bourennane, Salah
2013-10-01
Most of acoustical experiments face multipath propagation issues. The times of arrival of different ray paths on a sensor can be very close. To estimate them, high resolution algorithms have been developed. The main drawback of these methods is their need of a full rank spectral matrix of the signals. The frequential smoothing technique overcomes this issue by dividing the received signal spectrum into several overlapping sub-bands. This division yields a transfer matrix that may suffer rank deficiency. In this paper, a new criterion to optimally choose the sub-band frequencies is proposed. Encouraging results were obtained on real-world data.
Shalkov, Anton; Mamaeva, Mariya
2017-11-01
The article considers the application of nondestructive testing methods to the gear reducers of belt conveyors used as a means of transport. Particular attention is paid to such types of technical-condition diagnostics as thermal imaging and analysis of the state of lubricants. The relevance of the nondestructive testing presented in the article stems from the need to increase the energy efficiency of the transport systems of coal and mining enterprises, in particular the reducers of belt conveyors. Periodic in-depth spectral-emission diagnostics of the operating oil, together with monitoring of the operating temperature regime, allows the actual technical condition of a belt-conveyor gearbox to be tracked and premature failures to be prevented. In turn, thermal imaging diagnostics reveals defects at the earliest stage of their formation and development, which allows planning the volumes and terms of equipment repair. The presented diagnostics of technical condition make it possible to monitor equipment condition over time and avoid premature failure, thereby increasing the energy efficiency of both the transport system and the enterprise as a whole, and avoiding unreasonable increases in operating and maintenance costs.
Battu, Raminderjit Singh; Singh, Baljeet; Kooner, Rubaljot; Singh, Balwinder
2008-04-09
An analytical method was standardized for the estimation of residues of flubendiamide and its metabolite desiodo flubendiamide in various substrates comprising cabbage, tomato, pigeonpea grain, pigeonpea straw, pigeonpea shell, chilli, and soil. The samples were extracted with acetonitrile, diluted with brine solution, partitioned into chloroform, dried over anhydrous sodium sulfate, and treated with 500 mg of activated charcoal powder. Final clear extracts were concentrated under vacuum and reconstituted into HPLC-grade acetonitrile, and residues were estimated using HPLC equipped with a UV detector at a wavelength of 230 nm and a C18 column. Acetonitrile/water (60:40 v/v) at 1 mL/min was used as mobile phase. Both flubendiamide and desiodo flubendiamide presented distinct peaks at retention times of 11.07 and 7.99 min, respectively. Consistent recoveries ranging from 85 to 99% for both compounds were observed when samples were spiked at 0.10 and 0.20 mg/kg levels. The limit of quantification of the method was worked out to be 0.01 mg/kg.
Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra
Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.
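The paper demonstrates MLPs and SVMs; as a minimal stand-in for the same idea, the sketch below fits a linear ridge-regression "virtual sensor" mapping low-resolution channels to target spectra (all names are illustrative, and real spectra would require the nonlinear models discussed above):

```python
import numpy as np

def fit_virtual_sensor(X, Y, lam=1e-6):
    """Ridge regression from low-resolution channels X (n x d) to target
    spectra Y (n x k). Returns a weight matrix including a bias row."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias column
    d = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ Y)

def predict_spectra(W, X):
    """Estimate unmeasured spectra from the cheaper measurements."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W
```

A linear model gives no uncertainty estimate; the MDMK kernels in the paper are what provide the per-prediction uncertainty the abstract emphasizes.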
Cutrignelli, Annalisa; Trapani, Adriana; Lopedota, Angela; Franco, Massimo; Mandracchia, Delia; Denora, Nunzio; Laquintana, Valentino; Trapani, Giuseppe
2011-12-01
The main aim of the present study was to estimate the carrier characteristics affecting the dissolution efficiency of griseofulvin (Gris) containing blends (BLs) using partial least squares (PLS) regression analysis. These systems were prepared at three different drug/carrier weight ratios (1/5, 1/10, and 1/20) by the solvent evaporation method, a well-established method for preparing solid dispersions (SDs). The carriers used were structurally diverse, including polymers, a polyol, acids, bases and sugars. The BLs were characterised at the solid state by spectroscopic (Fourier transform infrared spectroscopy), thermoanalytical (differential scanning calorimetry) and X-ray diffraction studies, and their dissolution behaviours were quantified in terms of dissolution efficiencies (log DE/DE(Gris)). The correlation between the selected descriptors, including parameters for size, lipophilicity, cohesive energy density, and hydrogen bonding capacity, and log DE/DE(Gris) (i.e., DE and DE(Gris) are the dissolution efficiencies of the BLs and the pure drug, respectively) was established by PLS regression analysis. Thus, two models characterised by satisfactory coefficients of determination were derived. The generated equations point out that aqueous solubility, density, lipophilic/hydrophilic character, dispersive/polar forces and hydrogen bonding acceptor/donor ability of the carrier are important features for dissolution efficiency enhancement. Finally, it could be concluded that the correlations developed may be used to predict at a semiquantitative level the dissolution behaviour of BLs of other essentially neutral drugs possessing hydrogen bonding acceptor groups only.
Directory of Open Access Journals (Sweden)
Latyshev N.V.
2012-03-01
Full Text Available Purpose of the work: to verify experimentally the efficiency of a method for developing the special endurance of athletes using control-trainer devices. Twenty-four athletes aged 16-17 years took part in the experiment. Reliable differences were found between the groups of athletes on indices in tests of special physical preparation (hand and foot strikes), in a test of special endurance (on all indices of the test except the number of exercises executed in the first period), and during work on the control-trainer device (work on a trainer for 60 seconds and work on a trainer for 3×120 seconds).
Firoozabadi, Reza; Helfenbein, Eric D; Babaeizadeh, Saeed
2017-08-18
The feasibility of using photoplethysmography (PPG) for estimating heart rate variability (HRV) has been the subject of many recent studies, with contradicting results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG due to its inherent characteristics. We developed a PPG-only algorithm that computes a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. We tested our algorithm on a large database from high-risk ICU patients containing arrhythmias and significant amounts of artifact. Average differences between PPG-based and ECG-based parameters were evaluated for SDSD and RMSSD. Our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings.
Directory of Open Access Journals (Sweden)
Amir Hossein Fallahpour
2017-02-01
Full Text Available There are numerous theoretical approaches to estimating the power conversion efficiency (PCE) of organic solar cells (OSCs), ranging from the empirical approach to calculations based on general considerations of thermodynamics. Depending on the level of abstraction and model assumptions, the accuracy of PCE estimation and the complexity of the calculation can change dramatically. In particular, PCE estimation with a drift-diffusion approach (widely investigated in the literature) strongly depends on the assumptions made for the physical models and optoelectrical properties of the semiconducting materials. This has led to huge deviations as well as complications in the analysis of simulated results aiming to understand the factors limiting the performance of OSCs. In this work, we intend to highlight the complex relation between mobility, exciton dynamics, nanoscale dimension, and loss mechanisms in one framework. Our systematic analysis represents key information on the sensitivity of the drift-diffusion approach, to estimate how physical parameters and physical processes bind the PCE of the device under the influence of structure, contact, and material layer properties. The obtained results ultimately led to recommendations for putting effort into certain properties to get the most out of avoidable losses, presented the impact and importance of modification of material properties, and in particular, recommended to what degree the design of new material could improve OSC performance.
Directory of Open Access Journals (Sweden)
Müller Kai F
2005-10-01
Full Text Available Abstract Background For parsimony analyses, the most common way to estimate confidence is by resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness does not significantly change Bremer support and jackknife or bootstrap proportions any more. Results For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question of whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in the case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per
Nouvellon, Yann; Seen, Danny L.; Rambal, S.; Begue, Agnes; Moran, M. Susan; Kerr, Yann H.; Qi, Jiaguo
1998-12-01
A reliable estimation of primary production of terrestrial ecosystems is often a prerequisite for land management and is also important in ecological and climatological studies. At a regional scale, grassland primary production estimates are increasingly being made using satellite data. In a currently used approach, regional Gross, Net and Above-ground Net Primary Productivity (GPP, NPP and ANPP) are derived from the parametric model of Monteith and are calculated as the product of the fraction of incident photosynthetically active radiation absorbed by the canopy (fAPAR) and the gross, net and above-ground net production (radiation-use) efficiencies (εg, εn, εan), fAPAR being derived from indices calculated from satellite-measured reflectances in the red and near infrared. The accuracy and realism of the primary production values estimated by this approach therefore largely depend on an accurate estimation of εg, εn and εan. However, data are scarce for production efficiencies of semi-arid grasslands, and their temporal and spatial variations are poorly documented, often leading to large errors in the estimates. In this paper a modeling approach taking into account relevant ecosystem processes and based on extensive field data is used to estimate sub-seasonal and inter-annual variations of εg, εn and εan of a shortgrass site in Arizona, and to explain these variations quantitatively by those of plant water stress, temperature, leaf aging, and processes such as respiration and changes in allocation pattern. For example, over the 3 study years, the mean εg, εn and εan were found to be 1.92, 0.74 and 0.29 g DM (MJ APAR)-1 respectively. εg and εn exhibited very important inter-annual and seasonal variations mainly due to different water stress conditions during the growing season. Inter-annual variations of εan were much
Lipinski, Doug; Mohseni, Kamran
2010-03-01
A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
Aleksandrov, V. I.; Vasilyeva, M. A.; Pomeranets, I. B.
2017-10-01
The paper presents analytical calculations of specific pressure loss in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry. The calculations are based on the results of experimental studies of the dependence of specific pressure loss on the hydraulic roughness of the internal surface of pipelines lined with a polyurethane coating. The experiments proved that the hydraulic roughness of the polyurethane coating is smaller by a factor of four than that of steel pipelines, resulting in a decrease of the hydraulic resistance coefficients entering the calculation formula for specific pressure loss, the Darcy-Weisbach formula. Relative and equivalent roughness coefficients are calculated for pipelines with and without the polyurethane coating. Comparative calculations show that applying a polyurethane coating to hydrotransport pipelines reduces specific energy consumption in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry by a factor of 1.5. The experiments were performed on a laboratory hydraulic test rig to estimate the character and rate of change of physical roughness in pipe samples with polyurethane coating. The experiments showed that over 484 hours of operation, roughness changed only slightly in all pipe samples. As a result of processing the experimental data by methods of mathematical statistics, an empirical formula was obtained for calculating the operating roughness of the polyurethane coating surface depending on the duration of pipeline operation with iron ore processing tailings slurry.
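The Darcy-Weisbach formula the abstract refers to can be sketched directly; a lower friction factor λ for the lined pipe translates proportionally into lower pressure loss. All numerical values below are hypothetical and chosen only to illustrate the reported factor of 1.5:

```python
def darcy_weisbach_dp(lam, length, diameter, density, velocity):
    """Darcy-Weisbach pressure loss over a pipe run:
    dp = lam * (L/D) * rho * v^2 / 2   [Pa]"""
    return lam * (length / diameter) * density * velocity ** 2 / 2.0

# Hypothetical slurry line, 100 m of 300 mm pipe at 3 m/s with slurry
# density 1300 kg/m3; the friction factors are illustrative, not measured:
dp_steel = darcy_weisbach_dp(lam=0.030, length=100.0, diameter=0.3,
                             density=1300.0, velocity=3.0)
dp_lined = darcy_weisbach_dp(lam=0.020, length=100.0, diameter=0.3,
                             density=1300.0, velocity=3.0)
ratio = dp_steel / dp_lined  # 1.5: pressure loss (and pumping energy) ratio
```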
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to track the contaminant state accurately. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under both perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. The low-rank filters are demonstrated to reduce the computational effort of the KF to about 3% of that of the full filter. © 2012 American Society of Civil Engineers.
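The assimilation step these filters share is the standard Kalman analysis: blend the model forecast with an observation in proportion to their uncertainties. A minimal scalar sketch of that update (the SEKF/SFKF in the text differ by replacing the full covariance with a low-rank factorization, which this toy example does not show):

```python
def kalman_update(x, P, H, R, y):
    """Scalar Kalman filter analysis step.
    x: forecast state, P: forecast error variance,
    H: observation operator, R: observation error variance, y: observation."""
    S = H * P * H + R            # innovation variance
    K = P * H / S                # Kalman gain
    x_new = x + K * (y - H * x)  # corrected state
    P_new = (1.0 - K * H) * P    # reduced uncertainty
    return x_new, P_new

# Forecast concentration 1.0 with variance 1; observe 2.0 with variance 1:
x_new, P_new = kalman_update(x=1.0, P=1.0, H=1.0, R=1.0, y=2.0)
# -> x_new = 1.5 (pulled halfway toward the data), P_new = 0.5
```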
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru [Fuel Cell System Group, Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2009-03-15
Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. The uncertainty of the heating value measured by the gas chromatography method on a molar basis was estimated as ±0.12% at the 95% level of confidence. Micro gas chromatography with or without CH4 quantification may be able to reduce the measurement uncertainty. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard liters per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. The uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably. (author)
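Since electrical efficiency is a product/quotient of power, fuel flow rate and heating value, independent relative uncertainties combine in quadrature (root-sum-square). A sketch using the component figures quoted in the abstract; treating them as independent standard-type contributions is our assumption, not the paper's detailed budget:

```python
import math

def combined_relative_uncertainty(*rel_uncerts):
    """Root-sum-square combination of independent relative uncertainties,
    e.g. for efficiency = power / (fuel flow x heating value)."""
    return math.sqrt(sum(u ** 2 for u in rel_uncerts))

# Components from the abstract: power +-0.14%, flow +-1.0%, heating value +-0.12%
u_eff = combined_relative_uncertainty(0.14, 1.0, 0.12)
# ~1.02%, i.e. the flow-rate term dominates, consistent with the reported
# +-1.0%-level overall uncertainty
```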
Energy Technology Data Exchange (ETDEWEB)
Yohei Tanaka; Akihiko Momma; Ken Kato; Akira Negishi; Kiyonami Takano; Ken Nozaki; Tohru Kato [National Institute of Advanced Industrial Science and Technology (AIST), Ibaraki (Japan). Fuel Cell System Group, Energy Technology Research Institute
2009-03-15
Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system using town gas. The uncertainty of the heating value measured by the gas chromatography method on a molar basis was estimated as ±0.12% at the 95% level of confidence. Micro gas chromatography with or without CH4 quantification may be able to reduce the measurement uncertainty. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard liters per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. The uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the SOFC systems are operated relatively stably.
Directory of Open Access Journals (Sweden)
Popkov V.M.
2013-03-01
Full Text Available Research objective: To study the role of prognostic factors in estimating the risk of development of recurrent prostate cancer after treatment by high-intensity focused ultrasound (HIFU). Objects and Research Methods: The research included 102 patients with localized prostate cancer revealed morphologically by biopsy. They were treated in the Clinic of Urology of the Saratov Clinical Hospital named after S. R. Mirotvortsev. 102 sessions of initial operative treatment of prostate cancer by HIFU were performed. The general group of patients (n=102) was subdivided by random distribution into two samples: a group of patients with no recurrent tumor and a group of patients with a recurrent tumor revealed by morphological examination of biopsy material of residual prostate tissue after HIFU. A computer program was used to study predictors of outcome in patients with prostate cancer. Results: The risk of development of recurrent prostate cancer grew with rising PSA level and PSA density. An index of positive biopsy columns <0.2 was associated with recurrence of prostate cancer in 17% of cases, while an index of 0.5 and higher was associated with recurrence in 59% of cases. A tendency toward an obvious growth in the number of relapses was revealed with increasing Gleason score in the presence of perineural invasion. Cases of recurrent prostate cancer predominated in patients with lymphovascular invasion. In conclusion, the main predictors of recurrent prostate cancer development may include PSA, PSA density, Gleason score, lymphovascular invasion and perineural invasion.
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
Miyabe, Kanji; Guiochon, Georges
2011-01-01
It is probably impossible to prepare high-performance liquid chromatography (HPLC) columns that have a completely homogeneous packing structure. Many reports in the literature show that the radial distributions of the mobile phase flow velocity and the local column efficiency are not flat, even in columns considered as good. A degree of radial heterogeneity seems to be a common property of all HPLC columns and an important source of peak tailing, which prevents the derivation of accurate information on chromatographic behavior from a straightforward analysis of elution peak profiles. This work reports on a numerical method developed to derive from recorded peak profiles the column efficiency at the column center, the degree of column radial heterogeneity, and the polynomial function that best represents the radial distributions of the flow velocity and the column efficiency. This numerical method was applied to two concrete examples of tailing peak profiles previously described. It was demonstrated that this numerical method is effective to estimate important parameters characterizing the radial heterogeneity of chromatographic columns.
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate them into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters that are responsible for electrode degradation are identified and estimated, based on battery data obtained from the charge cycles. The Bayesian approach, with parameters estimated by probability distributions, is employed to account for uncertainties arising in the model and battery data. The Markov chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations that solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling the integration of the method into the vehicle BMS. Using this approach, the conservative bound of capacity fade can be determined for the vehicle in service, which represents the safety margin reflecting the uncertainty.
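The metamodel trick above is generic: fit a cheap polynomial to a few expensive model runs, then let MCMC evaluate the polynomial instead of re-solving the PDE system. A toy sketch with a hypothetical stand-in for the expensive model (the paper's actual metamodel construction may differ):

```python
def expensive_model(x):
    """Stand-in for a costly simulation (hypothetical toy function)."""
    return 2.0 * x * x + 1.0

def quadratic_surrogate(samples):
    """Build a quadratic metamodel through three (x, y) samples in
    Lagrange form; evaluating it costs a handful of multiplications."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    def poly(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return poly

# Three expensive runs, then arbitrarily many cheap surrogate evaluations:
samples = [(x, expensive_model(x)) for x in (0.0, 1.0, 2.0)]
surrogate = quadratic_surrogate(samples)
# surrogate(1.5) reproduces the quadratic exactly: 2*1.5^2 + 1 = 5.5
```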
El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham
2014-07-01
Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements that use indigenous materials and skills. Gravel Bed Hydroponics (GBH), a constructed wetland system, has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of species of ciliates (23 species), and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided an excellent means of estimating the efficiency of the system for sewage purification. The results affirmed the ability of this system to produce high-quality effluent with sufficient microbial reduction to enable the production of irrigation-quality water.
El-Serehy, Hamed A.; Bahgat, Magdy M.; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham
2013-01-01
Interest has increased over the last several years in using different methods for treating sewage. The rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements that use indigenous materials and skills. Gravel Bed Hydroponics (GBH), a constructed wetland system, has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of species of ciliates (23 species), and these organisms were potentially very useful as biological indicators for various saprobic conditions. Moreover, the ciliates provided an excellent means of estimating the efficiency of the system for sewage purification. The results affirmed the ability of this system to produce high-quality effluent with sufficient microbial reduction to enable the production of irrigation-quality water. PMID:24955010
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)
2017-11-02
This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM&V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and its work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares current industry practices for determining net energy savings but does not prescribe methods.
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems, efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user-defined ranges of possible headings and speeds. For linear responses standard … the first-order reliability method (FORM), well known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments … and the heave acceleration. Resonance excitation, parametric roll and forced roll are all included in the model, albeit with some simplifications. The result is the mean out-crossing rate of the roll angle together with the corresponding most probable wave scenarios (critical wave episodes), leading to user…
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment, and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. As under such conditions of the identification experiment we can only talk about D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
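One common definition of the D-efficiency measure mentioned above compares the determinant of a design's Fisher information matrix M with that of the D-optimal design, normalized by the number of parameters p: Deff = (det M / det M*)^(1/p). A sketch under that assumed definition (the paper's exact normalization may differ):

```python
def d_efficiency(det_m, det_m_opt, n_params):
    """D-efficiency of an experiment design relative to the D-optimal one:
    (det M / det M*)^(1/p), where p is the number of estimated parameters.
    Returns a value in (0, 1], with 1 meaning D-optimal."""
    return (det_m / det_m_opt) ** (1.0 / n_params)

# Hypothetical two-parameter identification experiment where the friendly
# input achieves 81% of the optimal information determinant:
eff = d_efficiency(det_m=0.81, det_m_opt=1.0, n_params=2)  # -> 0.9
```

A constraint such as Deff ≥ 0.9 then formalizes "lose at most 10% estimation efficiency in exchange for a friendlier input".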
Brienen, R J W; Gloor, E; Clerici, S; Newton, R; Arppe, L; Boom, A; Bottrell, S; Callaghan, M; Heaton, T; Helama, S; Helle, G; Leng, M J; Mielikäinen, K; Oinonen, M; Timonen, M
2017-08-18
Various studies report substantial increases in intrinsic water-use efficiency (Wi), estimated using carbon isotopes in tree rings, suggesting trees are gaining increasingly more carbon per unit of water lost due to increases in atmospheric CO2. Usually, however, reconstructions do not correct for the effect of intrinsic developmental changes in Wi as trees grow larger. Here we show, by comparing Wi across varying tree sizes at one CO2 level, that ignoring such developmental effects can severely affect inferences of trees' Wi. Wi doubled or even tripled over a tree's lifespan in three broadleaf species due to changes in tree height and light availability alone, and there are also weak trends for pine trees. Developmental trends in broadleaf species are as large as the trends previously assigned to CO2 and climate. Credible future tree-ring isotope studies require explicit accounting for species-specific developmental effects before CO2 and climate effects are inferred. Intrinsic water-use efficiency (Wi) reconstructions using tree rings often disregard developmental changes in Wi as trees age. Here, the authors compare Wi across varying tree sizes at a fixed CO2 level and show that ignoring developmental changes impacts conclusions on trees' Wi responses to CO2 or climate.
Li, Shutian; He, Ping; Jin, Jiyun
2013-03-30
Understanding the nitrogen (N) use efficiency and N input/output balance in the agricultural system is crucial for best management of N fertilisers in China. In the last 60 years, N fertiliser consumption correlated positively with grain production. During that period the partial factor productivity of N (PFPN) declined greatly, from more than 1000 kg grain kg⁻¹ N in the 1950s to nearly 30 kg grain kg⁻¹ N in 2008. This change in PFPN could be largely explained by the increase in N rate. The average agronomic efficiency of fertiliser N (AEN) for rice, wheat and maize during 2000-2010 was 12.6, 8.3 and 11.5 kg kg⁻¹ respectively, which was similar to that in the early 1980s but lower than that in the early 1960s. Estimation based on statistical data showed that a total of 49.16 × 10⁶ t of N was input into Chinese agriculture, of which chemical N, organic fertiliser N, biologically fixed N and other sources accounted for 58.2, 24.3, 10.5 and 7.0% respectively. Nitrogen was in surplus in all regions, the total N surplus being 10.6 × 10⁶ t (60.6 kg ha⁻¹). The great challenge is to balance the use of current N fertilisers between regions and crops to improve N use efficiency while maintaining or increasing crop production under the high-intensity agricultural system of China. © 2012 Society of Chemical Industry.
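The two efficiency indices used above have simple standard definitions: PFPN is yield per unit of N applied, and AEN is the yield gain over an unfertilised control per unit of N. A sketch with hypothetical plot data (the yields below are illustrative, chosen so AEN matches the abstract's maize figure of 11.5 kg kg⁻¹):

```python
def partial_factor_productivity(grain_yield, n_rate):
    """PFP_N = grain yield per unit of N applied (kg grain per kg N)."""
    return grain_yield / n_rate

def agronomic_efficiency(yield_fertilized, yield_unfertilized, n_rate):
    """AE_N = extra grain produced per kg of fertiliser N applied."""
    return (yield_fertilized - yield_unfertilized) / n_rate

# Hypothetical maize plot: 9000 kg/ha with 200 kg N/ha vs 6700 kg/ha without N
pfp = partial_factor_productivity(9000.0, 200.0)      # 45.0 kg grain per kg N
ae = agronomic_efficiency(9000.0, 6700.0, 200.0)      # 11.5 kg kg-1
```

The contrast between the two indices explains the historical trend in the abstract: PFPN falls mechanically as N rates rise even when AEN stays flat.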
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires a clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI), in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, making it possible to reduce the ensemble size by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Rui Zhang
2017-05-01
Full Text Available Estimates of regional net primary productivity (NPP) are useful in modeling regional and global carbon cycles, especially in karst areas. This work developed a new method to study NPP characteristics and changes in Chongqing, a typical karst area. To estimate NPP accurately, a model integrating an ecosystem process model (CEVSA) with a light-use-efficiency model (GLOPEM), called GLOPEM-CEVSA, was applied. The fraction of photosynthetically active radiation (fPAR) was derived from remote sensing data inversion based on moderate resolution imaging spectroradiometer (MODIS) atmospheric and land products. Validation analyses showed that the PAR and NPP values simulated by the model matched the observed data well. The values of other relevant NPP models, as well as the MOD17A3 NPP products (NPPMOD17), were compared. In terms of spatial distribution, NPP decreased from northeast to southwest in the Chongqing region. The annual average NPP in the study area was approximately 534 gC/m²·a (Std. = 175.53) from 2001 to 2011, with obvious seasonal variation. The NPP from April to October accounted for 80.1% of the annual NPP, while that from June to August accounted for 43.2%. NPP changed with the fraction of absorbed PAR, and NPP was also significantly correlated with precipitation and temperature at monthly temporal scales, showing stronger sensitivity to interannual variation in temperature.
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It is emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md Nurul Haque
2015-01-01
Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases.
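The β-weight idea can be illustrated with a Gaussian-type weight of the form exp(−β/2 · z²), a form commonly used in minimum β-divergence estimation. This functional form is our assumption for illustration; the paper's exact (possibly multivariate) weight may differ:

```python
import math

def beta_weight(x, mu, sigma, beta=0.2):
    """Gaussian-type beta-weight exp(-beta/2 * ((x - mu)/sigma)^2).
    ASSUMED form for illustration, with beta = 0.2 as in the abstract.
    Values lie in (0, 1]: near 1 for typical expressions, near 0 for outliers,
    so weighted estimates down-weight outlying observations smoothly."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * beta * z * z)

# A typical expression close to the bulk vs a gross outlier:
w_typical = beta_weight(10.2, mu=10.0, sigma=1.0)  # close to 1
w_outlier = beta_weight(25.0, mu=10.0, sigma=1.0)  # effectively 0
```

Comparing each observed weight against a cut-off derived from the weight distribution then flags outliers, as described in the abstract.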
Directory of Open Access Journals (Sweden)
Mohammad Manir Hossain Mollah
Full Text Available Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases.
Directory of Open Access Journals (Sweden)
Bazhenov Viktor Ivanovich
2015-09-01
Full Text Available The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economical methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluation based on Present Value (PV) determination. This makes it possible for an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). The air-blower station of a WWTP was selected for the economic analysis as a significant energy consumer. The technical variants compared are blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single-stage with double vane control. The LCC estimation shows the last variant to be the most attractive and cost-effective for investment, with savings of 17.2% (vs. variant 1) and 21.0% (vs. variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) together with the related annual expenditure (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which is the reason for the adopted energy consumption figures (kW∙h): 2158 (variant 1), 1743-2201 (variant 2) and 1058-1951 (variant 3). The article refers to the Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. The example given concerns energy-efficient solutions, but the same materials can be used for cases of other resource savings, such as all types of fuel. The article concludes by endorsing the use of the LCC indicator jointly with the method of discounted cash flows, which satisfies the investor's interest in technical and economic comparisons.
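A minimal sketch of the LCC/PV arithmetic described above. Only the 25-year horizon and the 5% real discount rate come from the abstract; the capital and annual cost figures for the two blower variants are hypothetical.

```python
def present_value(cost, year, discount_rate):
    """Discount a cost incurred in a given year back to year 0."""
    return cost / (1.0 + discount_rate) ** year

def life_cycle_cost(capital, annual_cost, years, discount_rate):
    """LCC = initial capital + discounted annual expenditures (Ce+Co+Cm)."""
    return capital + sum(
        present_value(annual_cost, t, discount_rate) for t in range(1, years + 1)
    )

# Hypothetical blower variants over a 25-year horizon at a 5% real rate:
# an uncontrolled machine (cheap to buy, costly to run) vs. vane control.
lcc_uncontrolled = life_cycle_cost(capital=100_000, annual_cost=30_000,
                                   years=25, discount_rate=0.05)
lcc_vane_control = life_cycle_cost(capital=140_000, annual_cost=22_000,
                                   years=25, discount_rate=0.05)
```

With these invented numbers the higher capital cost of the controlled variant is more than repaid by its discounted operating savings, which is the shape of the comparison the article makes.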
Syrejshchikova, T. I.; Gryzunov, Yu. A.; Smolina, N. V.; Komar, A. A.; Uzbekov, M. G.; Misionzhnik, E. J.; Maksimova, N. M.
2010-05-01
The efficiency of the therapy of psychiatric diseases is estimated using fluorescence measurements of the conformational changes of human serum albumin in the course of medical treatment. The fluorescence decay curves of the CAPIDAN probe (N-carboxyphenylimide of dimethylaminonaphthalic acid) in blood serum are measured. The probe binds specifically to the albumin drug-binding sites and fluoresces as a reporter ligand. A variation in the conformation of the albumin molecule substantially affects the CAPIDAN fluorescence decay curve on the subnanosecond time scale. A subnanosecond pulsed laser or a PicoQuant LED excitation source and a fast photon detector with a time resolution of about 50 ps are used for the kinetic measurements. Ten patients suffering from depression and treated at the Institute of Psychiatry were clinically assessed beforehand. Blood for analysis was taken from each patient prior to treatment and in the third week of treatment. For the ten patients, analysis of the fluorescence decay curves of the probe in blood serum using three-exponential fitting shows that the difference between the amplitudes of the decay function corresponding to the long-lived (9 ns) fluorescence of the probe before and after the therapeutic procedure differs reliably from zero at a significance level of 1% (p < 0.01).
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)
2017-10-05
This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management, as it can be used for better water allocation or better system operation and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost is to be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then approximate the opportunity cost. This approximation can then be used to design efficient pricing policies that give users incentives to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear
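The perturbation method described above can be illustrated on a toy two-user system. The concave benefit curves, the allocation, and the perturbation size are all invented for the example; in practice a river basin simulator would stand in for `system_benefits`.

```python
import math

def system_benefits(allocations, benefit_fns):
    """Total benefit of a given water allocation across users."""
    return sum(f(a) for f, a in zip(benefit_fns, allocations))

def opportunity_cost(allocations, benefit_fns, node, delta=1e-3):
    """Approximate the marginal value of one extra unit of water at `node`
    by perturbing the allocation and re-running the (here trivial) model."""
    perturbed = list(allocations)
    perturbed[node] += delta
    base = system_benefits(allocations, benefit_fns)
    return (system_benefits(perturbed, benefit_fns) - base) / delta

# Hypothetical concave benefit curves for two users (diminishing returns).
benefit_fns = [lambda w: 10 * math.log(1 + w), lambda w: 6 * math.log(1 + w)]
alloc = [5.0, 3.0]
oc_user0 = opportunity_cost(alloc, benefit_fns, node=0)
```

For user 0 the analytic marginal benefit at w = 5 is 10/(1+5) ≈ 1.667, so the finite-difference estimate should land very close to that; the cost the abstract highlights is that a real model needs one such rerun per node and time step.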
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
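The consumer NPV test that underlies a cost-effective potential scenario can be sketched as follows. All appliance numbers (incremental cost, savings, price, lifetime, discount rate) are hypothetical, not BUENAS outputs.

```python
def npv_of_standard(incremental_cost, annual_energy_savings, energy_price,
                    lifetime_years, discount_rate):
    """Consumer NPV of a more efficient appliance: discounted bill savings
    minus the extra purchase cost. Positive NPV means cost-effective."""
    savings = sum(
        annual_energy_savings * energy_price / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1)
    )
    return savings - incremental_cost

# Hypothetical refrigerator standard: $40 extra cost, 100 kWh/yr saved,
# $0.12/kWh, 15-year life, 5% consumer discount rate.
npv = npv_of_standard(incremental_cost=40, annual_energy_savings=100,
                      energy_price=0.12, lifetime_years=15, discount_rate=0.05)
```

Raising the efficiency target raises both terms; the CEP scenario described above searches for the highest target at which the NPV stays non-negative.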
Cheng, Zhen; Jiang, Jingkun; Chen, Changhong; Gao, Jian; Wang, Shuxiao; Watson, John G; Wang, Hongli; Deng, Jianguo; Wang, Buying; Zhou, Min; Chow, Judith C; Pitchford, Marc L; Hao, Jiming
2015-01-20
Aerosol mass scattering efficiency (MSE), used for the scattering coefficient apportionment of aerosol species, has mostly been studied under conditions of low aerosol mass loading in developed countries. Severe pollution episodes with high particle concentrations have occurred frequently in urban eastern China in recent years. Based on synchronous measurements of aerosol physical, chemical, and optical properties in the megacity of Shanghai over two months during autumn 2012, we studied MSE characteristics at high aerosol mass loading and examined their relationships with mass concentrations and size distributions. It was found that MSE values from the original US IMPROVE algorithm could not represent the actual aerosol characteristics in eastern China, resulting in a 36% underestimation of the measured ambient scattering coefficient. MSE values in Shanghai were estimated to be 3.5 ± 0.55 m²/g for ammonium sulfate, 4.3 ± 0.63 m²/g for ammonium nitrate, and 4.5 ± 0.73 m²/g for organic matter. MSEs for the three components increased rapidly with increasing mass concentration at low aerosol mass loading, then leveled off beyond a threshold mass concentration of 12-24 μg/m³. During severe pollution episodes, particle growth from an initial peak diameter of 200-300 nm to a peak diameter of 500-600 nm accounts for the rapid increase in MSEs at high aerosol mass loading, as the particle diameter approaches the wavelength of visible light. This study provides insight into aerosol scattering properties at high aerosol concentrations and implies the necessity of MSE localization for extinction apportionment, especially in polluted regions.
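The extinction-apportionment arithmetic behind MSEs is a weighted sum of species mass concentrations. This sketch uses the Shanghai MSE central values quoted in the abstract, but the species concentrations are invented for illustration.

```python
# Mass scattering efficiencies (m^2/g) reported in the abstract for Shanghai.
MSE = {"ammonium_sulfate": 3.5, "ammonium_nitrate": 4.3, "organic_matter": 4.5}

def scattering_coefficient(concentrations_ug_m3):
    """Reconstruct the particle scattering coefficient (Mm^-1) as the
    MSE-weighted sum of species mass concentrations (ug/m^3).
    Units: (m^2/g) x (ug/m^3) = 1e-6 m^-1 = Mm^-1."""
    return sum(MSE[s] * c for s, c in concentrations_ug_m3.items())

# Hypothetical pollution-episode concentrations (ug/m^3).
b_sp = scattering_coefficient(
    {"ammonium_sulfate": 30.0, "ammonium_nitrate": 25.0, "organic_matter": 20.0}
)
```

Using the lower IMPROVE default efficiencies in place of the localized values is what produces the underestimation the study reports.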
Fan, L Q; Bailey, D R; Shannon, N H
1995-02-01
Postweaning gain performance and individual feed intake of 271 Hereford and 263 Angus bulls were recorded during three 168-d test periods from 1984 to 1986. Each breed was composed of two lines, and within each breed bulls were fed either a high-energy (HD) or a medium-energy (MD) diet. Energy intake was partitioned into energy for maintenance and growth based on predicted individual animal requirements. Estimates of heritability were obtained using Restricted Maximum Likelihood with an individual animal model including fixed effects of year and diet, covariates of initial weight and backfat change by breed, and line effects for the overall data. Bulls fed the HD diet grew faster and had higher metabolizable energy intake per day (MEI), residual feed consumption (RFC), and gross and net feed efficiency (FE and NFE). Estimates of heritability for Hereford and Angus bulls, respectively, were .46 and .16 for 200-d weaning weight (WWT), .16 and .43 for average daily gain (ADG), .19 and .31 for intake per day (MEI), .43 and .45 for yearling weight (YWT), .07 and .23 for RFC, .08 and .35 for FE, and .14 and .28 for NFE. Genetic and phenotypic correlations between MEI and ADG, MEI and YWT, ADG and YWT, ADG and FE, YWT and FE, and FE and NFE were moderately to highly positive for both breeds. Negative genetic and phenotypic correlations between NFE and ADG reflect partial correlations of FE with ADG after accounting for the energy requirement for maintenance. Residual feed consumption was negatively associated with YWT, FE, and NFE, indicating possible scope for genetic improvement.
Schickling, A.; Pinto, F.; Schween, J.; Damm, A.; Crewell, S.; Rascher, U.
2012-12-01
Remote sensing offers a unique possibility for the spatio-temporal investigation of carbon uptake by plant photosynthesis, commonly referred to as gross primary production. Remote sensing approaches used to quantify gross primary production are based on the light-use efficiency model of Monteith, which relates gross primary production to the absorbed photosynthetically active radiation and the efficiency with which plants utilize this radiation for photosynthesis, the light-use efficiency. Assuming that the absorbed photosynthetically active radiation can be reliably derived from optical measurements, the estimation of the highly variable light-use efficiency remains challenging. In recent years, however, several studies have identified sun-induced chlorophyll fluorescence as a good proxy for light-use efficiency. Here we present a novel experimental setup to quantify spatio-temporal patterns of light-use efficiency based on monitoring canopy sun-induced chlorophyll fluorescence. A fully automated long-term monitoring system was developed to record diurnal courses of sun-induced fluorescence of different agricultural crops and grassland. Time series from the automated system were used to evaluate temporal variations of sun-induced fluorescence and gross primary production in different ecosystems. In the near future, the spatial distribution of sun-induced chlorophyll fluorescence at regional scale will be evaluated using a novel hyperspectral imaging spectrometer (HyPlant) operated from an airborne platform. We will present preliminary results from this novel spectrometer obtained during the 2012 vegetation period.
DEFF Research Database (Denmark)
Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente
2003-01-01
Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected b...
Directory of Open Access Journals (Sweden)
Daud Jones Kachamba
2017-06-01
Full Text Available Applications of unmanned aircraft systems (UASs) to assist in forest inventories have provided promising results in biomass estimation for different forest types. Recent studies demonstrating the use of different types of remotely sensed data to assist in biomass estimation have shown that the accuracy and precision of estimates are influenced by the size of the field sample plots used to obtain reference values for biomass. The objective of this case study was to assess the influence of sample plot size on the efficiency of UAS-assisted biomass estimates in the dry tropical miombo woodlands of Malawi. The results of a design-based field sample inventory assisted by three-dimensional point clouds obtained from aerial imagery acquired with a UAS showed that the root mean square errors as well as the standard error estimates of mean biomass decreased as sample plot sizes increased. Furthermore, relative efficiency values over different sample plot sizes were above 1.0 in a design-based and model-assisted inferential framework, indicating that UAS-assisted inventories were more efficient than purely field-based inventories. The results on relative costs for UAS-assisted and purely field-based sample plot inventories revealed a trade-off between inventory costs and required precision. For example, in our study, if a standard error of less than approximately 3 Mg ha−1 was targeted, then a UAS-assisted forest inventory should be applied to ensure more cost-effective and precise estimates. Future studies should therefore focus on finding optimum plot sizes for particular applications, for example in projects under the Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation, sustainable management of forests and enhancement of carbon stocks (REDD+) mechanism, at different geographical scales.
Energy Technology Data Exchange (ETDEWEB)
Stuehrenberg, Lowell; Johnson, Orlay W.
1990-03-01
During 1988, the National Marine Fisheries Service (NMFS) began a 2-year study to address possible sources of error in determining collection efficiency at McNary Dam. We addressed four objectives: determine whether fish from the Columbia and Snake Rivers mix as they migrate to McNary Dam; determine whether Columbia and Snake River stocks are collected at the same rates; assess whether the time of day fish are released influences their recovery rate; and determine whether guided fish used in collection efficiency estimates tend to bias results. 7 refs., 12 figs., 4 tabs.
Geel, C.; Versluis, W.; Snel, J.F.H.
1997-01-01
The relation between photosynthetic oxygen evolution and Photosystem II electron transport was investigated for the marine algae Phaeodactylum tricornutum, Dunaliella tertiolecta, Tetraselmis sp., Isochrysis sp. and Rhodomonas sp. The rate of Photosystem II electron transport was estimated
Directory of Open Access Journals (Sweden)
Seog-Chan Oh
2014-09-01
Full Text Available The car manufacturing industry, one of the largest energy-consuming industries, has been making a considerable effort to improve its energy intensity by implementing energy efficiency programs, in many cases supported by government research or financial programs. While many car manufacturers claim that they have made substantial progress in energy efficiency improvement over the past years through their energy efficiency programs, the objective measurement of energy efficiency improvement has not been studied due to the lack of suitable quantitative methods. This paper proposes stochastic and deterministic frontier benchmarking models, the stochastic frontier analysis (SFA) model and the data envelopment analysis (DEA) model, to measure the effectiveness of energy saving initiatives in terms of the technical improvement of energy efficiency for the automotive industry, particularly vehicle assembly plants. Illustrative examples of the application of the proposed models are presented and demonstrate the overall benchmarking process to determine best-practice frontier lines and to measure technical improvement based on the magnitude of frontier line shifts over time. Log-likelihood ratio and Spearman rank-order correlation coefficient tests are conducted to determine the significance of the SFA model and its consistency with the DEA model. ENERGY STAR® EPI (Energy Performance Index) values are also calculated.
Directory of Open Access Journals (Sweden)
Edinam Dope Setsoafia
2017-01-01
Full Text Available This study evaluated the profit efficiency of artisanal fishing in the Pru District of Ghana by explicitly computing the profit efficiency level, identifying the sources of profit inefficiency, and examining the constraints of artisanal fisheries. Cross-sectional data were obtained from 120 small-scale fishing households using a semistructured questionnaire. The stochastic profit frontier model was used to compute the profit efficiency level and identify the determinants of profit inefficiency, while the Garrett ranking technique was used to rank the constraints. The average profit efficiency level was 81.66%, which implies that about 82% of the prospective maximum profit was gained due to production efficiency; only 18% of the potential profit was lost due to the fishers' inefficiency. Also, the age of the household head and household size increase the inefficiency level, while experience in artisanal fishing tends to decrease it. From the Garrett ranking, access to credit to fully operate the small-scale fishing business was ranked as the most pressing issue, followed by unstable prices, while perishability was ranked last among the constraints. The study therefore recommends that group formation be encouraged to enable easy access to loans and contract sales to boost profitability.
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2016-04-01
An ESPRIT-like algorithm (estimation of signal parameters via rotational invariance techniques) is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to construct real-valued bistatic MIMO radar array data composed of sine and cosine components. The receiving/transmitting selective matrices are then constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, symmetrical mirror-angle ambiguity may occur. Finally, a maximum likelihood function is used to resolve the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm saves about 75% of the computational load owing to the real-valued ESPRIT algorithm. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.
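For orientation, here is the standard complex-valued ESPRIT on a uniform linear array, the baseline the paper's real-valued bistatic MIMO variant improves on. The array size, noise level, and source angle are illustrative assumptions.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Standard ESPRIT DOA for a uniform linear array with element
    spacing d (in wavelengths). X: (n_antennas, n_snapshots) data."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues ascending
    Es = eigvecs[:, -n_sources:]             # signal subspace
    # Rotational invariance between the two overlapping subarrays.
    phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    omega = np.angle(np.linalg.eigvals(phi))
    return np.degrees(np.arcsin(omega / (2 * np.pi * d)))

rng = np.random.default_rng(0)
n_ant, n_snap, theta = 8, 200, 20.0          # one source at 20 degrees
a = np.exp(1j * 2 * np.pi * 0.5 * np.arange(n_ant) * np.sin(np.radians(theta)))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a, s) + 0.01 * (rng.standard_normal((n_ant, n_snap))
                             + 1j * rng.standard_normal((n_ant, n_snap)))
est = esprit_doa(X, n_sources=1)
```

The complex eigendecomposition here is exactly the cost the paper's real-valued construction avoids; the mirror-angle ambiguity it must then resolve does not arise in this complex-valued baseline.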
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given r...
Arsenault, Clement
1998-01-01
Discusses the adoption of the pinyin Romanization standard over Wade-Giles and considers the impact on retrieval in online library catalogs. Describes an investigation that tested three factors that could influence retrieval efficiency: the number of usable syllables, the average number of letters per syllable, and users' familiarity with the…
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
H. Macharia (Harrison); P.P. Goos (Peter)
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate and thus make it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the
Total-Factor Energy Efficiency in BRI Countries: An Estimation Based on Three-Stage DEA Model
Directory of Open Access Journals (Sweden)
Changhong Zhao
2018-01-01
Full Text Available The Belt and Road Initiative (BRI) is showing its great influence on and leadership in international energy cooperation. Based on the three-stage DEA model, total-factor energy efficiency (TFEE) in 35 BRI countries in 2015 was measured in this article. The three-stage DEA model can eliminate environmental-variable and random errors, which makes the result better than the traditional DEA model. When environmental-variable and random errors were eliminated, the mean value of TFEE declined, demonstrating that the TFEE of the whole sample group was overestimated because of external environmental impacts and random errors. The TFEE indicators of high-income countries such as South Korea, Singapore, Israel and Turkey are 1, placing them on the efficiency frontier. The TFEE indicators of Russia, Saudi Arabia, Poland and China are over 0.8, and the indicators of Uzbekistan, Ukraine, South Africa and Bulgaria are at a low level. The potential for energy saving and emissions reduction is great in countries with low TFEE indicators. Because of the gap in energy efficiency, it is necessary to differentiate countries in energy technology options, development planning and regulation across BRI countries.
Kozai, Toyoki
2013-01-01
Extensive research has recently been conducted on plant factories with artificial light, one type of closed plant production system (CPPS) consisting of a thermally insulated and airtight structure, a multi-tier system with lighting devices, air conditioners and fans, a CO2 supply unit, a nutrient solution supply unit, and an environment control unit. One of the research outcomes is the concept of resource use efficiency (RUE) of the CPPS. This paper reviews the characteristics of the CPPS compared with those of the greenhouse, mainly from the viewpoint of RUE, which is defined as the ratio of the amount of a resource fixed or held in plants to the amount of the resource supplied to the CPPS. It is shown that the use efficiencies of water, CO2 and light energy are considerably higher in the CPPS than in the greenhouse. On the other hand, there is much more room for improving the light and electric energy use efficiencies of the CPPS. Challenging issues for CPPS and RUE are also discussed.
Legg, P A; Rosin, P L; Marshall, D; Morgan, J E
2013-01-01
Mutual information (MI) is a popular similarity measure for performing image registration between different modalities. MI makes a statistical comparison between two images by computing the entropy from the probability distribution of the data. Therefore, to obtain an accurate registration it is important to have an accurate estimation of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, with the aim of improving the estimation by means of optimal histogram bin size selection. This provokes the common question of how many bins should actually be used when constructing a histogram. There is no definitive answer to this. This question itself has received little attention in the MI literature, and yet this issue is critical to the effectiveness of the algorithm. The purpose of this paper is to highlight this fundamental element of the MI algorithm. We present a comprehensive study that introduces methods from statistics literature and incorporates these for image registration. We demonstrate this work for registration of multi-modal retinal images: colour fundus photographs and scanning laser ophthalmoscope images. The registration of these modalities offers significant enhancement to early glaucoma detection, however traditional registration techniques fail to perform sufficiently well. We find that adaptive probability density estimation heavily impacts on registration accuracy and runtime, improving over traditional binning techniques. Copyright © 2013 Elsevier Ltd. All rights reserved.
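The bin-selection question the paper raises can be made concrete with the Freedman-Diaconis rule and a plain histogram-based MI estimate. The data below are synthetic; real registration would compute MI between image intensity pairs rather than random vectors.

```python
import numpy as np

def freedman_diaconis_bins(x):
    """Freedman-Diaconis rule: bin width = 2 * IQR * n^(-1/3)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    width = 2 * iqr * len(x) ** (-1 / 3)
    return max(1, int(np.ceil((x.max() - x.min()) / width)))

def mutual_information(x, y, bins):
    """MI (bits) from a joint histogram: I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return H(px) + H(py) - H(pxy.ravel())

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
noisy = x + 0.1 * rng.standard_normal(5000)   # strongly dependent pair
indep = rng.standard_normal(5000)             # independent pair
k = freedman_diaconis_bins(x)
mi_dep = mutual_information(x, noisy, k)
mi_ind = mutual_information(x, indep, k)
```

The residual positive MI for the independent pair is the finite-sample histogram bias; how strongly that bias (and the MI surface used for registration) depends on `k` is precisely the paper's point.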
Latypov, A. F.
2008-12-01
Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. A model of gliding flight above cold air in an infinite isobaric thermal wake was used, and fuel consumption rates were compared along the optimal trajectory. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of the aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of the combustion products. Dependencies were obtained for the increase of the range coefficient of cruise flight for different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and computational results for the reduction of fuel consumption along the boost trajectory for a given value of the energy supplied in front of the aircraft.
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second-law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for the analysis is obtained from a user-interactive integrated model of an ECLSS developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be to evaluate existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst estimated the power consumption of partially closed and completely closed regenerable life support systems at 3.5 kW/individual and 10-12 kW/individual, respectively. With the increasing cost and scarcity of energy resources, attention is drawn to evaluating existing ECLSS technologies on the basis of their energy efficiency. In general, the first-law efficiency of a system is usually greater than 50 percent, while from the literature the second-law efficiency is usually about 10 percent. The second-law efficiency of a system indicates the percentage of energy degraded as irreversibilities within the process, so this estimate offers more room for improvement in the design of equipment. From another perspective, the objective is to keep the total entropy production of a life support system as low as possible while still ensuring a positive entropy gradient between the system and the surroundings. The reason is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the
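The 'lost work' bookkeeping above rests on the Gouy-Stodola relation (lost work = T0 x entropy generation) and the second-law efficiency as the ratio of minimum (reversible) work to actual work. The numbers below for a hypothetical ECLSS unit are invented for illustration.

```python
def lost_work(T0_kelvin, entropy_generation_kW_per_K):
    """Gouy-Stodola: work destroyed by irreversibility = T0 * S_gen (kW)."""
    return T0_kelvin * entropy_generation_kW_per_K

def second_law_efficiency(ideal_work, actual_work):
    """Fraction of the work input doing thermodynamically necessary duty."""
    return ideal_work / actual_work

# Hypothetical ECLSS subsystem: 10 kW drawn, 1 kW minimum reversible work,
# so about 9 kW is destroyed; S_gen chosen to match at T0 = 298.15 K.
eta_II = second_law_efficiency(ideal_work=1.0, actual_work=10.0)
W_lost = lost_work(T0_kelvin=298.15, entropy_generation_kW_per_K=0.0302)
```

A 10% second-law efficiency, as the abstract cites from the literature, means 90% of the work input is degraded; ranking subsystems by `W_lost` is the triage the paper proposes.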
Energy Technology Data Exchange (ETDEWEB)
Bengel, F.M.; Nekolla, S.; Schwaiger, M. [Technische Univ. Muenchen (Germany). Nuklearmedizinische Klinik und Poliklinik; Permanetter, B. [Abteilung Innere Medizin, Kreiskrankenhaus Wasserburg/Inn (Germany); Ungerer, M. [Technische Univ. Muenchen (Germany). 1. Medizinische Klinik und Poliklinik
2000-03-01
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with ¹¹C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated as SWI = systolic blood pressure × stroke volume / body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI × heart rate / k(mono), where k(mono) is the washout constant for ¹¹C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19% ± 10% and end-diastolic volume was 92 ± 28 ml/m² (vs 64% ± 7% and 55 ± 8 ml/m² in normals, P<0.001). Myocardial oxidative metabolism, reflected by k(mono), was significantly lower than in normals (0.040 ± 0.011/min vs 0.060 ± 0.015/min; P<0.003). The SWI (1674 ± 761 vs 4736 ± 895 mmHg·ml/m²; P<0.001) and the WMI as an estimate of efficiency (2.98 ± 1.30 vs 6.20 ± 2.25 × 10⁶ mmHg·ml/m²; P<0.001) were also lower in DCM patients. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond to increasing ventricular volume with an increase in efficiency.
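The two indices are simple ratios, and can be sketched directly from the definitions in the abstract. The individual subject values below are illustrative, chosen to fall in the range reported for normals.

```python
def stroke_work_index(systolic_bp_mmHg, stroke_volume_ml, bsa_m2):
    """SWI = systolic blood pressure x stroke volume / body surface area."""
    return systolic_bp_mmHg * stroke_volume_ml / bsa_m2

def work_metabolic_index(swi, heart_rate_per_min, k_mono_per_min):
    """WMI = SWI x heart rate / k(mono): stroke work delivered per unit
    of oxidative metabolism, the paper's efficiency estimate."""
    return swi * heart_rate_per_min / k_mono_per_min

# Hypothetical normal subject: BP 120 mmHg, SV 75 ml, BSA 1.9 m^2,
# HR 65/min, k(mono) 0.060/min (the normals' mean washout constant).
swi = stroke_work_index(systolic_bp_mmHg=120, stroke_volume_ml=75, bsa_m2=1.9)
wmi = work_metabolic_index(swi, heart_rate_per_min=65, k_mono_per_min=0.060)
```

With these inputs the SWI lands near the normals' mean of about 4736 mmHg·ml/m², and the WMI in the 10⁶ range reported; the lower k(mono) and SWI of DCM patients push their WMI down, as the study found.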
Directory of Open Access Journals (Sweden)
Yury G. Odegov
2016-01-01
Full Text Available Under increasing competition, the problem of improving a company's performance becomes considerably more pressing, since that performance depends directly on the efficiency of each employee's work and on the business model the organization implements. The aim of this research is therefore to analyze existing indicators for evaluating the performance both of the individual employee and of the organization's business model. The theoretical basis of the study consists of principles of economic theory and the work of Russian and foreign experts in the field of job evaluation. The information base comprises economic and legal literature on the problems of this study, data published in periodicals, materials of Russian scientific conferences and seminars, and Internet resources. The article applies scientific methods of data collection and research, together with methods for assessing their reliability: quantitative and comparative methods, logical analysis and synthesis. Modern business's concern with accumulating shareholder wealth and giving the company stability, growth and efficiency inevitably leads to the creation and development of technologies aimed at improving employee productivity. The paper presents a comparative analysis of different approaches to assessing labour effectiveness. Work performance is the ratio of four essential parameters that determine the efficiency of a person's activity: the quantity and quality of the result of work (a service, material product or technology) in relation to the time spent and the cost of its production. Employees should be used ("performance") in such a way that they can achieve the planned results in the workplace. The authors note that, to develop technologies for measuring productivity, it is very important to use procedures and indicators that are
Akhmetova, I. G.; Chichirova, N. D.
2017-11-01
When conducting an energy survey of a heat supply enterprise that operates several boilers located close to one another, it is advisable to assess the heat supply efficiency of each individual boiler and the possibility of reducing energy consumption across the enterprise by switching consumers to a more efficient source and closing inefficient boilers. The temporal dynamics of prospective load connection and changes in market conditions must also be considered. This problem can be solved by calculating the radius of effective heat supply from each thermal energy source. The disadvantage of existing methods is their high complexity and the need to collect large amounts of source data and perform a significant amount of computation. When an energy survey covers an enterprise operating a large number of thermal energy sources, a rapid assessment of the effective heating radius is required. Given the specifics and objectives of an energy survey, a method for calculating the effective heat supply radius for use during an energy audit should rely on data that the heat supply organization makes openly available and should minimize effort, while its results should match those obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of cost for generation and transmission of thermal energy and the capital investment needed to connect new consumers were determined. The results were compared with values obtained with previously known methods. The suggested express method determines the effective radius of centralized heat supply from heat sources during energy audits with minimum effort and the required accuracy.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. John Wiley & Sons, Ltd.
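The variance decomposition the authors exploit can be illustrated with a toy patient-level model. Everything below (the model, its distributions, the known within-run variance) is a hypothetical sketch, not the paper's method: the point is only that the total variance of a run mean splits into a between-run (parameter uncertainty) term plus a within-run term shrinking as 1/N.

```python
import random

random.seed(1)

SIGMA_W = 5.0   # patient-level (within-run) std. dev., assumed known in this toy example

def run_model(theta, n_patients):
    # Toy patient-level simulation: mean outcome of n_patients individuals,
    # each equal to the sampled input 'theta' plus patient-level noise.
    return sum(theta + random.gauss(0.0, SIGMA_W) for _ in range(n_patients)) / n_patients

M, N = 200, 100                      # PSA runs x patients per run
outputs = []
for _ in range(M):
    theta = random.gauss(10.0, 2.0)  # parameter-uncertainty draw (the PSA input)
    outputs.append(run_model(theta, N))

mean_out = sum(outputs) / M
var_total = sum((y - mean_out) ** 2 for y in outputs) / (M - 1)
# ANOVA identity: Var(run mean) = sigma_b^2 + sigma_w^2 / N, so the
# parameter-uncertainty component sigma_b^2 is recovered by subtraction.
sigma_b2_hat = var_total - SIGMA_W ** 2 / N
```

With the budget M × N fixed, the same identity is what lets one trade off runs against patients per run to minimize the error of the PSA estimates.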
Directory of Open Access Journals (Sweden)
Musakhanov A.K.
2012-12-01
Full Text Available The paper considers how efficiently young judoists master grip-fighting technique, comparing two directions: strictly regulated exercise methods and game-based methods. Twenty-eight judoists aged 8-10 years took part in the two-week experiment. One group of boys played a game of snatching ribbons (clothes-pins and bandages) fastened to the opponent's kimono. The second group practised taking basic grips and held training bouts with grip-taking tasks. The training program contained both game-based methods and strictly regulated exercises. Comparison of the training programs revealed how each specifically affects different indexes of grip-fighting technique. For training grip-fighting technique, the combined use of strictly regulated exercise methods and game-based methods is recommended.
Directory of Open Access Journals (Sweden)
Riesgo Ana
2012-11-01
Full Text Available Abstract Introduction Traditionally, genomic or transcriptomic data have been restricted to a few model or emerging model organisms, and to a handful of species of medical and/or environmental importance. Next-generation sequencing techniques have the capability of yielding massive amounts of gene sequence data for virtually any species at a modest cost. Here we provide a comparative analysis of de novo assembled transcriptomic data for ten non-model species of previously understudied animal taxa. Results cDNA libraries of ten species belonging to five animal phyla (2 Annelida [including Sipuncula], 2 Arthropoda, 2 Mollusca, 2 Nemertea, and 2 Porifera) were sequenced in different batches with an Illumina Genome Analyzer II (read length 100 or 150 bp), rendering between ca. 25 and 52 million reads per species. Read thinning, trimming, and de novo assembly were performed under different parameters to optimize output. Between 67,423 and 207,559 contigs were obtained across the ten species, post-optimization. Of those, 9,069 to 25,681 contigs retrieved blast hits against the NCBI non-redundant database, and approximately 50% of these were assigned with Gene Ontology terms, covering all major categories, and with similar percentages in all species. Local blasts against our datasets, using selected genes from major signaling pathways and housekeeping genes, revealed high efficiency in gene recovery compared to available genomes of closely related species. Intriguingly, our transcriptomic datasets detected multiple paralogues in all phyla and in nearly all gene pathways, including housekeeping genes that are traditionally used in phylogenetic applications for their purported single-copy nature. Conclusions We generated the first study of comparative transcriptomics across multiple animal phyla (comparing two species per phylum in most cases), established the first Illumina-based transcriptomic datasets for sponge, nemertean, and sipunculan species, and
Riesgo, Ana; Andrade, Sónia C S; Sharma, Prashant P; Novo, Marta; Pérez-Porro, Alicia R; Vahtera, Varpu; González, Vanessa L; Kawauchi, Gisele Y; Giribet, Gonzalo
2012-11-29
Traditionally, genomic or transcriptomic data have been restricted to a few model or emerging model organisms, and to a handful of species of medical and/or environmental importance. Next-generation sequencing techniques have the capability of yielding massive amounts of gene sequence data for virtually any species at a modest cost. Here we provide a comparative analysis of de novo assembled transcriptomic data for ten non-model species of previously understudied animal taxa. cDNA libraries of ten species belonging to five animal phyla (2 Annelida [including Sipuncula], 2 Arthropoda, 2 Mollusca, 2 Nemertea, and 2 Porifera) were sequenced in different batches with an Illumina Genome Analyzer II (read length 100 or 150 bp), rendering between ca. 25 and 52 million reads per species. Read thinning, trimming, and de novo assembly were performed under different parameters to optimize output. Between 67,423 and 207,559 contigs were obtained across the ten species, post-optimization. Of those, 9,069 to 25,681 contigs retrieved blast hits against the NCBI non-redundant database, and approximately 50% of these were assigned with Gene Ontology terms, covering all major categories, and with similar percentages in all species. Local blasts against our datasets, using selected genes from major signaling pathways and housekeeping genes, revealed high efficiency in gene recovery compared to available genomes of closely related species. Intriguingly, our transcriptomic datasets detected multiple paralogues in all phyla and in nearly all gene pathways, including housekeeping genes that are traditionally used in phylogenetic applications for their purported single-copy nature. We generated the first study of comparative transcriptomics across multiple animal phyla (comparing two species per phylum in most cases), established the first Illumina-based transcriptomic datasets for sponge, nemertean, and sipunculan species, and generated a tractable catalogue of annotated genes (or gene
Chen, Y-C; Clegg, R M
2011-10-01
A spectrograph with continuous wavelength resolution has been integrated into a frequency-domain fluorescence lifetime-resolved imaging microscope (FLIM). The spectral information assists in the separation of multiple lifetime components, and helps resolve signal cross-talking that can interfere with an accurate analysis of multiple lifetime processes. This extends the number of different dyes that can be measured simultaneously in a FLIM measurement. Spectrally resolved FLIM (spectral-FLIM) also provides a means to measure more accurately the lifetime of a dim fluorescence component (as low as 2% of the total intensity) in the presence of another fluorescence component with a much higher intensity. A more reliable separation of the donor and acceptor fluorescence signals is possible for Förster resonance energy transfer (FRET) measurements; this allows more accurate determinations of both donor and acceptor lifetimes. By combining the polar plot analysis with spectral-FLIM data, the spectral dispersion of the acceptor signal can be used to derive the donor lifetime - and thereby the FRET efficiency - without iterative fitting. The lifetime relation between the donor and acceptor, in conjunction with spectral dispersion, is also used to separate the FRET pair signals from the donor-alone signal. This method can be applied further to quantify the signals from separate FRET pairs, and provide information on the dynamics of the FRET pair between different states. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
Gurauskiene, Inga; Stasiskiene, Zaneta
2011-07-01
Electrical and electronic equipment (EEE) has penetrated everyday life. The EEE industry is characterized by rapid technological change, which in turn prompts consumers to replace EEE in order to keep in step with innovations. These factors reduce the EEE life span and drive the exponential growth of the amount of obsolete EEE as well as EEE waste (e-waste). E-waste management systems implemented in countries of the European Union (EU) are not able to cope with the e-waste problem properly, especially in the new EU member countries. Analysis of particular e-waste management systems is essential for evaluating the complexity of these systems, describing and quantifying the flows of goods throughout the system, and identifying all the actors involved in it. The aim of this paper is to present research on regional agent-based material flow analysis in e-waste management systems as a means to reveal potential points for improvement. Material flow analysis has been performed on the flow of goods (EEE). The study has shown that agent-based EEE flow analysis incorporating a holistic, life-cycle-thinking approach in national e-waste management systems gives a broader view of the system than the administrative view commonly taken. It helps to evaluate the real efficiency of e-waste management systems and to identify the relevant impact factors determining the current operation of the system.
Falandysz, Jerzy
2014-12-01
The mushroom Cortinarius caperatus is one of several edible wild-grown species that are widely collected by fanciers. For specimens collected from 20 spatially distant sites in Poland, the median Hg contents of caps ranged from 0.81 to 2.4 mg kg⁻¹ dry matter, and those of stipes were 2.5-fold lower. C. caperatus efficiently accumulates Hg: the median values of the bioconcentration factor ranged from 18 to 120 for caps and from 7.3 to 47 for stipes. This mushroom, even when collected in background (uncontaminated) forested areas, could be a source of elevated Hg intake. Irregular consumption of the caps or whole fruiting bodies is not considered to pose a risk, but frequent eating of C. caperatus during the fruiting season should be avoided because of the possible health risk from Hg. Available data on the Hg contents of C. caperatus from several places in Europe are also summarized. Copyright © 2014 Elsevier Inc. All rights reserved.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year-round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is below 12°C and on deeper water to maximize capture probability; such water temperatures commonly occur from November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any effort to use catch rate to infer population trends will require either considering temperature and depth in standardized sampling designs or adjusting the estimates.
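The reported linear predictor can be turned into predicted capture probabilities with the standard inverse-logit link of logistic regression. The coefficients below are the ones quoted in the abstract; the helper function name and the example conditions are ours:

```python
import math

def capture_probability(temp_c, depth_m):
    """Capture probability from the reported model Y = -1.75 - 0.06*T + 0.10*D
    (T: water temperature in degC, D: depth in m), passed through the
    inverse-logit transform, the standard logistic-regression link."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-y))

# Colder, deeper water raises the predicted probability of capture:
cold_deep = capture_probability(10, 20)     # ~0.41
warm_shallow = capture_probability(25, 5)   # ~0.06
```

The two hypothetical condition sets mirror the abstract's sampling advice: the model rewards sampling below 12°C and in deeper water.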
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-06-01
The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate selection solely on growth traits may lead to an undesirable increase in feed intake, back fat, and
Directory of Open Access Journals (Sweden)
Tatyana V. Svishchuk
2017-06-01
manifested in the growth of the savings rate and the number of competitive procedures. Using economic-mathematical methods, the authors proved the hypothesis that the savings rate following procurement procedures increases as the number of participants in auctions and tenders increases. Based on the results of the analysis, proposals for improving the contract procurement system are formulated. Practical significance: the proposed recommendations can be used by state customers and public authorities in procurement procedures and in changes to laws and regulations in the field of public procurement, with the aim of improving the efficiency of the contract system.
Directory of Open Access Journals (Sweden)
José Boaventura Magalhães Rodrigues
2017-06-01
Full Text Available Abstract Although Overall Equipment Effectiveness (OEE) has been proven a useful tool to measure the efficiency of a single piece of equipment in a food processing plant, its concept can be expanded to assess the performance of a whole production line assembled in series. This applies to the special case in which all pieces of equipment are programmed to run at a throughput similar to that of the system's constraint. Such a procedure has the advantage of allowing simpler data collection to support an operations improvement strategy. This article presents an approach to continuous improvement adapted for food processing industries that have a limited budget and limited human resources for installing and running complex automated data collection and computing systems. It proposes the use of data collected from the packing line to mimic the whole unit's efficiency and suggests a heuristic method based on the geometric properties of OEE to define which parameters should be targeted when plotting an improvement plan. In addition, it is shown how OEE correlates with earnings, allowing the impact of continuous process improvement on business results to be calculated. The analysis of data collected in a commercial food processing unit made possible: (i) the identification of the major causes of efficiency loss by assessing the performance of packing equipment; (ii) the definition of an improvement strategy to elevate OEE from 53.9% to 74.1%; and (iii) the estimate that implementing such a strategy yields an 88% increase in net income.
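OEE is conventionally computed as the product of three loss factors: availability, performance and quality. The sketch below uses that standard decomposition with hypothetical factor values (not taken from the article) chosen only so the products land near the reported before/after figures:

```python
def oee(availability, performance, quality):
    # Standard OEE decomposition: the product of the three loss factors,
    # each expressed as a fraction in [0, 1].
    return availability * performance * quality

# Hypothetical loss factors for a packing line (illustrative, not the study's data):
before = oee(0.80, 0.75, 0.90)   # product ~0.54, near the reported 53.9%
after = oee(0.88, 0.89, 0.95)    # product ~0.744, near the reported 74.1%
improvement = after - before
```

Because OEE is a product, the same overall gain can be reached through different mixes of availability, performance and quality improvements, which is what makes a geometric treatment of OEE useful for choosing targets.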
Directory of Open Access Journals (Sweden)
Tianxiang Cui
2017-12-01
Full Text Available Accurately quantifying gross primary production (GPP) is of vital importance to understanding the global carbon cycle. Light-use efficiency (LUE) models and process-based models have been widely used to estimate GPP at different spatial and temporal scales. However, large uncertainties remain in quantifying GPP, especially for croplands. Recently, remote measurements of solar-induced chlorophyll fluorescence (SIF) have provided a new perspective to assess actual levels of plant photosynthesis. In the presented study, we evaluated the performance of three approaches, including the LUE-based multi-source data synergized quantitative (MuSyQ) GPP algorithm, the process-based boreal ecosystem productivity simulator (BEPS) model, and the SIF-based statistical model, in estimating the diurnal courses of GPP at a maize site in Zhangye, China. A field campaign was conducted to acquire synchronous far-red SIF (SIF760) observations and flux tower-based GPP measurements. Our results showed that both SIF760 and GPP were linearly correlated with APAR, and the SIF760-GPP relationship was adequately characterized using a linear function. The evaluation of the modeled GPP against the GPP measured from the tower demonstrated that all three approaches provided reasonable estimates, with R2 values of 0.702, 0.867, and 0.667 and RMSE values of 0.247, 0.153, and 0.236 mg m−2 s−1 for the MuSyQ-GPP, BEPS and SIF models, respectively. This study indicated that the BEPS model simulated the GPP best due to its efficiency in describing the underlying physiological processes of sunlit and shaded leaves. The MuSyQ-GPP model was limited by its simplification of some critical ecological processes and its weakness in characterizing the contribution of shaded leaves. The SIF760-based model demonstrated a relatively limited accuracy but showed its potential in modeling GPP without dependency on climate inputs in short-term studies.
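The R² and RMSE figures used to rank the three models can be reproduced for any paired observed/predicted series with their textbook definitions. The series below are illustrative stand-ins, not the study's measurements:

```python
import math

def r2_rmse(obs, pred):
    """Goodness-of-fit metrics of the kind used to compare the GPP models:
    coefficient of determination (R^2) and root-mean-square error."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # total sum of squares
    return 1.0 - ss_res / ss_tot, math.sqrt(ss_res / n)

obs = [0.2, 0.5, 0.9, 1.3, 1.6]      # illustrative GPP series, mg m-2 s-1
pred = [0.25, 0.45, 0.95, 1.25, 1.55]
r2, rmse = r2_rmse(obs, pred)
```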
Directory of Open Access Journals (Sweden)
John W. Jones
2015-09-01
Full Text Available The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral
Jones, John W.
2015-01-01
The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have produced fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance with respect to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Veroustraete, F.; Verstraeten, W. W.
2004-12-01
Carbon emission and fixation fluxes are key variables to guide climate change stakeholders in the use of remediation techniques as well as in the follow-up of the Kyoto Protocol. A common approach to estimating forest carbon fluxes is based on the forest harvest inventory approach. However, harvest and logging inventories have their limitations in time and space. Moreover, carbon inventories are limited to the estimation of net primary productivity (NPP). Additionally, inventory-based methods provide no information on the magnitude of water limitation. Finally, natural forest ecosystems are rarely included in inventory-based methods. To develop a Kyoto Protocol policy support tool, a good perspective towards a generalised and methodologically consistent application is offered by expert systems based on satellite remote sensing. They estimate vegetation carbon fixation using a minimum of meteorological inputs and overcome the limitations mentioned for inventory-based methods. The core module of a typical expert system is a production efficiency model; in our case we used the C-Fix model. C-Fix estimates carbon mass fluxes, e.g. gross primary productivity (GPP), NPP and net ecosystem productivity (NEP), for various spatial scales and regions of interest (ROIs). Besides meteorological inputs, the C-Fix model is fed with data obtained by vegetation RTM (radiative transfer model) inversion. The inversion is based on the use of look-up tables (LUTs). The LUT allows the extraction of per-pixel biome type (e.g. forest) frequencies and the value of a biophysical variable and its uncertainty at the pixel level. The extraction by RTM inversion also allows a fuzzy land cover classification based on six major biomes. At the same time fAPAR is extracted and its uncertainty quantified. Based on the biome classification, radiation use efficiencies are stratified according to biome type to be used in C-Fix. Water limitation is incorporated both at the GPP level
Estimation of the efficiency of project management
Directory of Open Access Journals (Sweden)
Novotorov Vladimir Yurevich
2011-03-01
Full Text Available In modern conditions, the effectiveness of enterprises depends to an ever greater degree on management methods and forms of business dealing. Organizations should choose the management strategy most effective for them, taking into account the existing legislation, the concrete conditions of their activity, their financial, economic and investment potential, and their development strategy. Introducing a common system for planning and implementing the organization's strategy will ensure the steady development and long-term social and economic growth of the companies.
Directory of Open Access Journals (Sweden)
Ashok Sahai
2016-02-01
Full Text Available This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population "Coefficient of Variation (C.V.)" is 'Rather-Very-Large' though unknown, using a small sample (sample size ≤ 30). The paper proposes an "Efficient Iterative Estimation Algorithm exploiting the sample C.V. for efficient normal mean estimation". The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of population parameters, and hence are not amenable to an analytical study determining the extent of gain in their relative efficiencies with respect to the Usual Unbiased Estimator (the sample mean, say 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the Usual Unbiased Estimator by means of an illustrative simulation empirical study. MATLAB 7.7.0.471 (R2008b) is used in programming this illustrative 'Simulated Empirical Numerical Study'. DOI: 10.15181/csat.v4i1.1091
Rosa, Filipa; Sales, Kevin C; Cunha, Bernardo R; Couto, Andreia; Lopes, Marta B; Calado, Cecília R C
2015-10-01
Reporter genes are routinely used in every molecular and cellular biology laboratory for studying heterologous gene expression and general cellular biological mechanisms, such as transfection processes. Although well characterized and broadly implemented, reporter genes present serious limitations, either by involving time-consuming procedures or by presenting possible side effects on the expression of the heterologous gene or even on the general cellular metabolism. Fourier transform mid-infrared (FT-MIR) spectroscopy was evaluated to simultaneously analyze, in a rapid (minutes) and high-throughput mode (using 96-well microplates), the transfection efficiency and the effect of the transfection process on the host cell's biochemical composition and metabolism. Semi-adherent HEK and adherent AGS cell lines, transfected with the plasmid pVAX-GFP using Lipofectamine, were used as model systems. Good partial least squares (PLS) models were built to estimate the transfection efficiency, either considering each cell line independently (R² ≥ 0.92; RMSECV ≤ 2%) or considering both cell lines simultaneously (R² = 0.90; RMSECV = 2%). Additionally, the effect of the transfection process on the HEK cell biochemical and metabolic features could be evaluated directly from the FT-IR spectra. Due to the high sensitivity of the technique, it was also possible to discriminate the effect of the transfection process from that of the transfection reagent on HEK cells, e.g., by the analysis of spectral biomarkers and biochemical and metabolic features. The present results are far beyond what any reporter gene assay or other specific probe can offer for these purposes.
Zhang, Q.; Middleton, E.; Margolis, H.; Drolet, G.; Barr, A.; Black, T.
2008-12-01
We used daily MODIS imagery obtained over 2001-2005 to analyze the seasonal and interannual photosynthetic light use efficiency (LUE) of the Southern Old Aspen (SOA) flux tower site located near the southern limit of the boreal forest in Saskatchewan, Canada. This forest stand extends for at least 3 km in all directions from the flux tower. The MODIS daily reflectance products have a resolution of 500 m at nadir and > 500 m off-nadir. To obtain the spectral characteristics of a standardized land area to compare with tower measurements, we scaled up the nominal 500 m MODIS products to an area of 2.5 km × 2.5 km (5×5 MODIS 500 m grid cells). We then used the 5×5 scaled-up MODIS products in a coupled canopy-leaf radiative transfer model, PROSAIL-2, to estimate the fraction of photosynthetically active radiation (PAR) absorbed by the photosynthetically active part of the canopy dominated by chlorophyll (FAPARchl) versus that absorbed by the whole canopy (FAPARcanopy). From the tower measurements, we determined 90-minute averages of APAR and LUE for the physiologically active foliage (APARchl, LUEchl) and for the entire canopy (APARcanopy, LUEcanopy). The flux tower measurements of GEP were strongly related to the MODIS-derived estimates of APARchl (r² = 0.78) but weakly related to APARcanopy (r² = 0.33). Gross LUE (the slope of GEP:APAR) between 2001 and 2005 was 0.0241 μmol C μmol⁻¹ PPFD for LUEchl, whereas LUEcanopy was 36% lower. Inter-annual variability in growing-season (DOY 152-259) LUEchl (μmol C μmol⁻¹ PPFD) ranged from 0.0225 in 2003 to 0.0310 in 2004. The five-year time series of growing-season LUEchl corresponded well with both the seasonal phase and the amplitude of LUE from the tower measurements. We conclude that LUEchl derived from MODIS observations could provide a useful input to land surface models for improved estimates of ecosystem carbon dynamics.
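The gross LUE reported above is defined as the slope of GEP against APAR. A minimal sketch of that zero-intercept least-squares fit, with made-up flux values standing in for the tower data:

```python
def lue_slope(apar, gep):
    """Zero-intercept least-squares slope of GEP on APAR:
    slope = sum(x*y) / sum(x*x), interpretable as gross LUE."""
    sxy = sum(x * y for x, y in zip(apar, gep))
    sxx = sum(x * x for x in apar)
    return sxy / sxx

# Hypothetical 90-minute averages (units arbitrary for the sketch);
# here GEP is exactly 0.024 * APAR, so the slope recovers 0.024.
apar = [100.0, 400.0, 800.0, 1200.0]
gep = [2.4, 9.6, 19.2, 28.8]
print(lue_slope(apar, gep))  # 0.024
```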
Cabrera-Bosquet, Llorenç; Fournier, Christian; Brichet, Nicolas; Welcker, Claude; Suard, Benoît; Tardieu, François
2016-10-01
Light interception and radiation-use efficiency (RUE) are essential components of plant performance. Their genetic dissection requires novel high-throughput phenotyping methods. We have developed a suite of methods to evaluate: the spatial distribution of incident light, as experienced by hundreds of plants in a glasshouse, by simulating sunbeam trajectories through glasshouse structures every day of the year; the amount of light intercepted by maize (Zea mays) plants, via a functional-structural model using three-dimensional (3D) reconstructions of each plant placed in a virtual scene reproducing the canopy in the glasshouse; and RUE, as the ratio of plant biomass to intercepted light. The spatial variation of direct and diffuse incident light in the glasshouse (up to 24%) was correctly predicted at the single-plant scale. Light interception varied largely between maize lines that differed in leaf angles (nearly stable between experiments) and leaf area (highly variable between experiments). Estimated RUEs varied between maize lines but were similar in two experiments with contrasting incident light. They closely correlated with measured gas exchanges. The methods proposed here identified reproducible traits that might be used in further field studies, thereby opening up the way for large-scale genetic analyses of the components of plant performance. © 2016 INRA New Phytologist © 2016 New Phytologist Trust.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency-gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency-gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency-gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights that made extremely large contributions to the scored absorbed-dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
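The bootstrap step described above (simulate the distribution of the gain estimate, then take the shortest interval containing 95% of it) can be sketched as follows. The per-run gain values and the resampling scheme here are illustrative assumptions, not the authors' implementation:

```python
import random

def shortest_95ci(estimates):
    """Shortest interval containing 95% of the given estimates."""
    xs = sorted(estimates)
    n = len(xs)
    k = max(1, int(0.95 * n))  # number of points inside the interval
    # Slide a window of k points and keep the narrowest one
    width, i = min((xs[i + k - 1] - xs[i], i) for i in range(n - k + 1))
    return xs[i], xs[i + k - 1]

def bootstrap_gain_ci(gains, n_boot=2000, seed=1):
    """Resample per-run efficiency-gain estimates with replacement and
    return the shortest 95% CI of the bootstrap mean gain."""
    rng = random.Random(seed)
    n = len(gains)
    means = [sum(rng.choice(gains) for _ in range(n)) / n
             for _ in range(n_boot)]
    return shortest_95ci(means)

# Hypothetical efficiency-gain estimates from repeated runs
gains = [2.1, 1.8, 2.4, 2.0, 1.9, 2.6, 2.2, 1.7, 2.3, 2.0]
lo, hi = bootstrap_gain_ci(gains)
print(lo, hi)  # an interval around the sample mean gain of 2.1
```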
Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar
2014-01-01
Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating the restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery can produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage inflicted by drag forces, or moisture on the leaves introduce noise effects that reduce precision. Available techniques for dealing with the noise caused by moisture on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selecting an interval of tolerance of similarity for efficient measurement requires extended computational routines with tied statistical inferences, making the associated tasks complicated and time consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed at dealing with any sort of noise modifying eelgrass leaf images. Moreover, this selection criterion relies on a single statistic: the calculation of the maximum value of the Concordance Correlation Coefficient for the reproducibility of observed leaf areas through proxies obtained from digital images. Available data reveal that the present method delivers simplified, consistent estimations of the areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal
Zheng, T.; Chen, J. M.
2016-12-01
The maximum carboxylation rate (Vcmax), despite its importance in terrestrial carbon cycle modelling, remains challenging to obtain for large scales. In this study, an attempt has been made to invert Vcmax using the gross primary productivity from sunlit leaves (GPPsun), on the physiological basis that the photosynthesis rate of leaves exposed to high solar radiation is mainly determined by Vcmax. Since GPPsun can be calculated through the sunlit light use efficiency (ɛsun), the main focus becomes the acquisition of ɛsun. Previous studies using site-level reflectance observations have shown the ability of the photochemical reflectance ratio (PRR, defined as the ratio between the reflectance from an effective band centered around 531 nm and a reference band) to track the variation of ɛsun for an evergreen coniferous stand and a deciduous broadleaf stand separately, and the potential of an NDVI-corrected PRR (NPRR, defined as the product of NDVI and PRR) to yield a general expression describing the NPRR-ɛsun relationship across different plant functional types. In this study, a significant correlation (R² = 0.67, p < 0.001) between the MODIS-derived NPRR and the site-level ɛsun calculated using flux data from four Canadian flux sites was found for the year 2010. For validation purposes, ɛsun in 2009 for the same sites was calculated using the MODIS NPRR and the expression from 2010. The MODIS-derived ɛsun matches well with the flux-calculated ɛsun (R² = 0.57, p < 0.001). The same expression was then applied over a 217 × 193 km area in Saskatchewan, Canada to obtain ɛsun and thus GPPsun for the region during the growing season of 2008 (day 150 to day 260). Vcmax for the region was inverted using GPPsun and the result validated at three flux sites inside the area. The results show that the approach is able to obtain good estimates of Vcmax, with R² = 0.68 and RMSE = 8.8 μmol m⁻² s⁻¹.
National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses over sampled echo components at a rate...
Energy Technology Data Exchange (ETDEWEB)
Gonzales, John
2015-04-02
Presentation by Senior Engineer John Gonzales on Evaluating Investments in Natural Gas Vehicles and Infrastructure for Your Fleet using the Vehicle Infrastructure Cash-flow Estimation (VICE) 2.0 model.
Monson, D. J.
1978-01-01
Based on expected advances in technology, the maximum system efficiency and minimum specific mass have been calculated for closed-cycle CO and CO2 electric-discharge lasers (EDL's) and a direct solar-pumped laser in space. The efficiency calculations take into account losses from excitation gas heating, ducting frictional and turning losses, and the compressor efficiency. The mass calculations include the power source, radiator, compressor, fluids, ducting, laser channel, optics, and heat exchanger for all of the systems; and in addition the power conditioner for the EDL's and a focusing mirror for the solar-pumped laser. The results show the major component masses in each system, show which is the lightest system, and provide the necessary criteria for solar-pumped lasers to be lighter than the EDL's. Finally, the masses are compared with results from other studies for a closed-cycle CO2 gasdynamic laser (GDL) and the proposed microwave satellite solar power station (SSPS).
Energy Technology Data Exchange (ETDEWEB)
Karali, Nihan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-06-18
Increasing concerns over non-sustainable energy use and climate change spur a growing research interest in energy-efficiency potentials in critical areas such as industrial production. This paper focuses on learning-curve aspects of energy-efficiency measures in the U.S. iron and steel sector. A number of early-stage efficient technologies (i.e., emerging or demonstration technologies) are technically feasible and have the potential to make a significant contribution to energy saving and CO2 emissions reduction, but currently fall short economically. However, they may also have the potential for significant cost reduction and/or performance improvement in the future under learning effects such as 'learning-by-doing'. The investigation is carried out using ISEEM, a technology-oriented, linear optimization model. We investigated how steel demand is balanced with and without learning-curve effects, compared to a Reference scenario. The retrofit (or, in some cases, investment) costs of energy-efficient technologies decline in the scenario where the learning curve is applied. The analysis also addresses market penetration of energy-efficient technologies, energy saving, and CO2 emissions in the U.S. iron and steel sector with and without learning impacts. Accordingly, the study helps those who use energy models to better manage the price barriers that prevent diffusion of energy-efficiency technologies, better understand the market and learning system involved, predict future achievable learning rates more accurately, and project future savings via energy-efficiency technologies in the presence of learning. We conclude from our analysis that most of the existing energy-efficiency technologies currently used in the U.S. iron and steel sector are cost effective. Penetration levels increase through the years, even though there is no price reduction. However, demonstration technologies are not economically
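As a hedged illustration of the learning-curve mechanism discussed above (assuming the standard one-factor experience curve, not necessarily the exact formulation inside ISEEM), unit cost falls by a fixed fraction with each doubling of cumulative deployment:

```python
import math

def unit_cost(c0, cumulative, learning_rate):
    """One-factor experience curve: cost(n) = c0 * n**(-b), where the
    progress exponent b is chosen so that cost falls by `learning_rate`
    with every doubling of cumulative production/installations."""
    b = -math.log2(1.0 - learning_rate)
    return c0 * cumulative ** (-b)

# Hypothetical retrofit cost of an emerging technology, 20% learning rate
c0 = 100.0
for n in (1, 2, 4, 8):
    print(n, round(unit_cost(c0, n, 0.20), 1))
# each doubling cuts cost by 20%: 100.0, 80.0, 64.0, 51.2
```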
Directory of Open Access Journals (Sweden)
M. Kargar
2014-12-01
Full Text Available Soil erosion and sediment production are among the most important problems in developing countries, including Iran. In this study the applicability of four models (AOF, MUSLE-S, MUSLT and USLE-M) for estimating sediment at the event scale was investigated at the Srfiddasht Research Site, Semnan province. To this end, all required variables and inputs of the models were calculated for the watershed, and the estimates from the statistical models under consideration were compared with the measured sediment of 15 cloudbursts. The results of the Student's t-test showed that there is no significant difference (at the 1% level) between the MUSLT and MUSLE-S models and the measured sediment. On this basis it can be said that, in this study, the results from these two models estimate cloudburst sediment more accurately than the other methods. Also, model evaluation using the Nash-Sutcliffe criterion and the relative root mean squared error (RRMSE) statistic showed that the MUSLE-S and MUSLT models have higher efficiencies than the other models, and the inefficiency of the USLE-M and AOF models for estimating cloudburst sediment was confirmed at the studied research station.
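The Nash-Sutcliffe criterion used above to rank the models can be computed as below; the event sediment values are hypothetical:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; values <= 0 mean the model is no better than
    simply predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical event sediment yields: measured vs. modelled
obs = [1.2, 0.8, 2.5, 3.1, 0.5]
sim = [1.0, 0.9, 2.7, 2.8, 0.6]
print(round(nash_sutcliffe(obs, sim), 3))  # ≈ 0.963, close to a perfect fit
```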
Directory of Open Access Journals (Sweden)
Gonzalo González-Rey
2013-05-01
Full Text Available In this study, a general procedure is proposed for predicting the efficiency of cylindrical worm gears, taking into account friction losses between the conjugated flanks of the worm and the wheel gear. The procedure is based on two mathematical models developed from empirical relations and theoretical formulas presented in Technical Report ISO/TR 14521. The mathematical models evaluate worm gear efficiency as a function of the gear geometry, the application conditions, and the manufacturing characteristics of the worm and the wheel. The procedure was validated by comparison with efficiency values reported for gear units made by a company specializing in gears. Finally, using this procedure, solutions are established to the problem of improving the efficiency of these gears through rational recommendations for geometric and operating parameters. Key words: efficiency, worm gear, rational design, mathematical model, ISO/TR 14521.
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable into a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using Mid-Upper Arm Circumference (MUAC). We assessed the performance of PROBIT Method I, with the mean MUAC from the survey sample and the MUAC SD from a database of previous surveys, and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681,600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much smaller sample sizes, thus enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
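The PROBIT conversion at the heart of these methods maps a mean and SD of MUAC to the probability of falling below a cut-off under a normal model. A minimal sketch (the 125 mm cut-off is the commonly used threshold for global acute malnutrition by MUAC; the survey values are illustrative):

```python
from statistics import NormalDist

def probit_prevalence(mean_muac, sd_muac, cutoff=125.0):
    """PROBIT-style estimate: P(MUAC < cutoff) under a normal model
    with the given mean and SD (all values in mm)."""
    return NormalDist(mean_muac, sd_muac).cdf(cutoff)

# Hypothetical survey: mean MUAC 145 mm, SD 12 mm
print(round(probit_prevalence(145.0, 12.0), 4))  # ≈ 0.048, i.e. ~4.8% prevalence
```

Method I would plug in an SD taken from a database of previous surveys; Method II uses the SD observed in the sample itself.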
Latypov, A. F.
2009-03-01
The fuel economy of an aerospace plane on its boost trajectory was estimated for the case of energy supply to the free stream. The initial and final flight velocities were given. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption was compared along optimal trajectories. The calculations were done for a combined power plant consisting of a ramjet and a liquid-propellant engine. In the first part of the paper an exergy model is constructed for estimating the ramjet thrust and specific impulse. To estimate the aerodynamic drag of the aircraft, a quadratic dependence on aerodynamic lift is used. The energy for flow heating is obtained at the expense of an equivalent decrease in the exergy of the combustion products. Dependencies are obtained for the increase of the range coefficient of cruise flight at different Mach numbers. In the second part of the paper, a mathematical model is presented for the boost part of the flight trajectory of the flying vehicle, together with computational results on reducing fuel expenses along the boost trajectory for a given value of the energy supplied in front of the aircraft.
El Gharamti, Mohamad
2016-11-15
This study considers the assimilation problem of subsurface contaminants at the port of Rotterdam in the Netherlands. It involves the estimation of solute concentrations and biodegradation rates of four different chlorinated solvents. We focus on assessing the efficiency of an adaptive hybrid ensemble Kalman filter and optimal interpolation (EnKF-OI) and the exact second-order sampling formulation (EnKFESOS) for mitigating the undersampling of the estimation and observation error covariances, respectively. A multi-dimensional and multi-species reactive transport model is coupled to simulate the migration of contaminants within a Pleistocene aquifer layer located around 25 m below mean sea level. The biodegradation chain of chlorinated hydrocarbons starting from tetrachloroethene and ending with vinyl chloride is modeled under anaerobic environmental conditions for 5 decades. Yearly pseudo-concentration data are used to condition the forecast concentration and degradation rates in the presence of model and observational errors. Assimilation results demonstrate the robustness of the hybrid EnKF-OI for accurately calibrating the uncertain biodegradation rates. When implemented serially, the adaptive hybrid EnKF-OI scheme efficiently adjusts the weights of the involved covariances for each individual measurement. The EnKFESOS is shown to maintain the parameter ensemble spread much better, leading to more robust estimates of the states and parameters. On average, a well-tuned hybrid EnKF-OI and the EnKFESOS respectively suggest around 48 and 21 % improved concentration estimates, as well as around 70 and 23 % improved anaerobic degradation rates, over the standard EnKF. Incorporating large uncertainties in the flow model degrades the accuracy of the estimates of all schemes. Given that the performance of the hybrid EnKF-OI depends on the quality of the background statistics, satisfactory results were obtained only when the uncertainty imposed on the background
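The hybrid EnKF-OI idea, blending a flow-dependent ensemble covariance with a static background covariance, can be sketched with the standard linear hybrid form; the paper's adaptive per-measurement weighting is more elaborate, and the matrices below are toy values:

```python
def hybrid_covariance(ens_cov, static_cov, alpha):
    """Standard hybrid form: B = alpha * B_ensemble + (1 - alpha) * B_static.
    alpha near 1 trusts the (possibly undersampled) ensemble covariance;
    alpha near 0 trusts the static OI background."""
    n = len(ens_cov)
    return [[alpha * ens_cov[i][j] + (1 - alpha) * static_cov[i][j]
             for j in range(n)] for i in range(n)]

# Toy 2x2 covariances: a small ensemble with a spurious correlation
b_ens = [[1.0, 0.8], [0.8, 1.0]]
b_static = [[1.0, 0.0], [0.0, 1.0]]   # climatological background
print(hybrid_covariance(b_ens, b_static, 0.5))  # off-diagonals shrink to 0.4
```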
Directory of Open Access Journals (Sweden)
M. E. Gharamti
2016-11-01
Full Text Available This study considers the assimilation problem of subsurface contaminants at the port of Rotterdam in the Netherlands. It involves the estimation of solute concentrations and biodegradation rates of four different chlorinated solvents. We focus on assessing the efficiency of an adaptive hybrid ensemble Kalman filter and optimal interpolation (EnKF-OI) and the exact second-order sampling formulation (EnKFESOS) for mitigating the undersampling of the estimation and observation error covariances, respectively. A multi-dimensional and multi-species reactive transport model is coupled to simulate the migration of contaminants within a Pleistocene aquifer layer located around 25 m below mean sea level. The biodegradation chain of chlorinated hydrocarbons starting from tetrachloroethene and ending with vinyl chloride is modeled under anaerobic environmental conditions for 5 decades. Yearly pseudo-concentration data are used to condition the forecast concentration and degradation rates in the presence of model and observational errors. Assimilation results demonstrate the robustness of the hybrid EnKF-OI for accurately calibrating the uncertain biodegradation rates. When implemented serially, the adaptive hybrid EnKF-OI scheme efficiently adjusts the weights of the involved covariances for each individual measurement. The EnKFESOS is shown to maintain the parameter ensemble spread much better, leading to more robust estimates of the states and parameters. On average, a well-tuned hybrid EnKF-OI and the EnKFESOS respectively suggest around 48 and 21 % improved concentration estimates, as well as around 70 and 23 % improved anaerobic degradation rates, over the standard EnKF. Incorporating large uncertainties in the flow model degrades the accuracy of the estimates of all schemes. Given that the performance of the hybrid EnKF-OI depends on the quality of the background statistics, satisfactory results were obtained only when the uncertainty imposed on
Cohn, T.A.; DeLong, L.L.; Gilroy, E.J.; Hirsch, R.M.; Wells, D.K.
1989-01-01
This paper compares the bias and variance of three procedures that can be used with log linear regression models: the traditional rating curve estimator, a modified rating curve method, and a minimum variance unbiased estimator (MVUE). Analytical derivations of the bias and efficiency of all three estimators are presented. It is shown that for many conditions the traditional and the modified estimator can provide satisfactory estimates. However, other conditions exist where they have substantial bias and a large mean square error. These conditions commonly occur when sample sizes are small, or when loads are estimated during high-flow conditions. The MVUE, however, is unbiased and always performs nearly as well or better than the rating curve estimator or the modified estimator provided that the hypothesis of the log linear model is correct. Since an efficient unbiased estimator is available, there seems to be no reason to employ biased estimators. -from Authors
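The bias analyzed above appears when predictions from a fitted log-linear model are back-transformed to the original units. A small simulation sketch, using the familiar parametric correction exp(s²/2) as a stand-in for the paper's exact MVUE bias-correction function (an assumption: the MVUE uses a more exact finite-sample formula):

```python
import math
import random

random.seed(7)
sigma = 0.5
true_mean_log = 2.0
# True expected load under the log-normal model: exp(mu + sigma^2 / 2)
true_load = math.exp(true_mean_log + sigma ** 2 / 2)

# Simulate many small samples of log-loads and average two estimators
n, reps = 10, 20000
naive, corrected = 0.0, 0.0
for _ in range(reps):
    sample = [random.gauss(true_mean_log, sigma) for _ in range(n)]
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    naive += math.exp(m)               # naive back-transform: biased low
    corrected += math.exp(m + s2 / 2)  # parametric bias correction
naive /= reps
corrected /= reps
print(true_load, naive, corrected)  # the naive estimator underestimates
```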
Brogaard, Sara; Runnström, Micael; Seaquist, Jonathan W.
2005-03-01
Declining biological production as part of an ongoing land degradation process is considered a severe environmental problem in the dry northern and northwestern regions of China. The aim of this study is to develop and adapt a satellite-data-driven gross primary production model, the Lund University light use efficiency model (LULUE), to temperate conditions in order to map gross primary production (GPP) for the grasslands of Inner Mongolia Autonomous Region (IMAR), China, from 1982 to 1999. The water stress factor included in the original model has been complemented with two temperature stress factors. In addition, algorithms that allocate the proportions of C3/C4 photosynthetic pathways used by plants and that compute temperature-based C3 maximum efficiency values have been incorporated into the model. The applied light use efficiency (LUE) model uses time series of the Normalized Difference Vegetation Index (NDVI) and Clouds from AVHRR (CLAVR) from the 8-km resolution NOAA Pathfinder Land Data Set (PAL). Quasi-daily rainfall and monthly minimum and maximum temperatures, together with soil texture information, are used to compute water limitations to plant growth. The model treats bare soil evaporation and actual transpiration separately, a refinement that is more biophysically realistic and leads to enhanced precision in the water stress term, especially across vegetation gradients. Based on ground measurements of net primary production (NPP) at one site, the LULUE reproduces the variability of primary production better than CENTURY or NDVI alone. Mean annual GPP between 1982 and 1999 ranges from about 100 g/m² in desert regions in the west to about 4000 g/m² in the northeast of IMAR, and the coefficient of variation for GPP is highest near the margins of the deserts in the west, where rainfall is erratic. Linear trends fitted through the 18-year time series reveal that the western regions have encountered no change, while a large area in the center of the
Ishmurzin, G P
2011-01-01
This article describes the results of an observational study of the fixed combination perindopril + amlodipine prescribed to patients whose hypertension was uncontrolled by previous therapy. Fifty patients from the Kazan population, aged 32 to 92, with essential hypertension and a blood pressure level above 140/90 mm Hg were included in the study. These patients had previously taken antihypertensive drugs of different classes (including perindopril). The preparation was prescribed in different fixed dosages depending on the duration of hypertension, the number of antihypertensive drug classes previously taken, and the blood pressure level. During months 1, 2 and 3 of treatment, the physician could adjust the dosage as necessary. Three months of treatment with the combination of perindopril and amlodipine lowered systolic and diastolic arterial pressure by 38.2 and 11.6 mm Hg, respectively, and 80% of patients reached the required pressure level by the end of observation. As a result, the study confirmed the high efficiency and good tolerability of, and the improvement in patient compliance with, treatment with the fixed combination perindopril + amlodipine.
Directory of Open Access Journals (Sweden)
Rumana Aslam
2017-07-01
Full Text Available In the present investigation, healthy and certified seeds of Capsicum annuum were treated with five concentrations of caffeine: 0.10%, 0.25%, 0.50%, 0.75% and 1.0%. Germination percentage, plant survival and pollen fertility decreased with increasing caffeine concentration. Similarly, root length and shoot length decreased as the concentration increased in the M1 generation. Different mutants were isolated in the M1 generation. In the M2 generation, various flower mutants with changes in the number of sepals and petals and in anther size and colour (trimerous, tetramerous, pentamerous with fused petals, hexamerous, etc.) were segregated. Heptamerous flowers and anther changes were not observed at the lowest concentration, viz. 0.1%. All these mutants showed significant changes in morphological characters and good breeding value at the lower and intermediate concentrations. Mutagenic effectiveness and efficiency were assessed on the basis of M2 flower-mutant frequency and generally decreased with increasing mutagen concentration. Cytological aberrations in the mutants showed a decreasing trend at the final meiotic stages. These mutants were further analysed by the RAPD method, and on the basis of the polymorphic DNA bands that appeared, the flower mutants were distinguished genotypically. Of 93 bands, 44 were polymorphic, demonstrating the great genetic variation produced by caffeine. Accordingly, the above caffeine concentrations are suitable for the induction of genetic variability in Capsicum genotypes.
Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew
2009-01-01
Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of non-photosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPARcanopy) has been widely used in satellite-based production efficiency models to estimate GPP (as the product FAPARcanopy × PAR × LUEcanopy, where LUEcanopy is the light use efficiency at the canopy level). However, only the PAR absorbed by chlorophyll (the product FAPARchl × PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPARchl in estimating GPP (as the product FAPARchl × PAR × LUEchl) are more likely to be consistent with plant photosynthesis processes.
Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee
2016-04-01
In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial, identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for observations on process B leads to a modest £35,000 increase in the expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only, or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.
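The preposterior calculation described above can be illustrated with a simplified normal-normal value-of-information sketch: a single data-collection process with known sampling SD, a decision to adopt only if the posterior mean incremental net benefit is positive, and Monte Carlo over predicted trial outcomes. All monetary values are hypothetical, and this is not the paper's bivariate two-process method:

```python
import math
import random

random.seed(3)
mu0, tau0 = 500.0, 1000.0    # prior mean/SD of incremental net benefit (GBP)
sigma = 4000.0               # per-patient sampling SD
n = 100                      # proposed sample size
pop, cost = 10000, 200000.0  # beneficiary population, data-collection cost

draws = []
for _ in range(20000):
    theta = random.gauss(mu0, tau0)                    # true INB
    xbar = random.gauss(theta, sigma / math.sqrt(n))   # predicted trial mean
    # Conjugate normal update: weight the prior by its relative precision
    w = (1 / tau0 ** 2) / (1 / tau0 ** 2 + n / sigma ** 2)
    post_mean = w * mu0 + (1 - w) * xbar               # preposterior mean
    draws.append(max(post_mean, 0.0))                  # value after the trial
evsi = sum(draws) / len(draws) - max(mu0, 0.0)         # per-patient EVSI
engs = evsi * pop - cost                               # expected net gain of sampling
print(round(evsi, 1), round(engs, 1))
```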
Directory of Open Access Journals (Sweden)
O. N. Korsun
2014-01-01
Full Text Available The high information load on the crew is one of the main problems of modern piloted aircraft, so research on improving the form of data representation, especially in critical situations, is a challenge. The article considers one opportunity to improve the interface of a modern cockpit, namely the use of spatial sound (3D audio) technology. 3D audio is a technology that recreates spatially directed sound in earphones or via loudspeakers. Spatial audio alerts, which convey not only a warning but also the direction from which the danger proceeds, can reduce the response time to an event and therefore increase the situational safety of flight. It is supposed that the alerts will be provided through the pilot's headset, so realization of the technology via earphones is discussed. The main hypothesis explaining the human ability to recognize the position of a sound source in space asserts that the listener estimates the distortion of the sound signal spectrum caused by its interaction with the head and the auricle, which depends on the arrangement of the sound source. To describe these signal spectrum variations exactly, the concepts of Head Related Impulse Response (HRIR) and Head Related Transfer Function (HRTF) are used. HRIR is measured on humans or dummies. At present the most comprehensive public HRIR library is the CIPIC HRTF Database of the CIPIC Interface Laboratory at UC Davis. To obtain the 3D audio effect, it is necessary to convert a mono signal through linear digital filters with anthropo-dependent impulse responses (HRIR) for the left and right ear corresponding to the chosen direction. The results should be combined into a stereo file and reproduced through the earphones. This scheme was realized in Matlab, and the resulting software was used in experiments to estimate the quantitative characteristics of the technology. For processing and subsequent experiments the following sound signals were chosen: a fragment of the classical music piece "Polovetsky
Virtual Sensors: Efficiently Estimating Missing Spectra
National Aeronautics and Space Administration — Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding...
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V…
Kraskov, A.; Stögbauer, H.; Grassberger, P.
2003-01-01
We present two classes of improved estimators for mutual information $M(X,Y)$, from samples of random points distributed according to some joint probability density $\mu(x,y)$. In contrast to conventional estimators based on binnings, they are based on entropy estimates from $k$-nearest neighbour distances. This means that they are data efficient (with $k=1$ we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have ...
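The nearest-neighbour idea behind these estimators can be sketched in a few lines. The following is a brute-force O(N²) implementation of the first Kraskov-Stögbauer-Grassberger (KSG) variant, using the identity ψ(n) = -γ + H(n-1) for the digamma function at integer arguments; it is a didactic sketch, not the authors' optimized code, and the data below are synthetic.

```python
import math, random

def _digamma(n):
    # Digamma at a positive integer n: psi(n) = -gamma + H_{n-1}
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, n))

def ksg_mi(xs, ys, k=3):
    # First KSG estimator, brute force O(N^2):
    # MI = psi(k) + psi(N) - <psi(n_x + 1) + psi(n_y + 1)>
    n = len(xs)
    total = 0.0
    for i in range(n):
        # distance to the k-th nearest neighbour in max-norm on the joint space
        dists = sorted(max(abs(xs[i] - xs[j]), abs(ys[i] - ys[j]))
                       for j in range(n) if j != i)
        eps = dists[k - 1]
        # marginal neighbour counts strictly inside eps
        nx = sum(1 for j in range(n) if j != i and abs(xs[i] - xs[j]) < eps)
        ny = sum(1 for j in range(n) if j != i and abs(ys[i] - ys[j]) < eps)
        total += _digamma(nx + 1) + _digamma(ny + 1)
    return _digamma(k) + _digamma(n) - total / n

random.seed(1)
x = [random.random() for _ in range(200)]
y_ind = [random.random() for _ in range(200)]      # independent of x
y_dep = [xi + 0.01 * random.random() for xi in x]  # strongly dependent on x
mi_ind = ksg_mi(x, y_ind)
mi_dep = ksg_mi(x, y_dep)
print(mi_ind, mi_dep)
```

The independent pair should yield an estimate near zero, while the nearly deterministic pair yields a large value, illustrating the data efficiency the abstract claims.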
Efficiency in Microfinance Cooperatives
Directory of Open Access Journals (Sweden)
HARTARSKA, Valentina
2012-12-01
Full Text Available In recognition of cooperatives' contribution to the socio-economic well-being of their participants, the United Nations declared 2012 the International Year of Cooperatives. Microfinance cooperatives make up a large part of the microfinance industry. We study the efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ the classical efficiency analysis consisting of estimating a system of equations and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency), as well as the dollar value of lending and deposits (sustainability). We find that microfinance cooperatives have increasing returns to scale, which means that the vast majority can lower costs if they become larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients, with a range from 40,000 to 180,000 borrowers.
Energy Technology Data Exchange (ETDEWEB)
Asociacion de Tecnicos y Profesionistas en Aplicacion Energetica, A.C. [Mexico (Mexico)
2002-06-01
In recent years much attention has been paid to polluting gas emissions, especially those that favor the greenhouse effect (GHE), due to the negative effects their concentration causes in the atmosphere, particularly as the cause of the increase in the overall temperature of the planet, which has been called global climate change. There are many activities that make it possible to lessen or avoid GHE gas emissions, and the main ones have been organized into the so-called Energy Efficiency and Renewable Energy (EE/RE) projects. In order to carry out a project within the framework of the Clean Development Mechanism (MDL), it is necessary to evaluate with quality, precision and transparency the amount of GHE gas emissions that are reduced or suppressed thanks to its application. For that reason, in our country we tried different methodologies aimed at estimating the CO2 emissions that are attenuated or eliminated by means of the application of EE/RE projects.
Karacan, C Özgen
2013-07-30
Coal seam degasification and its efficiency are directly related to the safety of coal mining. Degasification activities in the Black Warrior basin started in the early 1980s using vertical boreholes. Although the Blue Creek seam, which is part of the Mary Lee coal group, has been the main seam of interest for coal mining, vertical wellbores have also been completed in the Pratt, Mary Lee, and Black Creek coal groups of the Upper Pottsville formation to degasify multiple seams. Currently, the Blue Creek seam is further degasified 2-3 years in advance of mining using in-seam horizontal boreholes to ensure safe mining. The location studied in this work lies between Tuscaloosa and Jefferson counties in Alabama and was degasified using 81 vertical boreholes, some of which are still active. When the current longwall mine expanded its operation into this area in 2009, horizontal boreholes were also drilled in advance of mining for further degasification of only the Blue Creek seam to ensure a safe and productive operation. This paper presents an integrated study and a methodology to combine history matching results from vertical boreholes with production modeling of horizontal boreholes using geostatistical simulation to evaluate the spatial effectiveness of in-seam boreholes in reducing gas-in-place (GIP). Results in this study showed that the in-seam boreholes had an estimated effective drainage area of 2050 acres with cumulative production of 604 MMscf methane during ~2 years of operation. With horizontal borehole production, GIP in the Blue Creek seam decreased from an average of 1.52 MMscf to 1.23 MMscf per acre. It was also shown that effective gas flow capacity, which was independently modeled using vertical borehole data, affected horizontal borehole production. GIP and effective gas flow capacity were also used to predict the remaining gas potential of the Blue Creek seam.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
Improving efficiency in stereology
DEFF Research Database (Denmark)
Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm
2013-01-01
The aim of the study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides compared with traditional methods in a practical application, namely the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned, and a proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than that of the disector on tissue slides. We conclude that both the proportionator and the autodisector on virtual slides may improve the efficiency of cell counting in stereology.
DEFF Research Database (Denmark)
Andersen, Rikke Sand; Vedsted, Peter
2015-01-01
Drawing on institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare-seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that the provision of medical care in today's primary care settings requires careful balancing of increasing demands for efficiency, the greater complexity of biomedical knowledge, and consideration for individual patient needs.
Fast fundamental frequency estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2017-01-01
Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the fundamental frequency, and the maximum likelihood (ML) estimator is the most accurate estimator in statistical terms. When the noise is assumed to be white and Gaussian, the ML estimator is identical to the non-linear least squares (NLS) estimator. Despite being optimal in a statistical sense, the NLS estimator has a high computational complexity. In this paper, we propose an algorithm for lowering this complexity significantly by showing that the NLS estimator can be computed efficiently by solving two Toeplitz-plus-Hankel systems of equations and by exploiting the recursive-in-order matrix structures…
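The NLS cost the paper speeds up can be illustrated with a naive baseline. The sketch below uses the standard harmonic-summation approximation (summing periodogram values at integer multiples of each candidate fundamental) over a coarse grid; this is an assumption-laden stand-in for the exact NLS cost and is emphatically not the fast Toeplitz-plus-Hankel algorithm proposed in the paper. The signal and grid are synthetic.

```python
import cmath, math

def harmonic_summation_f0(x, num_harmonics, f_grid):
    # Grid search over candidate fundamental frequencies (cycles/sample).
    # For well-separated harmonics, the NLS cost is approximated by summing
    # periodogram values at integer multiples of each candidate f0.
    n = len(x)
    best_f, best_cost = None, -1.0
    for f in f_grid:
        cost = 0.0
        for l in range(1, num_harmonics + 1):
            s = sum(x[t] * cmath.exp(-2j * math.pi * f * l * t)
                    for t in range(n))
            cost += abs(s) ** 2 / n
        if cost > best_cost:
            best_f, best_cost = f, cost
    return best_f

# Synthetic periodic signal: f0 = 0.1 cycles/sample with three harmonics
f0_true = 0.1
x = [math.sin(2 * math.pi * f0_true * t)
     + 0.5 * math.sin(2 * math.pi * 2 * f0_true * t)
     + 0.25 * math.sin(2 * math.pi * 3 * f0_true * t) for t in range(200)]
grid = [0.05 + 0.001 * i for i in range(100)]  # candidates from 0.05 to 0.149
f0_est = harmonic_summation_f0(x, num_harmonics=3, f_grid=grid)
print(f0_est)
```

The per-candidate cost of this brute-force search is what makes exact NLS expensive and motivates the fast solvers the abstract describes.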
Coherence in quantum estimation
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e., the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.
Schwickerath, U; Uria, C; CERN. Geneva. IT Department
2010-01-01
A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both the box usage and the efficiency of individual user jobs need to be closely monitored so that the sources of inefficiencies can be identified. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. At CERN, the Lemon monitoring system is used for both purposes. As a first step towards improvements, CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.
Ogurtsov, P P; Kukhareva, E I
2016-01-01
The aim was to estimate the prognostic value of the combination of blood group specificity and interleukin 28B (IL-28B) gene polymorphism for the achievement of a sustained virologic response (SVR) to antiviral therapy (AVT) with pegylated interferon α-2 and ribavirin in patients with chronic genotype 1 hepatitis C (CHC-1). The secondary aim was to evaluate the influence of these genetic factors on the progression of hepatic fibrosis in case of failure of the above treatment. A total of 146 patients with CHC-1 were examined. We studied the RNA genotype of hepatitis C virus, blood group specificity, IL-28B gene polymorphism, and the severity of hepatic fibrosis (puncture biopsies). The dynamics of hepatic fibrosis was followed up in 40 patients who failed to develop a virologic response; 20 control patients did not receive AVT. The multifactor significance criterion was used to identify the initial factor that produced the highest effect on SVR. SVR was observed in 56.8% of the patients. Its achievement was most significantly influenced by the combination of blood group specificity and IL-28B gene polymorphism (p = 0.000024). The combination of blood group 0(I) with C/C or T/T IL-28B genotypes, A(II) with C/T or T/T, and B(III) with T/G was associated with SVR in 100%, 88.2%, and 94.4% of cases, respectively. SVR was absent in patients with blood group A(II) in combination with a double-nucleotide substitution in rs8099917 of the IL-28B gene (TG and GG genotypes); these patients suffered progressive fibrosis. SVR occurred in 83.8% of the patients with blood group B(III). Knowledge of the blood group and IL-28B gene polymorphism in patients with CHC-1 treated with pegylated interferon α-2 and ribavirin makes it possible to predict SVR with a probability of 100% in case of blood group 0(I) and C/C or T/T genotypes, 88.2% in case of blood group A(II) and a single-nucleotide C>T substitution in the rs8099917 locus of the IL-28B gene, and 94.4% in case of blood group B(III) and a single-nucleotide T
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
Plataniotis K. N.
1997-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Estimating Probabilities in Recommendation Systems
Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul
2010-01-01
Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...
Das, Bishuddhananda; Maiti, Anup Kumar; Gangopadhyay, Sankar
2014-06-01
The ABCD matrix prescribed for an upside-down tapered hemispherical microlens end drawn from a parabolic-index circular-core fiber is used to formulate an analytical expression for the coupling efficiency of excitation of this optical device by a laser diode. In our analysis, we assume a Gaussian field distribution for both the source and the fiber. For maximum coupling efficiency, the lens-transmitted spot size of the source should match the fiber spot size. In our investigations, we employ two different laser diodes emitting at wavelengths of 1.3 μm and 1.5 μm, respectively. It is found that the 1.5 μm wavelength is more efficient in the context of the present coupling optics. Our formalism describes the coupling optics excellently and requires little computation. This simple but accurate technique is expected to benefit system designers and packagers concerned with optimum launch optics.
Efficient ICT for efficient smart grids
Smit, Gerardus Johannes Maria
2012-01-01
In this extended abstract the need for efficient and reliable ICT is discussed. Efficiency of ICT not only deals with energy-efficient ICT hardware, but also deals with efficient algorithms, efficient design methods, efficient networking infrastructures, etc. Efficient and reliable ICT is a
Direct volume estimation without segmentation
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including the left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart diseases. Conventional methods depend on an intermediate segmentation step performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods without segmentation, leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are flexible enough to be used for volume estimation of either joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing them with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable the diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
Attitude Estimation or Quaternion Estimation?
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
This paper develops an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially...
Directory of Open Access Journals (Sweden)
Brundaban Patro
2016-03-01
Full Text Available This paper presents a study of combination tube boilers, as applicable to commercial use, along with their significant features, limitations, and applicability. A heat balance sheet is prepared to quantify the various heat losses in two different two-pass combination tube boilers, using low-grade coal and rice husk as fuels. The efficiency of the combination tube boilers is also studied by the direct and heat-loss methods. It is observed that the dry flue gas loss is the major loss in combination tube boilers. The loss due to unburnt fuel in the fly ash is very small in combination tube boilers, owing to the surrounding membrane wall. It is also observed that the loss due to unburnt fuel in the bottom ash contributes considerably to the heat loss and cannot be ignored.
Motor-operated gearbox efficiency
Energy Technology Data Exchange (ETDEWEB)
DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.
1996-12-01
Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.
New predictors of sleep efficiency.
Jung, Da Woon; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk
2017-01-01
Sleep efficiency is a commonly and widely used measure to objectively evaluate sleep quality. Monitoring sleep efficiency can provide significant information about health conditions. As an attempt to facilitate less cumbersome monitoring of sleep efficiency, our study aimed to suggest new predictors of sleep efficiency that enable reliable and unconstrained estimation of sleep efficiency during an awake resting period. We hypothesized that the autonomic nervous system activity observed before falling asleep might be associated with sleep efficiency. To assess autonomic activity, heart rate variability and breathing parameters were analyzed for 5 min. Using the extracted parameters as explanatory variables, stepwise multiple linear regression analyses and k-fold cross-validation tests were performed with 240 electrocardiographic and thoracic volume change signal recordings to develop the sleep efficiency prediction model. The developed model's sleep efficiency predictability was evaluated using 60 piezoelectric sensor signal recordings. The regression model, established using the ratio of the power of the low- and high-frequency bands of the heart rate variability signal and the average peak inspiratory flow value, provided an absolute error (mean ± SD) of 2.18% ± 1.61% and a Pearson's correlation coefficient of 0.94 between the predicted sleep efficiency values and the reference values. Our study is the first to achieve reliable and unconstrained prediction of sleep efficiency without overnight recording. This method has the potential to be utilized for home-based, long-term monitoring of sleep efficiency and to support reasonable decision-making regarding the execution of sleep efficiency improvement strategies.
Power Quality Indices Estimation Platform
Directory of Open Access Journals (Sweden)
Eliana I. Arango-Zuluaga
2013-11-01
Full Text Available An interactive platform for estimating the power quality indices in single-phase electric power systems is presented. It meets the IEEE 1459-2010 standard recommendations. The platform was developed in order to support teaching and research activities in electric power quality. The platform estimates the power quality indices from voltage and current signals using three different algorithms based on the fast Fourier transform (FFT), the wavelet packet transform (WPT), and the least-squares method. The results show that the implemented algorithms are efficient for estimating the power quality indices, and the platform can be used according to the established objectives.
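IEEE 1459-2010 defines many quantities; as a flavour of what such a platform computes from sampled voltage or current signals, here is a minimal sketch of two of the simplest spectrum-based indices, RMS value and total harmonic distortion (THD), using a brute-force DFT rather than the FFT/WPT algorithms of the paper. The test signal (fundamental plus a 10% third harmonic on exact DFT bins) is illustrative.

```python
import cmath, math

def rms(x):
    # Root-mean-square value of a sampled waveform
    return math.sqrt(sum(s * s for s in x) / len(x))

def thd(x, f0_bin):
    # Total harmonic distortion from a brute-force DFT: RMS of the harmonic
    # magnitudes divided by the fundamental magnitude (one-sided spectrum).
    n = len(x)
    spec = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]
    fund = spec[f0_bin]
    harm = math.sqrt(sum(spec[k * f0_bin] ** 2
                         for k in range(2, (n // 2) // f0_bin)))
    return harm / fund

# Fundamental at DFT bin 4 plus a 10% third harmonic (illustrative signal)
n = 64
x = [math.sin(2 * math.pi * 4 * t / n)
     + 0.1 * math.sin(2 * math.pi * 12 * t / n) for t in range(n)]
print(round(rms(x), 4), round(thd(x, 4), 4))
```

Because both components sit on exact bins, the THD comes out at the injected 10% ratio, which is the kind of sanity check such a platform would use.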
Joint DOA and DOD Estimation in Bistatic MIMO Radar without Estimating the Number of Targets
Directory of Open Access Journals (Sweden)
Zaifang Xi
2014-01-01
established without prior knowledge of the signal environment. In this paper, an efficient method for joint DOA and DOD estimation in bistatic MIMO radar without estimating the number of targets is presented. The proposed method computes an estimate of the noise subspace using the power of R (POR) technique. Then the two-dimensional (2D) direction finding problem is decoupled into two successive one-dimensional (1D) angle estimation problems by employing the rank reduction (RARE) estimator.
Isobars and the Efficient Market Hypothesis
Kristýna Ivanková
2010-01-01
Isobar surfaces, a method for describing the overall shape of multidimensional data, are estimated by nonparametric regression and used to evaluate the efficiency of selected markets based on returns of their stock market indices.
Depth estimation via stage classification
Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.
2008-01-01
We identify scene categorization as the first step towards efficient and robust depth estimation from single images. Categorizing the scene into one of the geometric classes greatly reduces the possibilities in subsequent phases. To that end, we introduce 15 typical 3D scene geometries, called
Odds Ratios Estimation of Rare Event in Binomial Distribution
Directory of Open Access Journals (Sweden)
Kobkun Raweesawat
2016-01-01
Full Text Available We introduce a new estimator of odds ratios for rare events using an empirical Bayes method in two independent binomial distributions. We compare the proposed odds ratio estimates with two estimators, the modified maximum likelihood estimator (MMLE) and the modified median unbiased estimator (MMUE), using the Estimated Relative Error (ERE) as a criterion of comparison. It is found that the new estimator is more efficient than the other methods.
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Sánchez, Ricardo J.; Jan Hoffmann; Alejandro Micco; Georgina V Pizzolitto; Martín Sgut; Gordon Wilmsmeier
2003-01-01
This paper examines the determinants of waterborne transport costs, with particular emphasis on the efficiency at port level. Its main contribution is (1) to generate statistically quantifiable measures of port efficiency from a survey of Latin American common user ports, and (2) to estimate a model of waterborne transport costs, including the previously generated port efficiency measures as explanatory variables. In order to incorporate different port efficiency measures from the survey, we ...
EPA’s Travel Efficiency Method (TEAM) AMPO Presentation
Presentation describes EPA’s Travel Efficiency Assessment Method (TEAM) for assessing potential travel efficiency strategies to reduce travel activity and emissions; includes estimates of reductions in vehicle miles traveled in four different geographic areas.
Efficiency of seed production in southern pine seed orchards
David L. Bramlett
1977-01-01
Seed production in southern pine seed orchards can be evaluated by estimating the efficiency of four separate stages of cone, seed, and seedling development. Calculated values are: cone efficiency (CE), the ratio of mature cones to the initial flower crop; seed efficiency (SE), the ratio of filled seeds per cone to the seed potential; extraction efficiency (EE), the...
Effects of heterogeneity on bank efficiency scores
Bos, J. W. B.; Koetter, M.; Kolari, J. W.; Kool, C. J. M.
2009-01-01
Bank efficiency estimates often serve as a proxy of managerial skill since they quantify sub-optimal production choices. But such deviations can also be due to omitted systematic differences among banks. In this study, we examine the effects of heterogeneity on bank efficiency scores. We compare
Regression Estimator Using Double Ranked Set Sampling
Directory of Open Access Journals (Sweden)
Hani M. Samawi
2002-06-01
The performance of a regression estimator based on the double ranked set sampling (DRSS) scheme, introduced by Al-Saleh and Al-Kadiri (2000), is investigated when the mean of the auxiliary variable X is unknown. Our analysis and simulations indicate that using the DRSS regression estimator for estimating the population mean is substantially more efficient than using the regression estimator based on simple random sampling (SRS) or ranked set sampling (RSS) (Yu and Lam, 1997). Moreover, the DRSS regression estimator is also more efficient than the naive estimators of the population mean based on SRS, RSS (when the correlation coefficient is at least 0.4), and DRSS (for high correlation coefficients, at least 0.91). The theory is illustrated using a real data set of trees.
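A minimal sketch of the scheme described above, assuming perfect ranking on the auxiliary variable X (in practice the ranking is often judgment-based): one RSS cycle keeps the i-th order statistic of the i-th set of m units, DRSS applies the ranking stage a second time to an RSS-generated pool, and the regression estimator then adjusts the Y mean using the known or estimated X mean. Function names are illustrative, not from the paper.

```python
import random

def rss_sample(pop, m):
    """One ranked-set-sampling cycle with set size m: for i = 1..m,
    draw m units, rank them by the auxiliary X, keep the i-th one."""
    out = []
    for i in range(m):
        chosen = random.sample(pop, m)
        chosen.sort(key=lambda xy: xy[0])   # rank on the auxiliary X
        out.append(chosen[i])
    return out

def drss_sample(pop, m):
    """DRSS: rank twice - build an m*m pool by RSS, then RSS that pool."""
    pool = [u for _ in range(m) for u in rss_sample(pop, m)]
    return rss_sample(pool, m)

def regression_estimator(sample, mu_x):
    """Regression estimator of the Y mean, given the X mean mu_x:
    ybar + b * (mu_x - xbar) with b the least-squares slope."""
    n = len(sample)
    xbar = sum(x for x, _ in sample) / n
    ybar = sum(y for _, y in sample) / n
    sxx = sum((x - xbar) ** 2 for x, _ in sample)
    sxy = sum((x - xbar) * (y - ybar) for (x, y) in sample)
    b = sxy / sxx
    return ybar + b * (mu_x - xbar)
```

When Y is exactly linear in X, the regression adjustment recovers the population Y mean regardless of which units the ranked sampling selected.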
Directory of Open Access Journals (Sweden)
Douglas Sampaio Henrique
2005-06-01
Data on 320 animals were obtained from eight comparative slaughter studies performed under tropical conditions and used to estimate the total efficiency of utilization of metabolizable energy intake (MEI), which varied from 77 to 419 kcal kg^-0.75 d^-1. The data also contained direct measures of recovered energy (RE), which allowed heat production (HE) to be calculated by difference. RE was regressed on MEI, and deviations from linearity were evaluated using the F-test. The respective estimates of the fasting heat production and of the intercept and slope of the relationship between RE and MEI were 73 kcal kg^-0.75 d^-1, 42 kcal kg^-0.75 d^-1 and 0.37. Total efficiency was then estimated by dividing the net energy for maintenance and growth by the metabolizable energy intake. The estimated total efficiency of ME utilization and analogous estimates based on the beef cattle NRC model were employed in an additional study to evaluate their predictive power in terms of mean square deviations, for both temperate and tropical conditions. The two approaches presented similar predictive power, but the proposed one had a 22% lower mean squared deviation despite its more simplified structure.
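The bookkeeping in this abstract reduces to two identities: heat production is obtained by difference, HE = MEI − RE, and total efficiency divides net energy (fasting heat production plus retained energy) by intake. A small numeric sketch using the abstract's rounded fasting-heat-production value; the RE figure in the example is invented for illustration, and the paper's full model is more detailed.

```python
FHP = 73.0  # fasting heat production, kcal kg^-0.75 d^-1 (from the abstract)

def heat_production(mei, re):
    """HE by difference: metabolizable energy intake minus recovered energy."""
    return mei - re

def total_efficiency(mei, re, fhp=FHP):
    """Total efficiency k_t = (net energy for maintenance + gain) / MEI,
    with maintenance net energy taken as the fasting heat production."""
    return (fhp + re) / mei
```

For example, an animal with MEI = 300 and RE = 60 kcal kg^-0.75 d^-1 would have HE = 240 and a total efficiency of (73 + 60) / 300 ≈ 0.44 under these assumptions.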
Optimizing lengths of confidence intervals: fourth-order efficiency in location models
Klaassen, C.; Venetiaan, S.
2010-01-01
Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation
Energy Technology Data Exchange (ETDEWEB)
Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman
2004-03-30
Data center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy-intensive data centers are crucial to California's industries and to many other institutions (such as universities) in the state, and they play an important role in the constantly evolving communications industry. To better understand the energy requirements and the energy efficiency improvement potential of these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: first, to characterize current data center electricity use; and second, to develop a research "roadmap" defining and prioritizing possible future public-interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts. In total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted that included internet hosting, corporate, institutional, governmental, educational and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high-quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed
Electric propulsion cost estimation
Palaszewski, B. A.
1985-01-01
A parametric cost model for mercury ion propulsion modules is presented. A detailed work breakdown structure is included. Cost estimating relationships were developed for the individual subsystems and the nonhardware items (systems engineering, software, etc.). Solar array and power processor unit (PPU) costs are the significant cost drivers. Simplification of both of these subsystems through applications of advanced technology (lightweight solar arrays and high-efficiency, self-radiating PPUs) can reduce costs. Comparison of the performance and cost of several chemical propulsion systems with the Hg ion module are also presented. For outer-planet missions, advanced solar electric propulsion (ASEP) trip times and O2/H2 propulsion trip times are comparable. A three-year trip time savings over the baselined NTO/MMH propulsion system is possible with ASEP.
THE ESTIMATION OF EFFICIENCY OF THE LADLES HEATING PROCESS
Wnęk, Mariusz; Rozpondek, Maciej
2016-01-01
The paper presents a system for drying and heating metallurgical ladles. The ladle heating parameters significantly affect the metallurgical processes. Proper heating of the ceramic ladle lining can reduce the required steel temperature in the furnace, which lowers energy consumption and brings an economic benefit. The adopted drying and heating rate of the ladle depends on the ladle refractory lining, alkaline or aluminosilicate. The temperature field uniformity of ceram...
Comparison of Vehicle Efficiency Technology Attributes and Synergy Estimates
Energy Technology Data Exchange (ETDEWEB)
Duleep, G. [ICF Incorporated, LLC., Fairfax, VA (United States)
2011-02-01
Analyzing the future fuel economy of light-duty vehicles (LDVs) requires detailed knowledge of the vehicle technologies available to improve LDV fuel economy. The National Highway Transportation Safety Administration (NHTSA) has been relying on technology data from a 2001 National Academy of Sciences (NAS) study (NAS 2001) on corporate average fuel economy (CAFE) standards, but the technology parameters were updated in the new proposed rulemaking (EPA and NHTSA 2009) to set CAFE and greenhouse gas standards for the 2011 to 2016 period. The update is based largely on an Environmental Protection Agency (EPA) analysis of technology attributes augmented by NHTSA data and contractor staff assessments. These technology cost and performance data were documented in the Draft Joint Technical Support Document (TSD) issued by EPA and NHTSA in September 2009 (EPA/NHTSA 2009). For these tasks, the Energy and Environmental Analysis (EEA) division of ICF International (ICF) examined each technology and technology package in the Draft TSD and assessed their costs and performance potential based on U.S. Department of Energy (DOE) program assessments. ICF also assessed the technologies' other relevant attributes based on data from actual production vehicles and recently published technical articles in engineering journals. ICF examined technology synergy issues through an ICF in-house model that uses a discrete parameter approach.
Driver head pose estimation using efficient descriptor fusion
National Research Council Canada - National Science Library
Alioua, Nawal; Amine, Aouatif; Rogozan, Alexandrina; Bensrhair, Abdelaziz; Rziza, Mohammed
2016-01-01
.... Model-based approaches use a face geometrical model usually obtained from facial features, whereas appearance-based techniques use the whole face image characterized by a descriptor and generally...
Transverse correlation: An efficient transverse flow estimator - initial results
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Henze, Lasse; Kortbek, Jacob
2008-01-01
of vascular hemodynamics, the flow angle cannot easily be found as the angle is temporally and spatially variant. Additionally the precision of traditional methods is severely lowered for high flow angles, and they breakdown for a purely transverse flow. To overcome these problems we propose a new method...
Fast Katz and commuters: efficient estimation of social relatedness.
Energy Technology Data Exchange (ETDEWEB)
On, Byung-Won; Lakshmanan, Laks V. S.; Esfandiar, Pooya; Bonchi, Francesco; Greif, Chen; Gleich, David F.
2010-12-01
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
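For context, the pairwise Katz score being bounded above is K_ij = Σ_{k≥1} β^k (A^k)_ij = ((I − βA)^{-1} − I)_ij, with β below the reciprocal of the spectral radius of A. The paper's contribution is to bound a single entry via the Lanczos/quadrature connection without a full solve; the sketch below instead computes the exact value with one dense linear solve, which is fine for small graphs but is not the paper's algorithm.

```python
import numpy as np

def katz_pair(A, i, j, beta):
    """Katz score for one node pair via a single linear solve:
    K = (I - beta*A)^{-1} - I, and column j of the inverse solves
    (I - beta*A) x = e_j, so K[i, j] = x[i] - [i == j]."""
    n = A.shape[0]
    e_j = np.zeros(n)
    e_j[j] = 1.0
    x = np.linalg.solve(np.eye(n) - beta * A, e_j)
    return x[i] - (1.0 if i == j else 0.0)
```

For a valid β this agrees with the truncated power series Σ β^k (A^k)_ij, which is how the test below checks it.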
Stochastic Frontier Estimation of Efficient Learning in Video Games
Hamlen, Karla R.
2012-01-01
Stochastic Frontier Regression Analysis was used to investigate strategies and skills that are associated with the minimization of time required to achieve proficiency in video games among students in grades four and five. Students self-reported their video game play habits, including strategies and skills used to become good at the video games…
Efficiency estimation for permanent magnets of synchronous wind generators
Directory of Open Access Journals (Sweden)
Serebryakov A.
2014-02-01
The use of permanent magnets in wind generators opens wide possibilities for increasing the efficiency of small- and medium-power wind energy installations (WEI). In addition, the generator mass decreases, reliability increases, and operating costs fall. However, when high-energy permanent magnets are used in generators of higher power, a number of problems arise; these can be successfully overcome if the magnets are correctly arranged with respect to their orientation, creating the magnetic field in the air gap of the electrical machine. The paper attempts to show that substantial advantages exist in small- and medium-power wind generators when the permanent magnets are magnetized tangentially with respect to the air gap.
Efficient Parametric Inference, Estimation and Simulation of Open Quantum Systems
DEFF Research Database (Denmark)
Gammelmark, Søren
2013-01-01
...constituents are by now experimentally possible. In this thesis, a numerically efficient method is developed for optimally extracting and quantifying information about system parameters of continuously observed quantum systems. We further introduce an extension of the usual quantum mechanical notion of a state, the...... interactions and many particles can be simulated numerically efficiently by means of matrix product states. The static and dynamic properties of selected models of continuously observed many-particle quantum systems are investigated, and their potential applications to precision measurements are analysed....
Estimation of farm level technical efficiency and its determinants ...
African Journals Online (AJOL)
Eyerusalem
Sweet potato originated from Central America and spread rapidly to Asia and Africa during the 17th and 18th centuries, respectively, and has become an important crop in .... control of the farmer, e.g. weather, disease outbreaks, measurement errors, etc. Vi is assumed to be independently and identically distributed as N(0, δv²).
Estimation of biochemical variables using quantum-behaved particle ...
African Journals Online (AJOL)
Due to the difficulties in the measurement of biochemical variables in fermentation processes, a soft-sensing model based on a radial basis function neural network had been established for estimating the variables. To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved ...
Interactive inverse kinematics for human motion estimation
DEFF Research Database (Denmark)
Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome
2009-01-01
We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion estimation system uses a single camera to estimate the motion of a human. The results show that inverse kinematics can significantly speed up the estimation process, while retaining a quality comparable to a full pose motion estimation system. Our novelty lies primarily in the use of inverse kinematics...
External efficiency of schools in pre-vocational secondary education
Timmermans, A. C.; Rekers-Mombarg, L. T. M.; Vreeburg, B. A. N. M.
2016-01-01
The extent to which students from a pre-vocational secondary school are placed in training programmes in senior secondary vocational education that match their abilities can be estimated by an indicator called external efficiency. However, estimating the external efficiency of secondary schools
Efficiency in the Community College Sector: Stochastic Frontier Analysis
Agasisti, Tommaso; Belfield, Clive
2017-01-01
This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…
Efficiency wages and bargaining
Walsh, Frank
2005-01-01
I argue that, in contrast to the literature to date, efficiency wage and bargaining solutions will typically be independent. If the bargained wage satisfies the efficiency wage constraint, efficiency wages are irrelevant. If it does not, we typically have the efficiency wage solution, and bargaining is irrelevant.
Estimating the coherence of noise
Wallman, Joel
To harness the advantages of quantum information processing, quantum systems have to be controlled to within some maximum threshold error. Certifying whether the error is below the threshold is possible by performing full quantum process tomography, however, quantum process tomography is inefficient in the number of qubits and is sensitive to state-preparation and measurement errors (SPAM). Randomized benchmarking has been developed as an efficient method for estimating the average infidelity of noise to the identity. However, the worst-case error, as quantified by the diamond distance from the identity, can be more relevant to determining whether an experimental implementation is at the threshold for fault-tolerant quantum computation. The best possible bound on the worst-case error (without further assumptions on the noise) scales as the square root of the infidelity and can be orders of magnitude greater than the reported average error. We define a new quantification of the coherence of a general noise channel, the unitarity, and show that it can be estimated using an efficient protocol that is robust to SPAM. Furthermore, we also show how the unitarity can be used with the infidelity obtained from randomized benchmarking to obtain improved estimates of the diamond distance and to efficiently determine whether experimental noise is close to stochastic Pauli noise.
Technical efficiency, efficiency change, technical progress and ...
African Journals Online (AJOL)
In May 2006, the Ministers of Health of all the countries on the African continent, at a special session of the African Union, undertook to institutionalise efficiency monitoring within their respective national health information management systems. The specific objectives of this study were: (i) to assess the technical efficiency of ...
Ensemble estimators for multivariate entropy estimation.
Sricharan, Kumar; Wei, Dennis; Hero, Alfred O
2013-07-01
The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^(-γ/d)), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where the optimal weights can be chosen such that the weighted estimator converges at the much faster, dimension-invariant rate of O(T^(-1)). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
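A minimal one-dimensional sketch of the ensemble idea: compute the Kozachenko-Leonenko k-NN entropy estimate for several values of k and take a weighted affine combination whose weights sum to one. The paper chooses the weights by solving a convex program that cancels the leading bias terms; the uniform weights below are only a placeholder for that step.

```python
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def psi_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j<m} 1/j."""
    return -EULER + sum(1.0 / j for j in range(1, m))

def kl_entropy_1d(xs, k):
    """Kozachenko-Leonenko k-NN entropy estimate for 1-D samples:
    psi(N) - psi(k) + (1/N) * sum_i log(2 * r_ik), where r_ik is the
    distance from x_i to its k-th nearest neighbour (O(n^2) sketch)."""
    n = len(xs)
    total = 0.0
    for i, x in enumerate(xs):
        r_ik = sorted(abs(x - xs[j]) for j in range(n) if j != i)[k - 1]
        total += math.log(2.0 * r_ik)   # log of interval (ball) length, d = 1
    return psi_int(n) - psi_int(k) + total / n

def ensemble_entropy(xs, ks, weights):
    """Weighted affine combination of k-NN estimates; weights sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * kl_entropy_1d(xs, k) for w, k in zip(weights, ks))
```

On a uniform [0, 1] sample the true differential entropy is 0, so the combined estimate should be close to zero for moderate sample sizes.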
Stewart, R. D.
1979-01-01
The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, and cuts the typing and reproduction time involved in preparing cost estimates.
Waldo, Staffan
2007-01-01
While individual data form the base for much empirical analysis in education, this is not the case for analysis of technical efficiency. In this paper, efficiency is estimated using individual data which is then aggregated to larger groups of students. Using an individual approach to technical efficiency makes it possible to carry out studies on a…
I.P. van Staveren (Irene)
2007-01-01
Introduction. Efficiency is generally regarded as a value-neutral concept, concerned with assessing whether an economy produces at its possibility frontier, that is, generating maximum possible market output with given resources. Efficiency analysis generally rejects concerns
Energy Technology Data Exchange (ETDEWEB)
Carr, D.B.; Tolley, H.D.
1982-12-01
This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
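One member of this family of tail estimators can be sketched as a peaks-over-threshold fit with an exponential tail: estimate the exceedance rate empirically, fit the mean excess above a threshold u, and extrapolate P(X > x) beyond the data. This is a generic illustration of tail extrapolation, not one of the specific weighted estimators compared by the authors.

```python
import math

def tail_prob(data, x, u):
    """Estimate P(X > x), possibly beyond the observed range, by
    fitting an exponential tail to the exceedances over threshold u:
    P(X > x) ~ (N_u / N) * exp(-(x - u) / mean_excess)."""
    exceed = [v - u for v in data if v > u]
    if not exceed:
        raise ValueError("no exceedances over the threshold")
    mean_excess = sum(exceed) / len(exceed)
    return (len(exceed) / len(data)) * math.exp(-(x - u) / mean_excess)
```

For data that really are exponential the extrapolation is consistent; for heavier tails a generalized Pareto excess model would replace the exponential.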
Energy Efficient Content Distribution
Araujo J.; Giroire F.; Liu Y; Modrzejewski R.; Moulierac J.
2016-01-01
To optimize energy efficiency in networks, operators try to switch off as many network devices as possible. Recently, there has been a trend to introduce content caches as an inherent capability of network equipment, with the objective of improving the efficiency of content distribution and reducing network congestion. In this work, we study the impact of using in-network caches and CDN cooperation on energy-efficient routing. We formulate this problem as Energy Efficient Content Distribution. The ...
Energy efficiency; Efficacite energetique
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-06-15
This roadmap, proposed by the Total Group, aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, energy efficiency in industry, and Total's commitment. (A.L.B.)
Wight, Jonathan B.
2017-01-01
The normative elements underlying efficiency are more complex than generally portrayed and rely upon ethical frameworks that are generally absent from classroom discussions. Most textbooks, for example, ignore the ethical differences between Pareto efficiency (based on voluntary win-win outcomes) and the modern Kaldor-Hicks efficiency used in…
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V. [ABCO Industries, Abilene, TX (United States)
1997-12-31
This paper outlines a few methods or options for generating steam efficiently in cogeneration plants when using conventional steam generators (boilers) and gas turbine Heat Recovery Steam Generators (HRSGs). By understanding the performance characteristics of these systems and operating them at their most efficient loads, steam can be generated at low cost. Suggestions are also made to improve the efficiency of existing HRSGs.
Coordination of Energy Efficiency and Demand Response
Energy Technology Data Exchange (ETDEWEB)
Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison
2010-01-29
This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries - which consume more than 70 percent of the nation's natural gas and electricity - is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that 'the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW' by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
Mike
2013-03-09
... evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait animal model genetic evaluations and alternative genetic predictors of feed efficiency ...
Distributed fusion estimation for sensor networks with communication constraints
Zhang, Wen-An; Song, Haiyu; Yu, Li
2016-01-01
This book systematically presents energy-efficient robust fusion estimation methods to achieve thorough and comprehensive results in the context of network-based fusion estimation. It summarizes recent findings on fusion estimation with communication constraints; several novel energy-efficient and robust design methods for dealing with energy constraints and network-induced uncertainties are presented, such as delays, packet losses, and asynchronous information... All the results are presented as algorithms, which are convenient for practical applications.
Efficient flapping flight of pterosaurs
Strang, Karl Axel
In the late eighteenth century, humans discovered the first pterosaur fossil remains and have been fascinated by their existence ever since. Pterosaurs exploited their membrane wings in a sophisticated manner for flight control and propulsion, and were likely the most efficient and effective flyers ever to inhabit our planet. The flapping gait is a complex combination of motions that sustains and propels an animal in the air. Because pterosaurs were so large with wingspans up to eleven meters, if they could have sustained flapping flight, they would have had to achieve high propulsive efficiencies. Identifying the wing motions that contribute the most to propulsive efficiency is key to understanding pterosaur flight, and therefore to shedding light on flapping flight in general and the design of efficient ornithopters. This study is based on published results for a very well-preserved specimen of Coloborhynchus robustus, for which the joints are well-known and thoroughly described in the literature. Simplifying assumptions are made to estimate the characteristics that can not be inferred directly from the fossil remains. For a given animal, maximizing efficiency is equivalent to minimizing power at a given thrust and speed. We therefore aim at finding the flapping gait, that is the joint motions, that minimize the required flapping power. The power is computed from the aerodynamic forces created during a given wing motion. We develop an unsteady three-dimensional code based on the vortex-lattice method, which correlates well with published results for unsteady motions of rectangular wings. In the aerodynamic model, the rigid pterosaur wing is defined by the position of the bones. In the aeroelastic model, we add the flexibility of the bones and of the wing membrane. The nonlinear structural behavior of the membrane is reduced to a linear modal decomposition, assuming small deflections about the reference wing geometry. The reference wing geometry is computed for
Barriers to Industrial Energy Efficiency - Report to Congress, June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This report examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This report also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
Barriers to Industrial Energy Efficiency - Study (Appendix A), June 2015
Energy Technology Data Exchange (ETDEWEB)
None
2015-06-01
This study examines barriers that impede the adoption of energy efficient technologies and practices in the industrial sector, and identifies successful examples and opportunities to overcome these barriers. Three groups of energy efficiency technologies and measures were examined: industrial end-use energy efficiency, industrial demand response, and industrial combined heat and power. This study also includes the estimated economic benefits from hypothetical Federal energy efficiency matching grants, as directed by the Act.
On nonparametric hazard estimation
Hobbs, Brian P.
2015-01-01
The Nelson-Aalen estimator provides the basis for the ubiquitous Kaplan-Meier estimator, and therefore is an essential tool for nonparametric survival analysis. This article reviews martingale theory and its role in demonstrating that the Nelson-Aalen estimator is uniformly consistent for estimating the cumulative hazard function for right-censored continuous time-to-failure data.
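As a concrete illustration (a sketch, not the article's code), the Nelson-Aalen estimate H(t) = sum over event times t_j <= t of d_j / n_j can be computed directly from (time, event) pairs; the helper name and toy data are invented:

```python
# Minimal sketch of the Nelson-Aalen cumulative hazard estimator for
# right-censored data. Input: (time, event) pairs, event = 1 for an
# observed failure, 0 for censoring.

def nelson_aalen(data):
    """Return [(t, H(t))] at each distinct failure time."""
    data = sorted(data)                      # order by time
    n_at_risk = len(data)
    H, curve = 0.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for s, e in data if s == t and e == 1)
        ties = sum(1 for s, _ in data if s == t)
        if deaths:
            H += deaths / n_at_risk          # increment d_j / n_j
            curve.append((t, H))
        n_at_risk -= ties                    # everyone at t leaves the risk set
        i += ties
    return curve

curve = nelson_aalen([(2, 1), (3, 0), (5, 1), (5, 1), (7, 0), (9, 1)])
```

Censored observations reduce the risk set but contribute no hazard increment, which is exactly why the estimator handles right-censoring naturally.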
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
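The Monte Carlo approach described can be sketched as follows; plot values and error magnitudes are invented, and the study's actual update models are more elaborate:

```python
import random

# Hypothetical illustration: propagate plot-level measurement error and
# update-model error into a total-volume estimate via Monte Carlo simulation.

random.seed(42)
measured = [120.0, 95.0, 143.0, 110.0]   # plots measured this year (m^3/ha)
updated  = [100.0, 130.0]                # plots carried forward by an update model

def one_draw():
    total = sum(v + random.gauss(0, 0.03 * v) for v in measured)   # 3% meas. error
    total += sum(v + random.gauss(0, 0.10 * v) for v in updated)   # 10% model error
    return total

draws = [one_draw() for _ in range(20000)]
mean = sum(draws) / len(draws)
sd = (sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)) ** 0.5
```

The spread `sd` of the simulated totals is the combined uncertainty; updated plots, with their larger assumed error, dominate it even though they are fewer.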
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found …
Determinants of Bank Efficiency: Evidence from Czech Banking Sector
Rostislav Staněk
2015-01-01
The paper identifies bank-specific determinants of Czech commercial bank efficiency during the period 2000–2012. The paper employs a panel version of a stochastic efficiency frontier model with time-variant efficiency to identify the impact of bank size and the structure of the bank’s portfolio on the bank’s cost and profit efficiency. The results of the estimation show that bank size has no impact on cost efficiency but negatively influences the bank’s ability to generate revenue. Cost effici...
Noise variance estimation for Kalman filter
Beniak, Ryszard; Gudzenko, Oleksandr; Pyka, Tomasz
2017-10-01
In this paper, we propose an algorithm that evaluates noise variance with a numerical integration method. For noise variance estimation, we use the Krogh method with a variable integration step. In line with common practice, we limit our study to a fourth-order method. First, we perform simulation tests for randomly generated signals related to the transient state and the steady state. Next, we formulate three methodologies (research hypotheses) for noise variance estimation, and then compare their efficiency.
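The Krogh-integration procedure itself is not reproduced here; a much simpler residual-based sketch shows one way to obtain the measurement-noise variance that a Kalman filter needs (signal shape and noise level below are invented):

```python
import math, random

# Residual-based noise variance estimate (not the authors' method).
# For white noise riding on a slowly varying signal, the second difference
# d[i] = y[i+1] - 2*y[i] + y[i-1] has variance ~ 6 * sigma^2, because the
# smooth signal's curvature contributes almost nothing.

random.seed(0)
sigma_true = 0.1
y = [math.sin(0.01 * i) + random.gauss(0, sigma_true) for i in range(5000)]

d2 = [y[i + 1] - 2 * y[i] + y[i - 1] for i in range(1, len(y) - 1)]
mu = sum(d2) / len(d2)
var_d2 = sum((d - mu) ** 2 for d in d2) / len(d2)
r_hat = var_d2 / 6.0        # estimated measurement-noise variance R
```

The estimate `r_hat` could then be supplied as the measurement-noise covariance R of a Kalman filter.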
The energy efficiency of lead self-sputtering
DEFF Research Database (Denmark)
Andersen, Hans Henrik
1968-01-01
The sputtering efficiency (i.e., the ratio between sputtered energy and impinging ion energy) has been measured for 30–75 keV lead ions impinging on polycrystalline lead. The results are in good agreement with recent theoretical estimates. © 1968 The American Institute of Physics.
Phosphate acquisition efficiency and phosphate starvation tolerance ...
Indian Academy of Sciences (India)
Phosphate availability is a major factor limiting tillering, grain filling and, ultimately, the productivity of rice. Rice is often cultivated in soils, such as red and lateritic or acid soils, with low soluble phosphate content. To identify the genotypes best suited to these soil types, P acquisition efficiency was estimated for 108 genotypes.
Efficient simulation of a tandem Jackson network
Kroese, Dirk; Nicola, V.F.
2002-01-01
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds
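For intuition, the probability in question can be checked against the product-form result P(N2 >= L) = (λ/μ2)^L by crude Monte Carlo simulation of the continuous-time chain; this is the naive baseline the paper improves upon, not its efficient rare-event scheme, and all parameters are invented:

```python
import random

# Crude Monte Carlo for the two-node tandem Jackson network: estimate the
# steady-state probability that the second buffer holds at least L jobs.
# Analytic benchmark (Jackson product form): P(N2 >= L) = (lam/mu2)**L.

random.seed(1)
lam, mu1, mu2, L = 1.0, 2.0, 2.0, 4

n1 = n2 = 0
time_over = total_time = 0.0
for _ in range(300_000):
    rates = [lam, mu1 if n1 > 0 else 0.0, mu2 if n2 > 0 else 0.0]
    total = sum(rates)
    dt = random.expovariate(total)          # CTMC holding time
    total_time += dt
    if n2 >= L:
        time_over += dt                      # accumulate time spent above level L
    u = random.uniform(0, total)
    if u < rates[0]:
        n1 += 1                              # external arrival to queue 1
    elif u < rates[0] + rates[1]:
        n1 -= 1; n2 += 1                     # service at node 1, transfer to node 2
    else:
        n2 -= 1                              # departure from queue 2
p_est = time_over / total_time
```

With λ/μ2 = 0.5 and L = 4 the target value is 0.0625; for genuinely rare levels (large L) this naive estimator becomes useless, which is what motivates importance-sampling approaches.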
Efficiency of European dairy processing firms
Soboh, R.A.M.E.; Oude Lansink, A.; Dijk, van G.
2014-01-01
This paper compares the technical efficiency and production frontier of dairy processing cooperatives and investor owned firms in six major dairy producing European countries. Two parametric production frontiers are estimated, i.e. for cooperatives and investor owned firms separately, which are used
Moving Horizon Estimation and Control
DEFF Research Database (Denmark)
Jørgensen, John Bagterp
… as the corresponding sensitivity equations are discussed. Chapter 6 summarizes the main contribution of this thesis. It briefly discusses the pros and cons of using the extended linear quadratic control framework for solution of deterministic optimal control problems. Appendices. Appendix A demonstrates how quadratic … successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control. … An efficient structure-employing methodology for solution of the extended linear quadratic optimal control problem is provided, and it is discussed how this solution is employed in the solution of constrained model predictive control problems as well as in the solution of nonlinear optimal control and estimation …
Typology of efficiency of functioning of enterprise
Directory of Open Access Journals (Sweden)
I.I. Svitlyshyn
2015-03-01
Full Text Available The efficiency of agrarian-sector enterprises is traditionally measured and estimated using only some of its types, with the focus mainly on operating activity. Investment and financing activity, although inalienable parts of an enterprise's economic process, are thereby left out. In addition, both the scientific literature and practice concentrate on the stages «production–exchange», while the stages of «distribution» and «consumption» at the enterprise level are not examined. This distorts the results of measurement and estimation of efficiency and weakens proposals for its growth. Proceeding from this, an approach is developed to defining and systematizing the basic types of efficiency of agrarian-sector enterprises. The approach is based on a proposed model that systematically represents all stages and types of the enterprise's economic activity. The basic features of efficiency are interpreted at every stage and for each type of economic activity, which provides completeness and consistency in its measurement and estimation.
Feedback and efficient behavior.
Casal, Sandro; DellaValle, Nives; Mittone, Luigi; Soraperra, Ivan
2017-01-01
Feedback is an effective tool for promoting efficient behavior: it enhances individuals' awareness of choice consequences in complex settings. Our study aims to isolate the mechanisms underlying the effects of feedback on achieving efficient behavior in a controlled environment. We design a laboratory experiment in which individuals are not aware of the consequences of different alternatives and, thus, cannot easily identify the efficient ones. We introduce feedback as a mechanism to enhance the awareness of consequences and to stimulate exploration and search for efficient alternatives. We assess the efficacy of three different types of intervention: provision of social information, manipulation of the frequency, and framing of feedback. We find that feedback is most effective when it is framed in terms of losses, that it reduces efficiency when it includes information about inefficient peers' behavior, and that a lower frequency of feedback does not disrupt efficiency. By quantifying the effect of different types of feedback, our study suggests useful insights for policymakers.
Feed efficiency metrics in growing pigs.
Calderón Díaz, J A; Berry, D P; Rebeiz, N; Metzler-Zebeli, B U; Magowan, E; Gardiner, G E; Lawlor, P G
2017-07-01
The objective of the present study was to quantify the interrelationships between different feed efficiency measures in growing pigs and to characterize pigs divergent for a selection of these measures. The data set included data from 311 growing pigs between 42 and 91 d of age from 3 separate batches. Growth-related metrics available included midtest metabolic BW (BW), energy intake (EI), and ADG. Ratio efficiency traits included energy conversion ratio (ECR), Kleiber ratio (ADG/BW), relative growth rate (RGR), residual EI (REI), and residual daily gain (RDG). Residual intake and gain (RIG; i.e., a dual index of both REI and RDG) and residual midtest metabolic weight (RMW) were also calculated. Simple Pearson correlations were estimated between the growth and feed efficiency metrics. In litters with at least 3 pigs of each sex, pigs were separately stratified on each residual trait as high, medium, and low rank. Considerable interanimal variability existed in all metrics evaluated. Male pigs were superior to females for all metrics, and efficiency metrics improved as birth BW increased. Correlations among the efficiency metrics were strong to moderate. Low-REI pigs (i.e., more efficient) had lower EI and ECR and were superior for RIG; high-RDG pigs (i.e., more efficient) had greater BW gain and better ECR. Energy conversion ratio, REI, and RIG were superior in pigs ranked as more efficient for RMW compared with medium-RMW pigs. High-RIG pigs (i.e., more efficient) had lower EI. Correlations between the feed efficiency traits investigated in this study were different from unity, indicating that each trait depicts a different aspect of efficiency in pigs, although the moderate to strong correlations suggest that improvement in one trait would, on average, lead to improvements in the others. Pigs ranked as more efficient on residual traits such as REI consumed less energy for a similar BW gain, which would translate into an economic benefit for pig producers.
Efficient probability sequence
Regnier, Eva
2014-01-01
A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These results suggest tests for efficiency and ...
Roussillon, Beatrice; Schweinzer, Paul
2010-01-01
We propose a simple mechanism capable of achieving international agreement on the reduction of harmful emissions to their efficient level. It employs a contest creating incentives among participating nations to simultaneously exert efficient productive and efficient abatement efforts. Participation in the most stylised formulation of the scheme is voluntary and individually rational. All rules are mutually agreeable and are unanimously adopted if proposed. The scheme balances its budget and r...
Oil pipeline energy consumption and efficiency
Energy Technology Data Exchange (ETDEWEB)
Hooker, J.N.
1981-01-01
This report describes an investigation of energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and intensiveness estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.
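The engineering core of such a station-level calculation is small; a back-of-the-envelope sketch with invented numbers (hydraulic power = volumetric flow times pressure rise, scaled up by pump and driver efficiencies):

```python
# Illustrative pumping-energy calculation (all numbers invented, not the
# report's data): the energy exerted on the oil is hydraulic power over
# time; the energy consumed at the station divides by the efficiencies.

q = 0.8                 # volumetric flow, m^3/s
dp = 40e5               # pump pressure rise, Pa (40 bar)
eta_pump, eta_driver = 0.82, 0.93

p_hydraulic = q * dp                              # hydraulic power, W
p_input = p_hydraulic / (eta_pump * eta_driver)   # electrical/shaft input, W
annual_energy_mwh = p_input * 8760 / 1e6          # Wh over a year -> MWh
```

Summing such terms over every pumping station on a segment, weighted by actual throughput, is essentially what the report's network simulation does at scale.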
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods …
Estimating state-contingent production functions
DEFF Research Database (Denmark)
Rasmussen, Svend; Karantininis, Kostas
The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated based on different sizes of samples using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may be useful, but that further analysis is needed to evaluate the efficiency of this estimation method compared to traditional methods.
Energy efficiency governance an emerging priority
Energy Technology Data Exchange (ETDEWEB)
Jollands, Nigel (Energy Efficiency and Environment Div., International Energy Agency, Paris (France)); Ellis, Mark (Mark Ellis and Associates, Wagstaffe, NSW (Australia))
2009-07-01
End-use energy efficiency is widely accepted as providing least-cost solutions to greenhouse gas mitigation and energy supply. However, maximising this potential resource is proving difficult, even elusive. Despite widespread energy efficiency policies covering many sectors, most evaluations show that we are falling well short of the potential level of energy efficiency. One reason for this is that estimates of potential tend to cover the whole of an economy or large sectors, whereas policy measures tend to be targeted towards individual, smaller parts of the economy. The authors contend that the energy efficiency potential within an economy will not be maximised without understanding the complete governance framework: the central mechanism for marshalling drivers within the public and private sectors of an economy and for ensuring that it is aligned towards energy efficiency. This paper explores the rationale for focusing on energy efficiency governance. In doing so, we attempt to define energy efficiency governance and present a conceptual framework for the better understanding of the issues involved. After reviewing the available literature in this field we present a description of the IEA governance programme of work that aims to assist countries to establish the most effective energy efficiency institutional structures at national and local levels.
Landscaping for energy efficiency
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-04-01
This publication by the National Renewable Energy Laboratory addresses the use of landscaping for energy efficiency. The topics of the publication include minimizing energy expenses; landscaping for a cleaner environment; climate, site, and design considerations; planning landscape; and selecting and planting trees and shrubs. A source list for more information on landscaping for energy efficiency and a reading list are included.
Institutions, Equilibria and Efficiency
DEFF Research Database (Denmark)
Competition and efficiency are at the core of economic theory. This volume collects papers by leading scholars, which extend the conventional general equilibrium model in important ways. Efficiency and price regulation are studied when markets are incomplete and existence of equilibria...
DEFF Research Database (Denmark)
Zhang, Tian
2015-01-01
approach to make photosynthesis more efficient is to build hybrid systems that combine inorganic and microbial components to produce specific chemicals. Such hybrid bioinorganic systems lead to improved efficiency and specificity and do not require processed vegetable biomass. They thus prevent harmful...
Energy Efficiency Collaboratives
Energy Technology Data Exchange (ETDEWEB)
Li, Michael [US Department of Energy, Washington, DC (United States); Bryson, Joe [US Environmental Protection Agency, Washington, DC (United States)
2015-09-01
Collaboratives for energy efficiency have a long and successful history and are currently used, in some form, in more than half of the states. Historically, many state utility commissions have used some form of collaborative group process to resolve complex issues that emerge during a rate proceeding. Rather than debate the issues through the formality of a commission proceeding, disagreeing parties are sent to discuss issues in a less-formal setting and bring back resolutions to the commission. Energy efficiency collaboratives take this concept and apply it specifically to energy efficiency programs—often in anticipation of future issues as opposed to reacting to a present disagreement. Energy efficiency collaboratives can operate long term and can address the full suite of issues associated with designing, implementing, and improving energy efficiency programs. Collaboratives can be useful to gather stakeholder input on changing program budgets and program changes in response to performance or market shifts, as well as to provide continuity while regulators come and go, identify additional energy efficiency opportunities and innovations, assess the role of energy efficiency in new regulatory contexts, and draw on lessons learned and best practices from a diverse group. Details about specific collaboratives in the United States are in the appendix to this guide. Collectively, they demonstrate the value of collaborative stakeholder processes in producing successful energy efficiency programs.
Donckers, L.; Smit, Gerardus Johannes Maria; Havinga, Paul J.M.; Smit, L.T.
This paper describes the design of an energy-efficient transport protocol for mobile wireless communication. First we describe the metrics used to measure the energy efficiency of transport protocols. We identify several problem areas that prevent TCP/IP from reaching high levels of energy
Oenema, O.
2015-01-01
There is a need for communications about resource use efficiency and for measures to increase the use efficiency of nutrients in relation to food production. This holds especially for nitrogen. Nitrogen (N) is essential for life and a main nutrient element. It is needed in relatively large
Blind Reverberation Time Estimation Based on Laplace Distribution
Jan, Tariqullah; Wang, Wenwu
2012-01-01
We propose an algorithm for the estimation of reverberation time (RT) from the reverberant speech signal by using a maximum likelihood (ML) estimator. Based on the analysis of an existing RT estimation method, which models the reverberation decay as a Gaussian random process modulated by a deterministic envelope, a Laplacian distribution based decay model is proposed in which an efficient procedure for locating free decay from reverberant speech is also incorporated. Then the RT is estimated ...
Optimal fault signal estimation
Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.
2002-01-01
We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By
New approaches to estimation of magnetotelluric parameters
Energy Technology Data Exchange (ETDEWEB)
Egbert, G.D.
1991-01-01
Fully efficient robust data processing procedures were developed and tested for single-station and remote-reference magnetotelluric (MT) data. Substantial progress was made on development, testing and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combined the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1–0.0 Hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters, and for testing hypotheses about the underlying dimensionality of the geological section.
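A toy scalar version of the regression M-estimate idea follows (real MT processing is complex-valued and multivariate; the data and tuning constants below are illustrative): Huber weights, applied by iteratively reweighted least squares, pull the fit toward the inlier trend despite a gross outlier.

```python
# Robust straight-line fit via IRLS with Huber weights (illustrative only).
# The last observation is a gross outlier that would badly bias plain LS.

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0.1, 1.0, 2.1, 2.9, 4.2, 4.9, 6.1, 30.0]

def weighted_fit(x, y, w):
    """Weighted least-squares slope and intercept."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return b, my - b * mx

b, a = weighted_fit(x, y, [1.0] * len(x))        # ordinary LS start
for _ in range(20):                              # IRLS iterations
    r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = sorted(abs(ri) for ri in r)[len(r) // 2] / 0.6745 or 1.0  # MAD scale
    k = 1.345 * s                                # Huber tuning constant
    w = [1.0 if abs(ri) <= k else k / abs(ri) for ri in r]
    b, a = weighted_fit(x, y, w)
```

The outlier's weight shrinks at each iteration, so the final slope stays close to the inlier trend of roughly 1.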
Nonparametric Collective Spectral Density Estimation and Clustering
Maadooliat, Mehdi
2017-04-12
In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
On Frequency Domain Models for TDOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll
2015-01-01
of a much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number...
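The classical cross-correlation TDOA estimator referred to above can be sketched for an integer sample delay (toy data; a practical implementation would use the FFT rather than this direct sum):

```python
import random

# Minimal cross-correlation time-delay estimate: slide one channel against
# the other and pick the lag that maximizes the correlation sum.

random.seed(7)
n, true_delay = 512, 37
src = [random.gauss(0, 1) for _ in range(n)]     # white source signal
mic1 = src
mic2 = [0.0] * true_delay + src[:n - true_delay] # delayed copy at mic 2

def xcorr_delay(a, b, max_lag):
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag):
        v = sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

est = xcorr_delay(mic1, mic2, 64)
```

For a white source the correlation peaks sharply at the true lag; the abstract's point is that this familiar estimator is a special case of a more general, statistically grounded method.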
Comparative analysis of groundwater recharge estimation Value ...
African Journals Online (AJOL)
Estimation of natural groundwater recharge is a pre-requisite for efficient groundwater resource management especially in regions with large demands for groundwater supplies, where such resources are the key to economic development. Groundwater recharge, by whatever method, is normally subjected to large ...
VERTICAL ACTIVITY ESTIMATION USING 2D RADAR
African Journals Online (AJOL)
[1] D. E. Manolakis, "Efficient solution and performance analysis of 3-D position estimation by trilateration," IEEE Transactions on Aerospace and Electronic Systems, 32(4):1239–1248, October 1996. [2] D. E. Manolakis, "Aircraft vertical profile prediction based on surveillance data only," IEE Proceedings on Radar, ...
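Reference 1 concerns position estimation by trilateration; a minimal 2-D version (anchors and target invented) makes the standard linearization explicit: subtracting the first range equation from the others leaves a linear system in (x, y).

```python
import math

# 2-D trilateration sketch: three anchors, noise-free ranges.
# r_i^2 = (x - x_i)^2 + (y - y_i)^2; differencing against anchor 0
# cancels the quadratic terms and yields a 2x2 linear system.

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
r = [math.dist(a, target) for a in anchors]        # "measured" ranges

(x0, y0), (x1, y1), (x2, y2) = anchors
a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
c1 = r[0]**2 - r[1]**2 + x1**2 + y1**2 - x0**2 - y0**2
c2 = r[0]**2 - r[2]**2 + x2**2 + y2**2 - x0**2 - y0**2
det = a11 * a22 - a12 * a21                        # Cramer's rule
x = (c1 * a22 - c2 * a12) / det
y = (a11 * c2 - a21 * c1) / det
```

With noisy ranges and more than three anchors the same differenced equations are solved by least squares, which is the setting analyzed in the reference.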
Fast, Continuous Audiogram Estimation using Machine Learning
Song, Xinyu D.; Wallace, Brittany M.; Gardner, Jacob R.; Ledbetter, Noah M.; Weinberger, Kilian Q.; Barbour, Dennis L.
2016-01-01
Objectives Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study is to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and 1 repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). Results The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably to those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. Conclusions The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.
Learning efficient correlated equilibria
Borowski, Holly P.
2014-12-15
The majority of distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this paper, we provide one such algorithm which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning involves incorporating a common random signal into the learning environment.
The Efficient Windows Collaborative
Energy Technology Data Exchange (ETDEWEB)
Petermann, Nils
2006-03-31
The Efficient Windows Collaborative (EWC) is a coalition of manufacturers, component suppliers, government agencies, research institutions, and others who partner to expand the market for energy efficient window products. Funded through a cooperative agreement with the U.S. Department of Energy, the EWC provides education, communication and outreach in order to transform the residential window market to 70% energy efficient products by 2005. Implementation of the EWC is managed by the Alliance to Save Energy, with support from the University of Minnesota and Lawrence Berkeley National Laboratory.
Fractal stock markets: International evidence of dynamical (in)efficiency
Bianchi, Sergio; Frezza, Massimiliano
2017-07-01
The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with empirical content have turned out to be somewhat inconclusive and misleading. We propose a topology-based approach to quantify the informational efficiency of a financial time series. The idea is to measure efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
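The paper's pointwise-regularity machinery is not reproduced here; a global Hurst-type sketch based on the scaling Var[X(t+m) - X(t)] ~ m^(2H) conveys the idea that a martingale-like (efficient) series has regularity near 1/2 (seeded toy data, not market prices):

```python
import math, random

# Hurst-type regularity estimate from the scaling of increment variances
# at two aggregation scales. For an efficient, martingale-like series the
# estimate should sit near 1/2; persistent or antipersistent series
# deviate above or below it.

random.seed(3)
steps = [random.choice((-1.0, 1.0)) for _ in range(4096)]
x = [0.0]
for s in steps:
    x.append(x[-1] + s)                      # simple random walk, H = 1/2

def inc_var(x, m):
    d = [x[i + m] - x[i] for i in range(len(x) - m)]
    mu = sum(d) / len(d)
    return sum((di - mu) ** 2 for di in d) / len(d)

h = 0.5 * math.log2(inc_var(x, 16) / inc_var(x, 8))
```

The paper's approach is local rather than global: it tracks how such a regularity exponent varies point by point through time, flagging windows where the market departs from the martingale benchmark.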
Meneghelli, Barry J.; Notardonato, William; Fesmire, James E.
2016-01-01
The Cryogenics Test Laboratory, NASA Kennedy Space Center, works to provide practical solutions to low-temperature problems while focusing on long-term technology targets for the energy-efficient use of cryogenics on Earth and in space.
Directory of Open Access Journals (Sweden)
Branka Gvozdenac-Urošević
2010-01-01
Full Text Available Improving energy efficiency can be a powerful tool for achieving sustainable economic development, and is most important for reducing energy consumption and environmental pollution at the national level. Unfortunately, energy efficiency is difficult to conceptualize and there is no single commonly accepted definition. Because of that, measurement of achieved energy efficiency and its impact on the national or regional economy is very complicated. Gross Domestic Product (GDP) is often used to assess the financial effects of applied energy efficiency measures at the national and regional levels. Growth in energy consumption per capita leads to similar growth in GDP, but it is desirable to bring these values down. The paper analyzes some standard indicators, and the analysis has been applied to a very large sample, ensuring reliable conclusions. National parameters for 128 countries in the world in 2007 were analyzed. In addition, parameters for global regions and for Serbia were analyzed for recent years.
Energy Technology Data Exchange (ETDEWEB)
NONE
2010-07-01
Transport is the sector with the highest final energy consumption and, without any significant policy changes, is forecast to remain so. In 2008, the IEA published 25 energy efficiency recommendations, among which four are for the transport sector. The recommendations focus on road transport and include policies on improving tyre energy efficiency, fuel economy standards for both light-duty vehicles and heavy-duty vehicles, and eco-driving. Implementation of the recommendations has been weaker in the transport sector than in others. This paper updates the progress that has been made in implementing the transport energy efficiency recommendations in IEA countries since March 2009. Many countries have in the last year moved from 'planning to implement' to 'implementation underway', but none have fully implemented all transport energy efficiency recommendations. The IEA therefore calls for full and immediate implementation of the recommendations.
Institutions, Equilibria and Efficiency
DEFF Research Database (Denmark)
Competition and efficiency is at the core of economic theory. This volume collects papers of leading scholars, which extend the conventional general equilibrium model in important ways. Efficiency and price regulation are studied when markets are incomplete, and existence of equilibria in such settings is proven under very general preference assumptions. The model is extended to include geographical location choice, a commodity space incorporating manufacturing imprecision, and preferences for club membership, schools and firms. Inefficiencies arising from household externalities or group ... in OLG, learning in OLG and in games, optimal pricing of derivative securities, the impact of heterogeneity ...
Efficient Vocabulary Testing Techniques
Directory of Open Access Journals (Sweden)
L. V. Mykhailiuk
2016-12-01
Full Text Available The article deals with the problem of teaching vocabulary. Different aspects of vocabulary (pronunciation, spelling, grammar, collocation, meaning, word formation) are considered together with efficient vocabulary testing techniques.
Efficient probability sequences
Regnier, Eva
2014-01-01
DRMI working paper. A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These res...
Efficient incremental relaying
Fareed, Muhammad Mehboob
2013-07-01
We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify-and-forward and decode-and-forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.
Centrifugal Contactor Efficiency Measurements
Energy Technology Data Exchange (ETDEWEB)
Mincher, Bruce Jay [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tillotson, Richard Dean [Idaho National Lab. (INL), Idaho Falls, ID (United States); Grimes, Travis Shane [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-01-01
The contactor efficiency of a 2-cm acrylic centrifugal contactor, fabricated by ANL using 3D-printer technology, was measured by comparing a contactor test run to 5-min batch contacts. The aqueous phase was ~3 ppm depleted uranium in 3 M HNO3, and the organic phase was 1 M DAAP/dodecane. Sampling during the contactor run showed that equilibrium was achieved within <3 minutes. The contactor efficiency at equilibrium was 95% to 100%, depending on flowrate.
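One common way to express such a comparison (assumed here; the report's exact working is not quoted) is as the ratio of the distribution ratio reached in the running contactor to the batch-equilibrium distribution ratio:

```python
def distribution_ratio(c_org, c_aq):
    """Distribution ratio D = organic-phase / aqueous-phase concentration."""
    return c_org / c_aq

def stage_efficiency(d_contactor, d_batch):
    """Stage efficiency as the ratio of the distribution ratio achieved in
    the running contactor to the equilibrium (batch-contact) value.
    A common definition; assumed, not quoted from the report."""
    return d_contactor / d_batch

# Illustrative uranium concentrations (ppm), not the report's data
d_eq = distribution_ratio(2.40, 0.60)    # 5-min batch contact
d_run = distribution_ratio(2.35, 0.61)   # sample taken during contactor run
print(f"stage efficiency: {stage_efficiency(d_run, d_eq):.0%}")
```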
Prachýl, Lukáš
2010-01-01
The thesis describes and analyzes shared services organizations as a management tool to achieve efficiency in organizations' processes. The paper builds on established theoretical principles, enhances them with up-to-date insights on the current situation and development, and creates a valuable knowledge base on shared services organizations. Strong emphasis is put on concrete means by which efficiency can be achieved. Major relevant topics such as reasons for shared services, people man...
Anaerobic energy expenditure and mechanical efficiency during exhaustive leg press exercise
DEFF Research Database (Denmark)
Gorostiaga, Esteban M.; Navarro-Amézqueta, Ion; Cusso, Roser
2010-01-01
Information about anaerobic energy production and mechanical efficiency that occurs over time during short-lasting maximal exercise is scarce and controversial. Bilateral leg press is an interesting muscle contraction model to estimate anaerobic energy production and mechanical efficiency during ...
Feedback and efficient behavior.
Directory of Open Access Journals (Sweden)
Sandro Casal
Full Text Available Feedback is an effective tool for promoting efficient behavior: it enhances individuals' awareness of choice consequences in complex settings. Our study aims to isolate the mechanisms underlying the effects of feedback on achieving efficient behavior in a controlled environment. We design a laboratory experiment in which individuals are not aware of the consequences of different alternatives and, thus, cannot easily identify the efficient ones. We introduce feedback as a mechanism to enhance the awareness of consequences and to stimulate exploration and search for efficient alternatives. We assess the efficacy of three different types of intervention: provision of social information, manipulation of the frequency, and framing of feedback. We find that feedback is most effective when it is framed in terms of losses, that it reduces efficiency when it includes information about inefficient peers' behavior, and that a lower frequency of feedback does not disrupt efficiency. By quantifying the effect of different types of feedback, our study suggests useful insights for policymakers.
Efficient Windows Collaborative
Energy Technology Data Exchange (ETDEWEB)
Nils Petermann
2010-02-28
The project goals covered both the residential and commercial windows markets and involved a range of audiences such as window manufacturers, builders, homeowners, design professionals, utilities, and public agencies. Essential goals included: (1) Creation of 'Master Toolkits' of information that integrate diverse tools, rating systems, and incentive programs, customized for key audiences such as window manufacturers, design professionals, and utility programs. (2) Delivery of education and outreach programs to multiple audiences through conference presentations, publication of articles for builders and other industry professionals, and targeted dissemination of efficient window curricula to professionals and students. (3) Design and implementation of mechanisms to encourage and track sales of more efficient products through the existing Window Products Database as an incentive for manufacturers to improve products and participate in programs such as NFRC and ENERGY STAR. (4) Development of utility incentive programs to promote more efficient residential and commercial windows. Partnership with regional and local entities on the development of programs and customized information to move the market toward the highest performing products. An overarching project goal was to ensure that different audiences adopt and use the developed information, design and promotion tools and thus increase the market penetration of energy efficient fenestration products. In particular, a crucial success criterion was to move gas and electric utilities to increase the promotion of energy efficient windows through demand side management programs as an important step toward increasing the market share of energy efficient windows.
Analysis of factors affecting the technical efficiency of cocoa ...
African Journals Online (AJOL)
The study estimated the technical efficiency of cocoa producers and the socioeconomic factors influencing technical efficiency and identified the constraints to cocoa production. A multi-stage random sampling method was used to select 180 cocoa farmers who were interviewed for the study. Data on the inputs used and ...
Efficiency of the Primary and Secondary Schools in Sweden.
Heshmati, Almas; Kumbhakar, Subal C.
1997-01-01
The efficiency of 286 Swedish municipalities in the provision of elementary and secondary school education in 1993-94 was studied through stochastic frontier analysis, with estimation of production- and cost-function models. Empirical results show that most municipalities operate at 85 to 100% efficiency. (SLD)
Measurement of dynamic efficiency: a directional distance function parametric approach
Serra, T.; Oude Lansink, A.G.J.M.; Stefanou, S.E.
2011-01-01
This research proposes a parametric estimation of the structural dynamic efficiency measures proposed by Silva and Oude Lansink (2009). Overall, technical and allocative efficiency measurements are derived based on a directional distance function and the duality between this function and the optimal
Methods of multicriterion estimations in system total quality management
Directory of Open Access Journals (Sweden)
Nikolay V. Diligenskiy
2011-05-01
Full Text Available In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis) and the possibility of its application in a system of total quality management is considered.
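For readers unfamiliar with DEA, the input-oriented CCR model reduces to one small linear program per decision-making unit (DMU). A minimal sketch with toy data (not from the article) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (illustrative, not from the article): one input, one output
X = np.array([[2.0], [4.0], [3.0]])   # inputs, one row per DMU
Y = np.array([[2.0], [2.0], [3.0]])   # outputs

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`:
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                     sum_j lam_j y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    # input rows:  sum_j lam_j x_{j,i} - theta x_{o,i} <= 0
    A1 = np.c_[-X[o].reshape(m, 1), X.T]
    # output rows: -sum_j lam_j y_{j,r} <= -y_{o,r}
    A2 = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

for o in range(3):
    print(f"DMU {o}: efficiency = {dea_ccr_input(X, Y, o):.3f}")
```

Here the second DMU uses twice the input of the first for the same output, so its score is 0.5, while the units on the frontier score 1.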
Hydraulic efficiency of a Rushton turbine impeller
Chara, Z.; Kysela, B.; Fort, I.
2017-07-01
Based on CFD simulations hydraulic efficiency of a standard Rushton turbine impeller in a baffled tank was determined at a Reynolds number of ReM=33330. Instantaneous values of pressure and velocity components were used to draw up the macroscopic balance of the mechanical energy. It was shown that the hydraulic efficiency of the Rushton turbine impeller (energy dissipated in a bulk volume) is about 57%. Using this result we estimated a length scale in a non-dimensional equation of kinetic energy dissipation rate in the bulk volume as L=D/2.62.
Production and efficiency analysis with R
Behr, Andreas
2015-01-01
This textbook introduces essential topics and techniques in production and efficiency analysis and shows how to apply these methods using the statistical software R. Numerous small simulations lead to a deeper understanding of random processes assumed in the models and of the behavior of estimation techniques. Step-by-step programming provides an understanding of advanced approaches such as stochastic frontier analysis and stochastic data envelopment analysis. The text is intended for master students interested in empirical production and efficiency analysis. Readers are assumed to have a general background in production economics and econometrics, typically taught in introductory microeconomics and econometrics courses.
Wang, Ming; Kong, Lan; Li, Zheng; Zhang, Lijun
2016-05-10
Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression parameter coefficients is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve the efficiency, several modified variance estimators have been proposed for bias-correction or efficiency improvement. In this paper, we provide a comprehensive review on recent developments of modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and the guideline on appropriate sample sizes for each estimator is provided for preserving type I error in general cases based on numerical results. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public usage in practice. Copyright © 2015 John Wiley & Sons, Ltd.
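For the identity-link, independence-working-correlation case, the robust "sandwich" covariance discussed above can be written down in a few lines. The following sketch uses simulated clustered data (illustrative only; it shows the basic bread-meat-bread form, not the paper's modified small-sample estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated longitudinal data (illustrative): 50 clusters, 4 visits each
n_clusters, t = 50, 4
X = np.column_stack([np.ones(n_clusters * t),
                     rng.normal(size=n_clusters * t)])
beta_true = np.array([1.0, 2.0])
cluster = np.repeat(np.arange(n_clusters), t)
# within-cluster correlated errors: shared random intercept + noise
eps = rng.normal(size=n_clusters * t) + np.repeat(rng.normal(size=n_clusters), t)
y = X @ beta_true + eps

# GEE with identity link and independence working correlation reduces to OLS
bread_inv = np.linalg.inv(X.T @ X)
beta_hat = bread_inv @ X.T @ y
resid = y - X @ beta_hat

# Robust "sandwich" covariance: bread * meat * bread, meat summed per cluster
meat = np.zeros((2, 2))
for g in range(n_clusters):
    idx = cluster == g
    s = X[idx].T @ resid[idx]
    meat += np.outer(s, s)
V_sandwich = bread_inv @ meat @ bread_inv
print("robust SEs:", np.sqrt(np.diag(V_sandwich)))
```

The paper's point is that with few clusters this estimator is biased downward, motivating the corrected versions collected in the "geesmv" package.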
Benchmarking the production of audit services: an efficiency frontier approach
Schelleman, C.C.M.; Maijoor, S.J.
2000-01-01
To compete effectively in an increasingly competitive audit market, audit firms need information on the efficiency of the audit services they offer. This study reports on the cost and labor efficiency of a sample of 114 audit engagements conducted by one of the (then) Big 6 audit firms. Estimating the efficiency of audit engagements is a form of benchmarking, of which economics-oriented research has seen many applications. The application to auditing, however, is, as far as we know, relatively ...
Efficiency and technical change in the Western Australian wheatbelt
Cattle, Nathan; White, Benedict
2007-01-01
The production performance of wheatbelt farms in Western Australia is analysed to determine whether potential to exploit scale economies and improve technical efficiency has driven the trend towards increased farm size. An input-orientated stochastic frontier model is used to estimate technical efficiency and scale economies using an unbalanced panel dataset provided by BankWest for the period 1995/1996 to 2005/2006. Differences in the relative efficiency of farms are explored by the simultan...
Efficiency of Infrastructure : The Case of Container Ports
Pang, Gaobo; Herrera, Santiago
2008-01-01
This paper gauges efficiency in container ports. Using non-parametric methods, we estimate efficiency frontiers based on information from 86 ports across the world. Three attractive features of the method are: 1) it is based on an aggregated measure of efficiency despite the existence of multiple inputs; 2) it does not assume particular input-output functional relationships; and 3) it does not rely on a priori peer selection to construct the benchmark. Results show that the most inefficient p...
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Efficient Simulation of the Outage Probability of Multihop Systems
Ben Issaid, Chaouki
2017-10-23
In this paper, we present an efficient importance sampling estimator for the evaluation of the outage probability of multihop amplify-and-forward systems with channel-state-information-assisted relaying. The proposed estimator is endowed with the bounded relative error property. Simulation results show a significant reduction in the number of simulation runs compared to naive Monte Carlo.
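The principle behind such importance sampling estimators can be shown with a deliberately simple stand-in: the outage probability of a single Rayleigh-fading link, where a proposal supported entirely on the outage region makes every sample informative. This is an assumed toy setup, not the paper's multihop estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Outage of a single Rayleigh-fading link: normalized SNR ~ Exp(1),
# outage = P(SNR < a). Illustrative stand-in for the paper's multihop setup.
a = 1e-3                     # outage threshold (rare event)
exact = 1.0 - np.exp(-a)

# Importance sampling: draw from Uniform(0, a), which covers exactly the
# outage region, and reweight by f(x)/g(x) = exp(-x) / (1/a).
n = 10_000
x = rng.uniform(0.0, a, size=n)
weights = a * np.exp(-x)     # outage indicator is always 1 under this proposal
est = weights.mean()
rel_err = abs(est - exact) / exact
print(f"IS estimate: {est:.3e}, exact: {exact:.3e}, rel. error: {rel_err:.1e}")
```

Naive Monte Carlo with 10,000 samples would typically see only about ten outage events here; under the proposal every sample lands in the outage region, which is the kind of run-count reduction the abstract reports.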
Sabuj Kumar Mandal; S Madheswaran
2009-01-01
The present paper aims at measuring energy use efficiency in the Indian cement industry and estimating the factors explaining inter-firm variations in energy use efficiency. Within the framework of production theory, Data Envelopment Analysis (DEA) and the directional distance function (DDF) have been used to measure energy use efficiency. Using data from the electronic CMIE PROWESS database for the years 1989-90 through 2006-07, the study first estimates energy efficiency and then compares the energy e...
Instrumental Variable Estimation with Heteroskedasticity and Many Instruments
Jerry A. Hausman; Newey, Whitney K.; Woutersen, Tiemen; Chao, John; Swanson, Norman
2009-01-01
This paper gives a relatively simple, well-behaved solution to the problem of many instruments in heteroskedastic data. Such settings are common in microeconometric applications where many instruments are used to improve efficiency and allowance for heteroskedasticity is generally important. The solution is a Fuller (1977)-like estimator and standard errors that are robust to heteroskedasticity and many instruments. We show that the estimator has finite moments and high asymptotic efficiency ...
Galbraith, Craig S.; Merrill, Gregory B.
2015-01-01
We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…
Supernovae Discovery Efficiency
John, Colin
2018-01-01
Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search, and is an important parameter as a correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved from real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, built with realism in mind, yield an understanding of efficiency based on position relative to other celestial objects and on brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. The next improvement to realism is planting these objects close to galaxies and with various parameters of brightness, magnitude, local galactic brightness and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is very important to find the factors that affect this discovery efficiency; exploring them yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. After efficiency is discovered and refined with many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models to determine how long star systems take from the point of inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.
Measuring cardiac efficiency using PET/MRI
Energy Technology Data Exchange (ETDEWEB)
Gullberg, Grand [Lawrence Berkeley National Laboratory (United States); Aparici, Carina Mari; Brooks, Gabriel [University of California San Francisco (United States); Liu, Jing; Guccione, Julius; Saloner, David; Seo, Adam Youngho; Ordovas, Karen Gomes [Lawrence Berkeley National Laboratory (United States)
2015-05-18
Heart failure (HF) is a complex syndrome that is projected by the American Heart Association to cost $160 billion by 2030. In HF, significant metabolic changes and structural remodeling lead to reduced cardiac efficiency. A normal heart is approximately 20-25% efficient, measured as the ratio of work to oxygen utilization (1 ml oxygen = 21 joules). The heart requires rapid production of ATP, with complete turnover of ATP every 10 seconds and 90% of ATP produced by mitochondrial oxidative metabolism from substrates of approximately 30% glucose and 65% fatty acids. In our preclinical PET/MRI studies in normal rats, we showed a negative correlation between work and the influx rate constant for 18FDG, confirming that glucose is not the preferred substrate at rest. However, even though fatty acid provides 9 kcal/gram compared to 4 kcal/gram for glucose, in HF the preferred energy source is glucose. PET/MRI offers the potential to study this maladaptive metabolism by measuring work in a region of myocardial tissue simultaneously with measures of oxygen utilization and of glucose and fatty acid metabolism, and to study cardiac efficiency in the etiology of, and therapies for, HF. MRI is used to measure strain, and a finite element mechanical model using pressure measurements is used to estimate myofiber stress. The integral of strain times stress provides a measure of work which, divided by energy utilization, estimated by the production of 11CO2 from intravenous injection of 11C-acetate, provides a measure of cardiac efficiency. Our project involves translating our preclinical research to the clinical application of measuring cardiac efficiency in patients. Using PET/MRI to develop technologies for studying myocardial efficiency in patients provides an opportunity to relate the cardiac work of specific tissue regions to metabolic substrates, and to measure the heterogeneity of LV efficiency.
Henningsson, Per; Bomphrey, Richard J
2013-07-06
Flight in animals is the result of aerodynamic forces generated as flight muscles drive the wings through air. Aerial performance is therefore limited by the efficiency with which momentum is imparted to the air, a property that can be measured using modern techniques. We measured the induced flow fields around six hawkmoth species flying tethered in a wind tunnel to assess span efficiency, ei, and from these measurements, determined the morphological and kinematic characters that predict efficient flight. The species were selected to represent a range in wingspan from 40 to 110 mm (2.75 times) and in mass from 0.2 to 1.5 g (7.5 times) but they were similar in their overall shape and their ecology. From high spatio-temporal resolution quantitative wake images, we extracted time-resolved downwash distributions behind the hawkmoths, calculating instantaneous values of ei throughout the wingbeat cycle as well as multi-wingbeat averages. Span efficiency correlated positively with normalized lift and negatively with advance ratio. Average span efficiencies for the moths ranged from 0.31 to 0.60 showing that the standard generic value of 0.83 used in previous studies of animal flight is not a suitable approximation of aerodynamic performance in insects.
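Span efficiency from a measured downwash distribution is often computed as the squared mean downwash divided by the mean squared downwash, which equals 1 for the aerodynamically ideal uniform distribution and falls as the distribution becomes uneven. A minimal sketch of this form (an assumed simplification of the paper's wake-based method):

```python
import numpy as np

def span_efficiency(w):
    """ei = (mean downwash)^2 / mean(downwash^2): 1 for uniform downwash,
    lower for uneven distributions. (Standard induced-power form; assumed,
    simplified relative to the paper's time-resolved wake measurements.)"""
    w = np.asarray(w, dtype=float)
    return w.mean() ** 2 / (w ** 2).mean()

y = np.linspace(-1.0, 1.0, 401)              # normalized spanwise position
print(span_efficiency(np.ones_like(y)))      # uniform downwash -> 1.0
print(span_efficiency(np.sqrt(1 - y ** 2)))  # half-ellipse profile -> ~0.93
```

Values such as the moths' 0.31 to 0.60 correspond to downwash distributions far less even than either of these idealized profiles.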
Energy Technology Data Exchange (ETDEWEB)
Kaya, Durmus; Yagmur, E. Alptekin [TUBITAK-MRC, P.O. Box 21, 41470 Gebze, Kocaeli (Turkey); Yigit, K. Suleyman; Eren, A. Salih; Celik, Cenk [Engineering Faculty, Kocaeli University, Kocaeli (Turkey); Kilic, Fatma Canka [Department of Air Conditioning and Refrigeration, Kocaeli University, Kullar, Kocaeli (Turkey)
2008-06-15
In this paper, 'energy efficiency' studies carried out on a large industrial facility's pumps are reported. For this purpose, the flow rate, pressure and temperature were measured for each pump under different operating conditions and at maximum load. In addition, the electrical power drawn by the electric motor was measured. The efficiencies of the existing pumps and electric motors were calculated from the measured data. Potential energy saving opportunities were studied by taking into account the results of the calculations for each pump and electric motor. In conclusion, improvements should be made to each system. The required investment costs for these improvements have been determined, and simple payback periods have been calculated. The main energy saving opportunities result from: replacement of the existing low-efficiency pumps, maintenance of pumps whose efficiencies have started to decline, replacement of oversized electric motors with motors of suitable power, usage of high-efficiency electric motors, and elimination of cavitation problems. (author)
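Pump efficiencies of this kind follow from hydraulic power (rho · g · Q · H) divided by measured electrical power. A small sketch with illustrative numbers (not the paper's measurements; the motor efficiency used to separate pump losses from motor losses is an assumption):

```python
RHO_WATER = 998.0   # kg/m^3 at ~20 degrees C
G = 9.81            # m/s^2

def pump_efficiency(flow_m3_h, head_m, electric_kw, motor_eff=0.93):
    """Overall (wire-to-water) and pump-only efficiency from field
    measurements. Hydraulic power P_h = rho * g * Q * H. The motor
    efficiency is an assumed value, not a measurement."""
    q = flow_m3_h / 3600.0                        # convert to m^3/s
    p_hydraulic_kw = RHO_WATER * G * q * head_m / 1000.0
    overall = p_hydraulic_kw / electric_kw        # includes motor losses
    pump_only = overall / motor_eff               # strip assumed motor losses
    return overall, pump_only

# Illustrative operating point, not data from the paper
overall, pump = pump_efficiency(flow_m3_h=180.0, head_m=40.0, electric_kw=30.0)
print(f"wire-to-water: {overall:.1%}, pump alone: {pump:.1%}")
```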
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence...
Heemstra, F.J.; Heemstra, F.J.
1993-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
DEFF Research Database (Denmark)
Bollerslev, Tim; Todorov, Victor
We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes...
Anderson, John B
2017-01-01
Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
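The Fourier-domain linear solve at the heart of this approach is easiest to see in the single-filter case, where the system becomes diagonal in frequency. The sketch below (assumptions: one filter, circular boundary conditions, a toy-sized problem) checks the FFT solution against the dense normal equations it replaces:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)
N = 128
d = np.zeros(N)
d[:8] = rng.normal(size=8)     # short filter, zero-padded to signal length
s = rng.normal(size=N)         # signal
z = rng.normal(size=N)         # ADMM auxiliary variable
rho = 1.0                      # ADMM penalty parameter

# The expensive ADMM step for (single-filter) convolutional sparse coding:
#   x = argmin_x ||d (*) x - s||^2 + rho * ||x - z||^2
# solved diagonally in the Fourier domain in O(N log N):
D, S, Z = np.fft.fft(d), np.fft.fft(s), np.fft.fft(z)
x_fft = np.real(np.fft.ifft((np.conj(D) * S + rho * Z)
                            / (np.abs(D) ** 2 + rho)))

# Reference: dense normal equations with the circulant convolution matrix,
# a cubic-cost solve -- this is the saving the abstract refers to.
C = circulant(d)               # C @ x is circular convolution d (*) x
x_dense = np.linalg.solve(C.T @ C + rho * np.eye(N), C.T @ s + rho * z)
print("max difference:", np.max(np.abs(x_fft - x_dense)))
```

With M filters the per-frequency problem becomes a small M-by-M system rather than a scalar division, which is where the O(M^3 N) versus O(M N log N) comparison comes from.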
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new estimator automatically compensates for the axial velocity when determining the transverse velocity, by using fourth-order moments rather than second-order moments. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by using averaging of RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce spatial velocity dispersion.
Transverse Spectral Velocity Estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2014-01-01
array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile flow using the Womersley–Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer. A pump generated artificial femoral and carotid artery flow in the phantom. The estimated spectra degrade when the angle is different from 90°, but are usable down to 60° to 70°. Below this angle the traditional spectrum is best and should be used. The conventional approach can automatically be corrected ...
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters ... to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional ...
DEFF Research Database (Denmark)
2015-01-01
A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including ... filter characteristics of at least one known transceiver filter arranged in the communication channel.
Efficiency and productivity in pig nutrition
Mosenthin, Rainer
2011-01-01
The efficient use of feed ingredients in diets for pigs is an important determinant of the productivity in modern pig production systems. Thus, there is a need to accurately estimate the feeding value of various feed ingredients. Several factors have to be considered for the adequate nutritional evaluation of feedstuffs. These include information (i) on the content of energy yielding nutrients (e.g. starch, sugars, lipids, protein), (ii) the digestibility and post-absorptive utilization of nutrients...
Energy efficiency; Energieffektivisering
Energy Technology Data Exchange (ETDEWEB)
2009-06-15
The Low Energy Panel proposes to halve energy consumption in buildings by 2040 and to reduce consumption in industry by 20 percent by 2020. The Panel considers it possible to gradually reduce consumption in buildings from the current level of 80 TWh by 10 TWh in 2020, 25 TWh in 2030 and 40 TWh in 2040. According to the committee, such a halving can be reached through significant energy-efficiency efforts: major rehabilitations, energy efficiency in the existing building stock and stricter requirements for new construction. For industry, the Panel recommends a political goal of at least a 20 percent reduction in specific energy consumption in industry and primary industry, beyond general technological development, by the end of 2020. This is equivalent to approximately 17 TWh at the current level of activity. The Panel believes that a 5 percent reduction should be achieved by the end of 2012 by carrying out simple measures. Since March 2009 the Low Energy Panel has considered possibilities to strengthen the authorities' work with energy efficiency in Norway. The broadly composed panel puts forward proposals for a comprehensive approach to increased energy efficiency, in particular in the building and industry sectors. The Panel has examined the potential for energy efficiency, barriers to energy efficiency, and the strengths and weaknesses of existing policy instruments, and presents the members' recommendations. In addition, the report contains a review of theoretical principles for the effects of policy instruments, together with extensive background material. One of the committee members has chosen to enter special remarks on the main recommendations in the report. (AG)
Estimates of variance components for postweaning feed intake and ...
African Journals Online (AJOL)
The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait animal model genetic evaluations and alternative genetic predictors of ...
The Energy Efficient Enterprise
Energy Technology Data Exchange (ETDEWEB)
Ahmad, Bashir
2010-09-15
Since rising energy costs have become a crucial factor for the economy of production processes, the optimization of energy efficiency is of essential importance for industrial enterprises. Enterprises establish energy saving programs, specific to their needs. The most important elements of these energy efficiency programs are energy savings, energy controlling, energy optimization, and energy management. This article highlights the industrial enterprise approach to establish sustainable energy management programs based on the above elements. Globally, if organizations follow this approach, they can significantly reduce the overall energy consumption and cost.
Financing Energy Efficient Homes
Energy Technology Data Exchange (ETDEWEB)
NONE
2007-07-01
Existing buildings require over 40% of the world's total final energy consumption, and account for 24% of world CO2 emissions (IEA, 2006). Much of this consumption could be avoided through improved efficiency of building energy systems (IEA, 2006) using current, commercially-viable technology. In most cases, these technologies make economic sense on a life-cycle cost analysis (IEA, 2006b). Moreover, to the extent that they reduce dependence on risk-prone fossil energy sources, energy efficient technologies also address concerns of energy security.
DEFF Research Database (Denmark)
Godsk, Mikkel
... engaging educators in the design process and developing teaching and learning; it is a shift in educational practice that potentially requires a stakeholder analysis and ultimately a business model for the deployment. What is most important is to balance the institutional, educator, and student perspectives and to consider all these in conjunction in order to obtain a sustainable, efficient learning design. The approach to deploying learning design in terms of the concept of efficient learning design, the catalyst for educational development, i.e. the learning design model, and how it is being used...
Dryden, IGC
2013-01-01
The Efficient Use of Energy, Second Edition is a compendium of papers discussing the efficiency with which energy is used in industry. The collection covers relevant topics in energy handling and describes the more important features of plant and equipment. The book is organized into six parts. Part I presents the various methods of heat production. The second part discusses the use of heat in industry and includes topics in furnace design, industrial heating, boiler plants, and water treatment. Part III deals with the production of mechanical and electrical energy. It tackles the principles of...
Resilience and efficiency in transportation networks.
Ganin, Alexander A; Kitsak, Maksim; Marchese, Dayton; Keisler, Jeffrey M; Seager, Thomas; Linkov, Igor
2017-12-01
Urban transportation systems are vulnerable to congestion, accidents, weather, special events, and other costly delays. Whereas typical policy responses prioritize reduction of delays under normal conditions to improve the efficiency of urban road systems, analytic support for investments that improve resilience (defined as system recovery from additional disruptions) is still scarce. In this effort, we represent paved roads as a transportation network by mapping intersections to nodes and road segments between the intersections to links. We built road networks for 40 of the urban areas defined by the U.S. Census Bureau. We developed and calibrated a model to evaluate traffic delays using link loads. The loads may be regarded as traffic-based centrality measures, estimating the number of individuals using corresponding road segments. Efficiency was estimated as the average annual delay per peak-period auto commuter, and modeled results were found to be close to observed data, with the notable exception of New York City. Resilience was estimated as the change in efficiency resulting from roadway disruptions and was found to vary between cities, with increased delays due to a 5% random loss of road linkages ranging from 9.5% in Los Angeles to 56.0% in San Francisco. The results demonstrate that many urban road systems that operate inefficiently under normal conditions are nevertheless resilient to disruption, whereas some more efficient cities are more fragile. The implication is that resilience, not just efficiency, should be considered explicitly in roadway project selection and justify investment opportunities related to disaster and other disruptions.
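The construction described above (intersections as nodes, road segments as links, resilience as the change in a network-level performance measure after randomly removing 5% of links) can be sketched on a toy grid. Note the hedges: the study measures efficiency as annual delay per peak-period commuter from calibrated link loads; the sketch below substitutes unweighted average shortest-path length on a small synthetic grid, and the graph size, seed, and metric are all illustrative assumptions, not the paper's method.

```python
from collections import deque
import random

def shortest_path_lengths(adj, source):
    """BFS distances from source in an undirected graph (adjacency dict)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def avg_path_length(adj):
    """Mean shortest-path length over all reachable ordered node pairs."""
    total, pairs = 0, 0
    for s in adj:
        for v, d in shortest_path_lengths(adj, s).items():
            if v != s:
                total += d
                pairs += 1
    return total / pairs

def grid_graph(n):
    """n-by-n grid: intersections as nodes, road segments as links."""
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    adj[(i, j)].add((i + di, j + dj))
                    adj[(i + di, j + dj)].add((i, j))
    return adj

random.seed(1)
adj = grid_graph(6)
base = avg_path_length(adj)

# Disruption scenario: remove 5% of links at random.
edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
for u, v in random.sample(edges, max(1, len(edges) // 20)):
    adj[u].discard(v)
    adj[v].discard(u)

disrupted = avg_path_length(adj)
print(f"average path length: {base:.2f} -> {disrupted:.2f}")
```

The gap between the two averages plays the role of the resilience measure: two networks with the same baseline value can diverge sharply once links are removed, which is the efficiency-versus-fragility contrast the study reports across cities.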
Extraction Efficiency of Belonolaimus longicaudatus from Sandy Soil.
McSorley, R; Frederick, J J
1991-10-01
Numbers of Belonolaimus longicaudatus extracted from sandy soils (91-92% sand) by sieving and centrifugation were only 40-55% of those extracted by sieving and incubation on a Baermann tray. Residues normally discarded at each step of the sieving plus Baermann tray extraction procedure were examined for nematodes to obtain estimates of extraction efficiencies. For third-stage and fourth-stage juveniles, males, and females, estimates of extraction efficiency ranged from 60 to 65% in one experiment and 73 to 82% in another. Estimated extraction efficiencies for second-stage juveniles were lower (33% in one experiment, 67% in another) due to losses during sieving. When sterilized soil was seeded with known numbers of B. longicaudatus, 60% of second-stage juveniles and 68-76% of other stages were recovered. Most stages of B. longicaudatus could be extracted from these soils by sieving plus Baermann incubation with an efficiency of 60-70%.
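The efficiency estimates above rest on a simple ratio: nematodes recovered by the standard procedure, divided by those recovered plus those later found in the normally discarded residues. A minimal sketch with invented counts (not the paper's data):

```python
def extraction_efficiency(recovered, lost_in_residues):
    """Fraction recovered, given counts found in the normally
    discarded residues at each extraction step."""
    total = recovered + sum(lost_in_residues)
    return recovered / total

# Hypothetical counts (illustrative only): 130 nematodes recovered,
# plus 40 and 25 found in the residues of two discarded steps.
recovered = 130
lost = [40, 25]
eff = extraction_efficiency(recovered, lost)
print(f"estimated extraction efficiency: {eff:.0%}")
```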
Tally efficiency analysis for Monte Carlo Wielandt method
Energy Technology Data Exchange (ETDEWEB)
Shim, Hyung Jin, E-mail: shimhj@kaeri.re.k [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Chang Hyo [Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)
2009-11-15
The Monte Carlo Wielandt method has the potential to eliminate most of a variance bias because it can reduce the dominance ratio by properly controlling the estimated eigenvalue (k_e). However, it requires increasingly more computation time to simulate additional fission neutrons as the estimated eigenvalue becomes closer to the effective multiplication factor (k_eff). Therefore, its advantage over the conventional Monte Carlo (MC) power method in calculation efficiency may not always be ensured. Its efficiency of tally estimation needs to be assessed in terms of a figure of merit based on the real variance as a function of k_e. In this paper, the real variance is estimated by using an inter-cycle correlation of the fission source distribution for the MC Wielandt calculations. Then, the tally efficiency of the MC Wielandt method is analyzed for a 2 x 2 fission matrix system and weakly coupled fissile array problems with different dominance ratios (DRs). It is shown that the tally efficiency of the MC Wielandt method depends strongly on k_e, that there is a k_e value resulting in the best efficiency for a problem with a large DR, and that the efficiency curve as a function of L, the average number of fission neutrons per history, follows a long tail after the best efficiency.
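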
A logistic regression estimating function for spatial Gibbs point processes
DEFF Research Database (Denmark)
Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege
We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related...
Using transformation algorithms to estimate (co)variance ...
African Journals Online (AJOL)
... to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of (co)variance components. Results from a simulation study indicate that (co)variance components can be estimated efficiently at a low cost on ...
Tail index and quantile estimation with very high frequency data
J. Daníelsson (Jón); C.G. de Vries (Casper)
1997-01-01
A precise estimation of the tail shape of forex returns is of critical importance for proper risk assessment. We improve upon the efficiency of conventional estimators that rely on a first-order expansion of the tail shape, by using the second-order expansion. Here we advocate a moments...
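The conventional first-order tail-shape estimators the abstract refers to include the classical Hill estimator; the paper's second-order refinement is not reproduced here. A minimal sketch of the first-order Hill estimator on a synthetic heavy-tailed sample (the Pareto sample and the tuning constant k are illustrative assumptions):

```python
import math
import random

def hill_estimator(sample, k):
    """Classical first-order Hill estimator of the tail index alpha,
    computed from the k largest order statistics."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(x) for x in xs[: k + 1]]
    gamma = sum(logs[i] - logs[k] for i in range(k)) / k  # estimates 1/alpha
    return 1.0 / gamma

# Pareto(alpha = 3) sample via inverse transform: P(X > x) = x^(-3),
# a heavy tail of the kind seen in forex returns.
random.seed(0)
alpha = 3.0
sample = [random.random() ** (-1.0 / alpha) for _ in range(20000)]
est = hill_estimator(sample, 500)
print(f"Hill estimate of alpha: {est:.2f}")
```

The choice of k trades bias against variance, which is exactly where second-order information helps: the first-order estimator is biased when the tail only approximately follows a power law.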
The Study on Energy Efficiency in Africa
Wu, Jinduo
This paper explores the dynamic performance of energy efficiency in Africa using country-level panel data. Taking energy yield, power consumption, and electricity transmission and distribution losses into account, the paper employs a stochastic frontier model, highlighting a dummy variable for energy output in terms of net imports of energy and power, which reduces the deviation of the estimated variables. The results show that returns to scale did not appear in the energy and power industry in Africa, and that electricity transmission and distribution losses contribute most to GDP per unit of energy. At the country level, the Republic of Congo and Botswana show a clear energy-efficiency advantage, while energy efficiency in Mozambique and the Democratic Republic of Congo was not very satisfactory during the study period.
Energy-Efficiency in Optical Networks
DEFF Research Database (Denmark)
Saldaña Cercos, Silvia
This thesis expands the state-of-the-art on the complex problem of implementing energy efficient optical networks. The main contribution of this Ph.D. thesis is providing a holistic approach in a multi-layered manner where different tools are used to tackle the urgent need of both estimating ... with current traffic demands, and this dissertation tackles the trade-off between energy efficiency and quality of service in terms of latency. Another important contribution of this thesis is the novel mixed integer linear programming (MILP) formulation for internet protocol (IP) over wavelength division ... with parallel optics and WDM systems is reported. These results show the trade-off between increased capacity and both power consumption and system performance. In conclusion, an energy-efficient set of tools has been provided covering different aspects of the telecommunication network, resulting in a cohesive...
PRODUCT EFFICIENCY IN THE SPANISH AUTOMOBILE MARKET
Directory of Open Access Journals (Sweden)
González, Eduardo
2013-01-01
This paper evaluates product efficiency in the Spanish automobile market. We use non-parametric frontier techniques to estimate product efficiency scores for each model. These scores reflect the minimum price for which each car could be sold, given the bundle of tangible features it offers in comparison to the best-buy models. Unlike previous research, we use discounted prices which have been adjusted by car dealerships to meet sales targets. Therefore, we interpret the efficiency scores as indicators of the value of the intangible features of the brand. The results show that Audi, Volvo, Volkswagen and Mercedes offer the greatest intangible value, since they are heavily overpriced in terms of price/product ratios. Conversely, Seat, Kia, Renault and Dacia are the brands that can be taken as referents in terms of price/product ratios.
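A non-parametric frontier score of the kind described can be sketched in a simple free-disposal-hull (FDH) style: a model's efficiency is the cheapest price among models matching or exceeding all of its tangible features, divided by its own price, so a score below 1 flags overpricing attributable to intangible brand value. The exact frontier technique of the paper is not specified in the abstract, and the models, prices and features below are invented for illustration, not the paper's data:

```python
def product_efficiency(prices, features):
    """FDH-style product efficiency: ratio of the cheapest price among
    models that match or exceed all of a model's tangible features to
    its own price (1.0 = on the price/product frontier)."""
    scores = {}
    for m, price in prices.items():
        dominating = [
            prices[o]
            for o in prices
            if all(features[o][f] >= features[m][f] for f in features[m])
        ]
        scores[m] = min(dominating) / price  # m dominates itself, so non-empty
    return scores

# Hypothetical models with a list price (EUR) and two tangible features.
prices = {"A": 20000, "B": 24000, "C": 30000}
features = {
    "A": {"hp": 110, "boot_l": 400},
    "B": {"hp": 110, "boot_l": 450},
    "C": {"hp": 105, "boot_l": 430},
}
scores = product_efficiency(prices, features)
print(scores)
```

Here model C is dominated by the cheaper B, so its score of 0.8 says it could sell for 80% of its price on tangible features alone; the remaining 20% would be read as intangible brand value.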
Directory of Open Access Journals (Sweden)
D. Sümeyra Demirkıran
2014-03-01
The concept of age estimation plays an important role in both civil law and the regulation of criminal behavior. In forensic medicine, age estimation is practiced for individual requests as well as at the request of the court. This study aims to compile the methods of age estimation and to make recommendations for solving the problems encountered. In the radiological method, the epiphyseal lines of the bones and views of the teeth are used. To estimate age by comparing bone radiographs, the Greulich-Pyle Atlas (GPA), the Tanner-Whitehouse Atlas (TWA) and the "Adli Tıpta Yaş Tayini" (ATYT) books are used. According to the forensic age estimations described in the ATYT book, bone age is found to be on average two years older than chronological age, especially in puberty. For age estimation with teeth, the Demirjian method is used. Over time, different methods have been developed by modifying the Demirjian method; however, no accurate method has been found. Histopathological studies have been conducted on bone marrow cellularity and dermis cells, but no correlation was found between histopathological findings and chronological age. Current age estimation methods raise important ethical and legal issues, especially in the teenage period. It is therefore necessary to prepare bone-age atlases compatible with our society by collecting the findings of studies conducted in Turkey. Another recommendation is to pay close attention in court cases on age raising involving teenage women, and to give special emphasis to birth and population records.
Smart Efficient Lightweight Facade
Martjanova, I.; Miraliyari, M.; Kakolyri, T.
2014-01-01
This "designers' manual" was made during the TIDO-course AR0533 Innovation & Sustainability. The purpose of the manual is to describe and demonstrate innovative materials for an efficient, lightweight and smartly working facade. We explain their current state and their technological progress so the...
Web anonymization efficiency study
Sochor, Tomas
2017-11-01
The analysis of TOR, JonDo and CyberGhost efficiency (measured as the latency increase and transmission speed decrease) is presented in the paper. Results showed that all tools have a relatively favorable latency increase (no more than a 60% RTT increase). The transmission speed decrease was much more significant (more than 60%), and even more so for JonDo (above 90%).
Investment Project Efficiency Evaluation
Miljenko Crnjac; Dominika Crnjac
2006-01-01
The financial efficiency of an investment project is evaluated in this paper. It is shown that the net present value function is constant and that the quota value is equal to c0 as i converges to infinity. Optimal rates i are analyzed in certain cases, and everything is illustrated through examples.
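The limit claim, that the net present value approaches c0 as the rate i grows without bound, can be checked numerically: NPV(i) = Σ c_t/(1+i)^t, and every term with t ≥ 1 vanishes as i → ∞, leaving only the undiscounted initial flow c0. A minimal sketch with an invented cash-flow stream (not taken from the paper):

```python
def npv(cashflows, i):
    """Net present value of cashflows c_0, c_1, ..., c_n at rate i."""
    return sum(c / (1 + i) ** t for t, c in enumerate(cashflows))

# Illustrative project: initial outlay c0 = -1000, five annual returns of 300.
flows = [-1000, 300, 300, 300, 300, 300]
for rate in (0.05, 1.0, 100.0, 10000.0):
    print(f"i = {rate:>8}: NPV = {npv(flows, rate):10.2f}")
# As i grows, every discounted term after t = 0 vanishes and NPV -> c0.
```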
Wang, B.; Feng, L.; Shen, Y.
Inspired by the best querying performance of ViST among the rest of the approaches in the literature, and meanwhile to overcome its shortcomings, in this paper, we present another efficient and novel geometric sequence mechanism, which transforms XML documents and XPath queries into the
Indian Academy of Sciences (India)
Outline of the talk: Introduction; Computing connectivities between all pairs of vertices; All pairs shortest paths/distances; Optimal bipartite matching. ... Efficient Algorithm: the time taken for this computation on any input should be bounded by a small polynomial in the input size. ...
DEFF Research Database (Denmark)
Gambalemoke, Mbalitini; Mukinzi, Itoka; Amundala, Drazo
2008-01-01
We investigated the efficiency of four trap types (pitfall, Sherman LFA, Victor snap and Museum Special snap traps) to capture shrews. This experiment was conducted in five inter-riverine forest blocks in the region of Kisangani. The total trapping effort was 6,300, 9,240, 5,280 and 5,460 trap...
Violino, Bob
2008-01-01
This article discusses the enterprise resource planning (ERP) system. Deploying an ERP system is one of the most extensive--and expensive--IT projects a college or university can undertake. The potential benefits of ERP are significant: a more smoothly running operation with efficiencies in virtually every area of administration, from automated…
Efficient Immutable Collections
Steindorfer, M.J.
2017-01-01
This thesis proposes novel and efficient data structures, suitable for immutable collection libraries, that carefully balance memory footprint and runtime performance of operations, and are aware of constraints and platform co-design challenges on the Java Virtual Machine (JVM). Collection data
van der Heijden, G.; Ruijter, E.; Orru, R.V.A.
2013-01-01
Multicomponent reactions (MCRs) are versatile syntheses for obtaining structurally diverse sets of complex scaffolds with high efficiency. As such, they can be attractive synthetic tools for the realization of diversity- and/or biology-oriented synthesis design strategies for focused libraries. In
Fuzzy efficiency without convexity
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Balezentis, Tomas
2014-01-01
... approach builds directly upon the definition of Farrell's indexes of technical efficiency used in crisp FDH. Therefore we do not require the use of fuzzy programming techniques but only utilize ranking probabilities of intervals as well as a related definition of dominance between pairs of intervals. We...
ENERGY EFFICIENT LAUNDRY PROCESS
Energy Technology Data Exchange (ETDEWEB)
Tim Richter
2005-04-01
With the rising cost of energy and increased concerns for pollution and greenhouse gas emissions from power generation, increased focus is being put on energy efficiency. This study looks at several approaches to reducing energy consumption in clothes care appliances by considering the appliances and laundry chemistry as a system, rather than individually.
DEFF Research Database (Denmark)
Østergaard Madsen, Christian; Kræmmergaard, Pernille
2015-01-01
The Danish e-government strategy aims to increase the efficiency of public sector administration by making e-government channels mandatory for citizens by 2015. Although Danish citizens have adopted e-government channels to interact with public authorities, many also keep using traditional channels...
Microeconomics : Equilibrium and Efficiency
Ten Raa, T.
2013-01-01
Microeconomics: Equilibrium and Efficiency teaches how to apply microeconomic theory in an innovative, intuitive and concise way. Using real-world, empirical examples, this book not only covers the building blocks of the subject, but helps gain a broad understanding of microeconomic theory and
Calorific efficiency of coal hydrogenation
Energy Technology Data Exchange (ETDEWEB)
Schappert, H.
1942-10-20
In studies on the calorific efficiency of coal hydrogenation, the efficiency of H2 production was calculated to be 26%, the efficiency of hydrogenation 49%, and the efficiency of hydrogenation including H2 production 27.2%. The efficiency of hydrogenation plus hydrogen production was almost equal to the efficiency of hydrogen production alone, even though this was not expected given the total energy counted in the efficiency of hydrogenation proper. It was entirely possible, though it did not affect the computations, that the efficiency of one or the other component of the hydrogenation process differed somewhat from 49%; the average efficiency for all cases was 49%. However, when hydrogen was not bought but produced (the efficiency of hydrogen production being 26%, not 100%), the total energy changed and the combined efficiency of hydrogen production was not 26% but 13%. This lower value explains the drop of the hydrogenation efficiency to 27.2%.
Estimation method of multivariate exponential probabilities based on a simple coordinates transform
Olieman, N.J.; Putten, van B.
2010-01-01
A novel unbiased estimator for estimating the probability mass of a multivariate exponential distribution over a measurable set is introduced and is called the exponential simplex (ES) estimator. For any measurable set and given sample size, the statistical efficiency of the ES estimator is higher
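The ES estimator's coordinate transform is not specified in this abstract, so it is not reproduced here; what can be sketched is the crude Monte Carlo baseline whose statistical efficiency it is reported to exceed: sample from the distribution, count hits in the measurable set, and divide. The sketch below assumes independent exponential coordinates (a special case of a multivariate exponential, chosen so the exact answer is known for checking):

```python
import math
import random

def crude_mc_probability(rates, in_set, n, seed=42):
    """Crude Monte Carlo estimate of P(X in A) for X with independent
    exponential coordinates. The ES estimator is reported to achieve
    higher statistical efficiency than this hit-counting baseline."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = [rng.expovariate(r) for r in rates]
        hits += in_set(x)
    return hits / n

# Measurable set A = {x : x1 + x2 > 3} with unit rates; the sum is
# Erlang(2, 1), so the exact value is P(S > 3) = (1 + 3) * exp(-3).
est = crude_mc_probability([1.0, 1.0], lambda x: x[0] + x[1] > 3, 200_000)
exact = (1 + 3) * math.exp(-3)
print(f"estimate {est:.4f} vs exact {exact:.4f}")
```

For a fixed sample size, the variance of this hit-counting estimator is p(1-p)/n; "higher statistical efficiency" for the ES estimator means a smaller variance at the same n.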