Computationally efficient permutation-based confidence interval estimation for tail-area FDR
Directory of Open Access Journals (Sweden)
Joshua eMillstein
2013-09-01
Full Text Available Challenges of satisfying parametric assumptions in genomic settings with thousands or millions of tests have led investigators to combine powerful False Discovery Rate (FDR) approaches with computationally expensive but exact permutation testing. We describe a computationally efficient permutation-based approach that includes a tractable estimator of the proportion of true null hypotheses, the variance of the log of the tail-area FDR, and a confidence interval (CI) estimator, which accounts for the number of permutations conducted and dependencies between tests. The CI estimator applies a binomial distribution and an overdispersion parameter to counts of positive tests. The approach is general with regard to the distribution of the test statistic, it performs favorably in comparison to other approaches, and reliable FDR estimates are demonstrated with as few as 10 permutations. An application of this approach to relate sleep patterns to gene expression patterns in mouse hypothalamus yielded a set of 11 transcripts associated with 24-hour REM sleep (FDR = 0.15; CI: 0.08, 0.26). Two of the corresponding genes, Sfrp1 and Sfrp4, are involved in Wnt signaling, and several others, Irf7, Ifit1, Iigp2, and Ifih1, have links to interferon signaling. These genes would have been overlooked had a typical a priori FDR threshold such as 0.05 or 0.1 been applied. The CI provides the flexibility to choose a significance threshold based on tolerance for false discoveries and the precision of the FDR estimate. That is, it frees the investigator to use a more data-driven approach to defining significance, such as the minimum estimated FDR, an option that is especially useful for the weak effects often observed in studies of complex diseases.
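The counting idea behind the CI estimator can be sketched in a few lines. This is an illustrative simplification, not Millstein's exact procedure: the function name, the plug-in pi0, and the variance formula (a binomial-type variance of the log proportion of null positives, inflated by an assumed overdispersion factor phi) are assumptions made for the sketch.

```python
import math

def fdr_ci(n_obs_pos, perm_pos_counts, n_tests, pi0=1.0, phi=1.0, z=1.96):
    """Sketch of a permutation-based tail-area FDR estimate with a CI.

    n_obs_pos       -- observed statistics exceeding the significance threshold
    perm_pos_counts -- per-permutation counts of permuted statistics
                       exceeding the same threshold
    pi0             -- estimated proportion of true null hypotheses
    phi             -- assumed overdispersion factor (1.0 = pure binomial)
    """
    b = len(perm_pos_counts)
    mean_null_pos = sum(perm_pos_counts) / b          # E[positives | null]
    fdr = min(1.0, pi0 * mean_null_pos / n_obs_pos)   # tail-area FDR estimate
    # Binomial-type variance of the log proportion of null positives,
    # inflated by the assumed overdispersion factor phi.
    total_null = sum(perm_pos_counts)
    var_log = phi * (1.0 / total_null - 1.0 / (b * n_tests))
    half = z * math.sqrt(var_log)
    return fdr, fdr * math.exp(-half), min(1.0, fdr * math.exp(half))
```

Because the interval is built on the log scale and exponentiated, it stays positive and is asymmetric around the point estimate, matching the form of the reported interval (0.15 with bounds 0.08 and 0.26).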
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...
Interval estimates and their precision
Marek, Luboš; Vrabec, Michal
2015-06-01
A task very often met in practice is the computation of confidence interval bounds for a relative frequency under sampling without replacement. A typical situation includes pre-election estimates and similar tasks. In other words, we build the confidence interval for the parameter value M in a parent population of size N on the basis of a random sample of size n. There are many ways to build this interval. We can use a normal or binomial approximation. More accurate values can be looked up in tables. We consider one more method, based on MS Excel calculations. In our paper we compare these different methods for specific values of M and discuss when each of the considered methods is suitable. The aim of the article is not the publication of new theoretical methods. Rather, it aims to show that there is a very simple way to compute the confidence interval bounds without approximations, without tables, and without other software costs.
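Under sampling without replacement, the number of successes in the sample is hypergeometric, so the bounds for M can indeed be computed exactly with a short script rather than approximations or tables. A minimal sketch (the function names are illustrative, not from the paper; note that Python's `math.comb` returns 0 when k exceeds n, which keeps the CDF sum valid at the boundaries):

```python
from math import comb

def hypergeom_cdf(x, N, M, n):
    """P(X <= x) for X ~ Hypergeometric(N, M, n): x successes in a sample
    of n drawn without replacement from N items, of which M are successes."""
    return sum(comb(M, k) * comb(N - M, n - k) for k in range(x + 1)) / comb(N, n)

def exact_ci_M(x, n, N, alpha=0.05):
    """Exact Clopper-Pearson-style confidence bounds for M, given x
    successes observed in a sample of n from a population of N."""
    candidates = range(x, N - n + x + 1)   # feasible values of M
    lower = (min(M for M in candidates
                 if 1 - hypergeom_cdf(x - 1, N, M, n) > alpha / 2)
             if x > 0 else 0)
    upper = max(M for M in candidates
                if hypergeom_cdf(x, N, M, n) > alpha / 2)
    return lower, upper
```

The inversion enumerates candidate values of M and keeps those compatible with the observed count at level alpha, which is exactly the "no approximations, no tables" computation the abstract advocates.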
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is given for deriving the weighted least squares estimators of the parameters of a multiple regression model. Confidence intervals for expected values, and prediction intervals for the means of future samples, are given.
Interval Estimation of Seismic Hazard Parameters
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto that function.
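Under the Poisson occurrence model, the exceedance probability has a closed form, which makes the roles of the two uncertain inputs easy to see. The sketch below propagates interval endpoints through that formula; this is a deliberately naive simplification of the paper's integrated approach (which combines the uncertainties probabilistically rather than by endpoint plug-in), and the function names are illustrative.

```python
import math

def exceedance_prob(lam, t, p_m):
    """P(at least one event of magnitude >= m within time t) under a
    Poisson model with mean activity rate lam, where p_m = P(M >= m),
    i.e. one minus the magnitude CDF at m."""
    return 1.0 - math.exp(-lam * t * p_m)

def exceedance_interval(lam_ci, t, p_m_ci):
    """Naive interval for the exceedance probability obtained by plugging
    in the interval endpoints. This is valid as a bound only because the
    function is monotone increasing in both lam and p_m; the paper's
    integrated method is more refined than this endpoint propagation."""
    lo = exceedance_prob(lam_ci[0], t, p_m_ci[0])
    hi = exceedance_prob(lam_ci[1], t, p_m_ci[1])
    return lo, hi
```

The product lam * t appearing in the exponent is exactly the quantity whose size (above or below 5.0) governs, per the abstract, which source of uncertainty dominates.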
Achieving a Confidence Interval for Parameters Estimated by Simulation
Adam, Nabil R.
1983-01-01
This paper presents a procedure for determining the number of simulation observations required to achieve a preassigned confidence interval for means estimated by simulation. This procedure, which is simple to implement and efficient to use, is compared with two other methods for determining the required sample size in a simulation run. The empirical results show that this procedure gives good results in the precision of estimated means and in sample size requirement.
DEVELOPMENT MANAGEMENT TRANSFER PRICING BY APPLICATION OF THE INTERVAL ESTIMATES
Directory of Open Access Journals (Sweden)
Elena B. Shuvalova
2013-01-01
The article discusses the application of the method of interval estimation to the conformity of a transaction price with the market price. A comparative analysis of interval and point estimates is given, and the positive and negative effects of using interval estimation are identified.
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
Overconfidence in Interval Estimates: What Does Expertise Buy You?
McKenzie, Craig R. M.; Liersch, Michael J.; Yaniv, Ilan
2008-01-01
People's 90% subjective confidence intervals typically contain the true value about 50% of the time, indicating extreme overconfidence. Previous results have been mixed regarding whether experts are as overconfident as novices. Experiment 1 examined interval estimates from information technology (IT) professionals and UC San Diego (UCSD) students…
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, e.g., smokers versus non-smokers. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
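For the simplest of the statistics the report discusses, the computation is direct: the sample mean of interval-valued data is itself an interval, obtained by averaging the endpoints. A minimal sketch (the function name is illustrative; note that other statistics are genuinely harder, and tight bounds on, e.g., the variance of interval data can be NP-hard to compute in general, which is the computability issue the report summarizes):

```python
def interval_mean(intervals):
    """Bounds on the sample mean of interval-valued data.

    Each observation is a (lo, hi) pair; because the mean is monotone in
    every observation, the tightest bounds come from averaging all lower
    endpoints and all upper endpoints separately."""
    n = len(intervals)
    lowers = [lo for lo, hi in intervals]
    uppers = [hi for lo, hi in intervals]
    return sum(lowers) / n, sum(uppers) / n
```

The width of the resulting interval directly exposes the trade-off the report explores: wider measurement intervals or fewer samples both widen the bounds on the statistic.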
Parametric change point estimation, testing and confidence interval ...
African Journals Online (AJOL)
In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...
An Improvement to Interval Estimation for Small Samples
Directory of Open Access Journals (Sweden)
SUN Hui-Ling
2017-02-01
Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation from small samples. The Bayes Bootstrap method is often used in practice, although it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given. The method extends the number of samples by numerical simulation without changing the circumstances of the original small sample, and it can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is applied to specific small-sample problems, and the effectiveness and practicability of the improved Bootstrap method are demonstrated.
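The resampling idea underlying this family of methods can be illustrated with the plain percentile bootstrap, a simpler relative of the Bayes Bootstrap discussed in the article (this sketch is not the article's improved method; the function name and defaults are assumptions):

```python
import random

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap interval for stat(sample) from a small sample:
    resample with replacement, recompute the statistic, and take the
    empirical alpha/2 and 1-alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    stats = sorted(stat([rng.choice(sample) for _ in sample])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Because no distributional form is assumed, the interval adapts to whatever shape the small sample exhibits, which is precisely the appeal of bootstrap-type methods in this setting.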
Estimating the NIH efficient frontier.
Directory of Open Access Journals (Sweden)
Dimitrios Bisias
BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science-related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions, one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reductions in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that a 28% to 89% greater decrease in average years of life lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, and transparent.
Efficient Estimation in Heteroscedastic Varying Coefficient Models
Directory of Open Access Journals (Sweden)
Chuanhua Wei
2015-07-01
This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Estimation models of variance components for farrowing interval in swine
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante Neto
2009-02-01
The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental, and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested, which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain in response to selection. The common litter environmental and maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.
Econometric Analysis on Efficiency of Estimator
M Khoshnevisan; Kaymram, F.; Singh, Housila P.; Singh, Rajesh; Smarandache, Florentin
2003-01-01
This paper investigates the efficiency of an alternative to the ratio estimator under a super-population model with uncorrelated errors and a gamma-distributed auxiliary variable. Comparisons with the usual ratio and unbiased estimators are also made.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years, eco-efficiency, which considers the effect of the production process on the environment in determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable as well as environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excess in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable, as the effects of the uncertain data will not be considered. In this case, the interval data approach has been found suitable for accounting for data uncertainty, as it is simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were later compared to the results obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases.
Flexible and efficient estimating equations for variogram estimation
Sun, Ying
2018-01-11
Variogram estimation plays a vital role in spatial modeling. Methods for variogram estimation can be largely classified into least squares methods and likelihood-based methods. A general framework for estimating the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as special cases. The proposed method is highly efficient, as a low-dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
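The classical method-of-moments (Matheron) estimator that such estimating-equation frameworks generalize can be sketched directly: average the squared differences of observations over pairs binned by separation distance. This sketch is the textbook estimator, not the proposed estimating-equation method, and the function name and binning scheme are assumptions.

```python
import math
from collections import defaultdict

def empirical_variogram(coords, values, bin_width):
    """Matheron's method-of-moments variogram estimator:
    gamma(h) = average of (z_i - z_j)^2 / 2 over pairs whose separation
    distance falls in the bin around h."""
    sums, counts = defaultdict(float), defaultdict(int)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])     # pair separation
            b = int(d // bin_width)                 # distance bin index
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    # Map each bin center to the semivariance estimate for that lag.
    return {(b + 0.5) * bin_width: sums[b] / (2 * counts[b]) for b in sorted(sums)}
```

Each bin here corresponds to one lag in the lag-effect analysis the abstract mentions: bins with few pairs yield noisy semivariance estimates, which is what motivates weighting schemes such as the proposed estimating equations.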
The Sharpe ratio of estimated efficient portfolios
Kourtis, Apostolos
2016-01-01
Investors often adopt mean-variance efficient portfolios for achieving superior risk-adjusted returns. However, such portfolios are sensitive to estimation errors, which affect portfolio performance. To understand the impact of estimation errors, I develop simple and intuitive formulas of the squared Sharpe ratio that investors should expect from estimated efficient portfolios. The new formulas show that the expected squared Sharpe ratio is a function of the length of the available data, the ...
An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.
Capraro, Mary Margaret
This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…
Computing Efficiency for Decision Making Units with Negative and Interval Data
Directory of Open Access Journals (Sweden)
M. Piri
2016-03-01
Data Envelopment Analysis (DEA) is a nonparametric method for identifying the sources and estimating the amount of the inefficiencies contained in the inputs and outputs produced by Decision Making Units (DMUs). DEA requires that the data for all inputs and outputs be known exactly, but in many situations exact data are inadequate for modeling real-life problems, and the data may instead have other structures, such as bounded data, interval data, or fuzzy data. Moreover, the main assumption in DEA is that input and output values are positive, but we confront many cases that violate this condition and produce negative data. The purpose of this paper is to compute efficiency for DMUs in the presence of intervals which can take both negative and positive values.
M-Interval Orthogonal Polynomial Estimators with Applications
Jaroszewicz, Boguslaw Emanuel
In this dissertation, adaptive estimators of various statistical nonlinearities are constructed and evaluated. The estimators are based on classical orthogonal polynomials, which allows an exact computation of convergence rates. The first part of the dissertation is devoted to the estimation of one- and multi-dimensional probability density functions. The most attractive computationally is the Legendre estimator, which corresponds to the mean square segmented polynomial approximation of a pdf. Exact bounds for the two components of the estimation error, deterministic bias and random error, are derived for all the polynomial estimators. The bounds on the bias are functions of the "smoothness" of the estimated pdf as measured by the number of continuous derivatives the pdf possesses. The optimum number of orthonormal polynomials, estimated adaptively, minimizes the total error. In the second part, the theory of polynomial estimators is applied to the estimation of derivatives of pdfs and regression functions. The optimum detectors for small signals in nongaussian noise, as well as any kind of statistical filtering involving the likelihood function, are based on the nonlinearity given by the ratio of the derivative of the pdf to the pdf itself. Several different polynomial estimators of this nonlinearity are developed and compared. The theory of estimation is then extended to the multivariable case. The partial derivative nonlinearity is used for detection of signals in dependent noise. When the dimensionality of the nonlinearity is very large, the transformed Hermite estimators are particularly useful. The estimators can be viewed as two-stage filters: the first stage is a pre-whitening filter optimum in gaussian noise, and the second stage is a nonlinear filter which improves performance in nongaussian noise. Filtering of this type can be applied to predictive coding, nonlinear identification, and other estimation problems involving a conditional expected value. In the third
Confidence intervals for test information and relative efficiency
Oosterloo, Sebe J.
1984-01-01
In latent theory the measurement properties of a mental test can be expressed in the test information function. The relative merits of two tests for the same latent trait can be described by the relative efficiency function, i.e. the ratio of the test information functions. It is argued that these
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Directory of Open Access Journals (Sweden)
Mary Kathryn Abel
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Notes on interval estimation of the generalized odds ratio under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2013-05-01
It is not rare to encounter patient responses on an ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH), and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data taken from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
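The log-transformation idea behind the WLSL-style estimators can be sketched generically: pool the stratum-specific log-GOR estimates with inverse-variance weights, form a symmetric interval on the log scale, and exponentiate. This is a sketch under the homogeneity assumption, not the paper's exact estimator; the stratum variances are taken as given and the function name is illustrative.

```python
import math

def wlsl_ci(gors, variances_log, z=1.96):
    """Inverse-variance-weighted CI for a common (homogeneous) GOR.

    gors          -- stratum-specific GOR estimates (positive ratios)
    variances_log -- variances of the corresponding log-GOR estimates
    """
    weights = [1.0 / v for v in variances_log]          # WLS weights
    w_sum = sum(weights)
    pooled_log = sum(w * math.log(g) for w, g in zip(weights, gors)) / w_sum
    se = math.sqrt(1.0 / w_sum)                          # SE of pooled log-GOR
    return math.exp(pooled_log - z * se), math.exp(pooled_log + z * se)
```

Working on the log scale keeps the interval strictly positive and symmetric in ratio terms (the bounds multiply to the square of the pooled point estimate), which is why log-transformed intervals for ratio-type estimators such as the GOR tend to be better behaved than untransformed ones.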
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...... of an efficient and an inefficient estimator are compared graphically. The second part of the thesis concerns diffusions with finite-activity jumps, observed over an increasing interval with terminal sampling time going to infinity. Asymptotic distribution results are derived for consistent estimators of a general...
Directory of Open Access Journals (Sweden)
Özgür Bulut
2016-08-01
Postmortem interval estimation is one of the most important research topics in modern forensic science worldwide. Despite the use of morphological, biochemical, flow-cytometric, microbiological, entomological, anthropological, and spectroscopic methods, as well as the main postmortem changes, it does not seem possible to obtain certain results from any single test or method, because many physical, chemical, and biological processes affect the parameters. Therefore, postmortem interval estimation requires both the development of existing methods and the implementation of novel ones. In this regard, taphonomic methods need to be improved, and regional factors and climate impact need to be determined by experimental studies. In particular, we are of the opinion that more accurate estimation of the postmortem interval will be achieved by determining the regional factors involved in the postmortem period. This paper aims to evaluate the relationship between postmortem interval and accumulated degree days with respect to decomposition stages. Key Words: Forensic Taphonomy, Postmortem Interval, Forensic Anthropology
Efficient estimates of cochlear hearing loss parameters in individual listeners
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2013-01-01
) are presented and used to estimate the knee-point level and the compression ratio of the I/O function. A time-efficient paradigm based on the single-interval-up-down method (SIUD; Lecluyse and Meddis (2009)) was used. In contrast with previous studies, the present study used only on-frequency TMCs to derive...... to Jepsen and Dau (2011) IHL + OHL = HLT [dB], where HLT stands for total hearing loss. Hence having estimates of the total hearing loss and OHC loss, one can estimate the IHL. In the present study, results from forward masking experiments based on temporal masking curves (TMC; Nelson et al., 2001...
Two Efficient Twin ELM Methods With Prediction Interval.
Ning, Kefeng; Liu, Min; Dong, Mingyu; Wu, Cheng; Wu, ZhanSong
2015-09-01
In the operational optimization and scheduling problems of actual industrial processes, such as iron and steel and microelectronics, the operational indices and process parameters usually need to be predicted. However, for some input and output variables of these prediction models, there may be many uncertainties arising from the variables themselves, measurement error, rough representation, and so on. In such cases, constructing a prediction interval (PI) for the output of the corresponding prediction model is very necessary. In this paper, two twin extreme learning machine (TELM) models for constructing PIs are proposed. First, we propose a regularized asymmetric least squares extreme learning machine (RALS-ELM) method, in which different weights in the squared error loss function are set according to whether the error of the model output is positive or negative, so that these errors can be differentiated in the parameter learning process, and Tikhonov regularization is introduced to reduce overfitting. Then, we propose an asymmetric Bayesian extreme learning machine (AB-ELM) method based on the Bayesian framework with the asymmetric Gaussian distribution, in which the weights of the likelihood function are determined by the same method as in RALS-ELM, and the type II maximum likelihood algorithm is derived to learn the parameters of AB-ELM. Based on RALS-ELM and AB-ELM, we use a pair of weights in a reciprocal relationship to obtain two nonparallel regressors, a lower-bound regressor and an upper-bound regressor, which can be used for calculating the PIs. Finally, some discussion is given on how to adjust the weights adaptively to meet the desired PI, how to use the proposed TELMs for nonlinear quantile regression, and so on. Results of numerical comparisons on data from one synthetic regression problem, three University of California Irvine benchmark regression problems, and two actual industrial regression
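The asymmetric squared-error weighting that RALS-ELM applies is, in its simplest form, the loss that defines an expectile: positive errors are weighted by tau and negative errors by 1 - tau, and a reciprocal pair (tau, 1 - tau) yields a lower and an upper bound. A minimal sketch for a constant predictor only, not the authors' ELM formulation; the data are illustrative:

```python
def expectile(values, tau, iters=100):
    """Fixed-point iteration for the asymmetric-least-squares minimizer.

    Errors above the current estimate get weight tau, errors below get
    1 - tau; the minimizer of the weighted squared loss is the weighted
    mean, so iterating the weighted mean converges to the expectile.
    """
    m = sum(values) / len(values)
    for _ in range(iters):
        w = [tau if v > m else 1.0 - tau for v in values]
        m = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return m

data = [1.0, 2.0, 3.0, 4.0, 10.0]
lo, hi = expectile(data, 0.1), expectile(data, 0.9)  # reciprocal pair of weights
```

At tau = 0.5 the loss is symmetric and the expectile reduces to the ordinary mean.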
Asymptotically Distribution-Free (ADF) Interval Estimation of Coefficient Alpha
Maydeu-Olivares, Alberto; Coffman, Donna L.; Hartmann, Wolfgang M.
2007-01-01
The point estimate of sample coefficient alpha may provide a misleading impression of the reliability of the test score. Because sample coefficient alpha is consistently biased downward, it is more likely to yield a misleading impression of poor reliability. The magnitude of the bias is greatest precisely when the variability of sample alpha is…
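For reference, the sample coefficient alpha whose downward bias the entry discusses is k/(k-1) times (1 - sum of item variances / total-score variance). A sketch of the point estimate with illustrative item scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Sample coefficient alpha.

    items: list of k lists, each holding one item's scores across the
    same examinees. Uses population variances throughout, so the two
    variance terms are on the same scale.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per examinee
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two hypothetical items scored by four examinees:
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])
```

Interval estimation then treats this point estimate as the center and attaches a sampling variance to it, which is where the ADF approach of the paper comes in.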
Mapping of Estimations and Prediction Intervals Using Extreme Learning Machines
Leuenberger, Michael; Kanevski, Mikhail
2015-04-01
Due to the large amount and complexity of data available nowadays in environmental sciences, we face the need to apply more robust methodologies allowing analysis and understanding of the phenomena under study. One particular but very important aspect of this understanding is the reliability of the generated prediction models. From data collection to the prediction map, several sources of error can occur and affect the final result. These sources are mainly identified as uncertainty in the data (data noise) and uncertainty in the model. Their combination leads to the so-called prediction interval. Quantifying these two categories of uncertainty allows a finer understanding of the phenomena under study and a better assessment of the prediction accuracy. The present research deals with a methodology combining a machine learning algorithm (ELM - Extreme Learning Machine) with a bootstrap-based procedure. Developed by G.-B. Huang et al. (2006), ELM is an artificial neural network following the structure of a multilayer perceptron (MLP) with one single hidden layer. Compared to the classical MLP, ELM has the ability to learn faster without loss of accuracy, and needs only one hyper-parameter to be fitted (that is, the number of nodes in the hidden layer). The key steps of the proposed method are as follows: sample from the original data a variety of subsets using bootstrapping; from these subsets, train and validate ELM models; and compute residuals. Then, the same procedure is performed a second time with only the squared training residuals. Finally, taking into account the two modeling levels allows developing the mean prediction map, the model uncertainty variance, and the data noise variance. The proposed approach is illustrated using geospatial data. References Efron B., and Tibshirani R. 1986, Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy, Statistical Science, vol. 1: 54-75. Huang G.-B., Zhu Q.-Y., and Siew C.-K. 2006
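The two-level bootstrap described above can be sketched with ordinary linear regression standing in for ELM; this is a simplification of the method, and the one-dimensional input, seeds, and data are illustrative assumptions:

```python
import random

def fit_line(xs, ys):
    """Least-squares slope and intercept for a one-dimensional regressor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bootstrap_prediction(xs, ys, x0, n_boot=200, seed=0):
    """Level 1: spread of bootstrap-ensemble predictions -> model variance.
    Level 2: a model fitted to squared residuals -> data-noise variance."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x0)
    mean_pred = sum(preds) / len(preds)
    model_var = sum((p - mean_pred) ** 2 for p in preds) / len(preds)
    # Second level: model the squared residuals of a full-data fit.
    a, b = fit_line(xs, ys)
    res2 = [(y - (a + b * x)) ** 2 for x, y in zip(xs, ys)]
    a2, b2 = fit_line(xs, res2)
    noise_var = max(0.0, a2 + b2 * x0)
    return mean_pred, model_var, noise_var

gen = random.Random(42)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + gen.gauss(0, 0.1) for x in xs]  # synthetic y = 2x + 1 + noise
pred, model_var, noise_var = bootstrap_prediction(xs, ys, x0=2.5)
```

Summing the two variances gives the prediction-interval width at each location, which is what the mapped interval in the paper represents.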
Optimizing lengths of confidence intervals: fourth-order efficiency in location models
Klaassen, C.; Venetiaan, S.
2010-01-01
Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation
Estimates by bootstrap interval for time series forecasts obtained by theta model
Directory of Open Access Journals (Sweden)
Daniel Steffen
2017-03-01
Full Text Available In this work, an experimental computer program was developed in Matlab (version 7.1) for the univariate time series forecasting method called Theta, together with an implementation of the computer-intensive resampling technique known as the bootstrap to estimate confidence intervals for the point forecasts obtained by this method. To solve this problem, an algorithm was built that uses Monte Carlo simulation to obtain the interval estimates for the forecasts. The Theta model presented in this work was very efficient in the M3 competition of Makridakis, where 3003 series were tested. It is based on the concept of modifying the local curvature of the time series through a coefficient theta (Θ). In its simplest approach, the time series is decomposed into two theta lines representing long-term and short-term components. The prediction is made by combining the forecasts obtained by fitting the lines resulting from the theta decomposition. The MAPE values obtained for the estimates confirm the favorable results of the method in the M3 competition, making it a good alternative for time series forecasting.
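The theta decomposition described above can be illustrated directly: a theta line is L(Θ) = Θ·X + (1 − Θ)·(linear trend of X), so Θ = 0 gives the pure trend, Θ = 2 doubles the local curvature, and averaging L(0) and L(2) recovers the original series. A sketch with an illustrative series; the forecast-combination and bootstrap steps of the paper are not reproduced:

```python
def linear_trend(x):
    """Ordinary least-squares line fitted to x against t = 0..n-1."""
    n = len(x)
    t_mean = (n - 1) / 2
    x_mean = sum(x) / n
    slope = sum((t - t_mean) * (v - x_mean) for t, v in enumerate(x)) \
            / sum((t - t_mean) ** 2 for t in range(n))
    intercept = x_mean - slope * t_mean
    return [intercept + slope * t for t in range(n)]

def theta_line(x, theta):
    """L(theta) = theta * x + (1 - theta) * trend; theta scales local curvature."""
    trend = linear_trend(x)
    return [theta * v + (1 - theta) * tr for v, tr in zip(x, trend)]

x = [3.0, 5.0, 4.0, 7.0, 6.0, 9.0]
l0, l2 = theta_line(x, 0.0), theta_line(x, 2.0)
recovered = [(a + b) / 2 for a, b in zip(l0, l2)]  # averaging recovers x
```

In the full method, L(0) is extrapolated linearly and L(2) with exponential smoothing, and the two forecasts are averaged.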
Zhang, Li
With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman
Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems
DEFF Research Database (Denmark)
Georgiadis, Stylianos; Limnios, Nikolaos
2016-01-01
In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...
Interval Estimators of the Centre and Width of a Two-Dimensional Microstructure
Wimmer, G.; Karovič, K.
2009-01-01
In metrology, it is customary to report a measured quantity as a mean together with its uncertainty. This procedure is legitimate when the evaluated data are symmetrically distributed and the distribution is (at least approximately) known. But there exist many evaluation procedures in which the evaluated values are non-symmetrically distributed. In this case it is mathematically correct to use an interval estimator for the stipulated (measured) quantity, i.e., to evaluate the (1-α)-confidence interval for the true stipulated (measured) quantity. This (random) interval covers the true stipulated quantity with probability 1-α. In this paper, interval estimators are presented for some parameters of two-dimensional structures.
Directory of Open Access Journals (Sweden)
Gelle Guillaume
2007-01-01
Full Text Available A new algorithm estimating channel impulse response (CIR) length and noise variance for orthogonal frequency-division multiplexing (OFDM) systems with adaptive guard interval (GI) length is proposed. To estimate the CIR length and the noise variance, the different statistical characteristics of the additive noise and the mobile radio channels are exploited. This difference is due to the fact that the variance of the channel coefficients depends on the position within the CIR, whereas the noise variance of each estimated channel tap is equal. Moreover, the channel can vary rapidly, but its length changes more slowly than its coefficients. An auxiliary function is established to distinguish these characteristics. The CIR length and the noise variance are estimated by varying the parameters of this function. The proposed method provides reliable information on the estimated CIR length and the noise variance even at a signal-to-noise ratio (SNR) of 0 dB. This information can be applied to an OFDM system with adaptive GI length, where the length of the GI is adapted to the current length of the CIR. The length of the GI can therefore be optimized. Consequently, the spectral efficiency of the system is increased.
Directory of Open Access Journals (Sweden)
Alessandro Barbiero
2014-01-01
Full Text Available In many statistical applications, it is often necessary to obtain an interval estimate for an unknown proportion or probability or, more generally, for a parameter whose natural space is the unit interval. The customary approximate two-sided confidence interval for such a parameter, based on some version of the central limit theorem, is known to be unsatisfactory when its true value is close to zero or one or when the sample size is small. A possible way to tackle this issue is the transformation of the data through a proper function that is able to make the approximation to the normal distribution less coarse. In this paper, we study the application of several of these transformations in the context of estimating the reliability parameter for stress-strength models, with a special focus on the Poisson distribution. From this work, some practical hints emerge on which transformations more effectively improve the standard confidence interval, and in which scenarios.
Efficient bootstrap estimates for tail statistics
Breivik, Øyvind; Aarnes, Ole Johan
2017-03-01
Bootstrap resamples can be used to investigate the tail of empirical distributions as well as return value estimates from the extremal behaviour of the sample. Specifically, the confidence intervals on return value estimates or bounds on in-sample tail statistics can be obtained using bootstrap techniques. However, non-parametric bootstrapping from the entire sample is expensive. It is shown here that it suffices to bootstrap from a small subset consisting of the highest entries in the sequence to make estimates that are essentially identical to bootstraps from the entire sample. Similarly, bootstrap estimates of confidence intervals of threshold return estimates are found to be well approximated by using a subset consisting of the highest entries. This has practical consequences in fields such as meteorology, oceanography and hydrology where return values are calculated from very large gridded model integrations spanning decades at high temporal resolution or from large ensembles of independent and identically distributed model fields. In such cases the computational savings are substantial.
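The paper's observation can be illustrated for the simplest tail statistic, the sample maximum: in a full-sample bootstrap, the number of draws landing in the top-m block is Binomial(n, m/n), and only those draws can supply the maximum, so resampling the rest of the sample is unnecessary. A sketch; the subset size, seeds, and data are illustrative assumptions:

```python
import random

def bootstrap_max_from_subset(sample, m, n_boot=2000, seed=0):
    """Approximate the bootstrap distribution of the sample maximum
    using only the m largest entries.

    In a full-sample bootstrap of size n, the count K of draws hitting
    the top-m block is Binomial(n, m/n), and the resample maximum is the
    largest of those K draws. P(K = 0) = (1 - m/n)^n, roughly e^(-m), is
    negligible for moderate m, so we condition on K >= 1.
    """
    rng = random.Random(seed)
    n, p = len(sample), m / len(sample)
    top = sorted(sample)[-m:]
    maxima = []
    for _ in range(n_boot):
        k = sum(rng.random() < p for _ in range(n))  # Binomial(n, m/n)
        if k == 0:
            continue
        maxima.append(max(rng.choice(top) for _ in range(k)))
    maxima.sort()
    return maxima[int(0.025 * len(maxima))], maxima[int(0.975 * len(maxima))]

gen = random.Random(1)
data = [gen.gauss(0, 1) for _ in range(1000)]
ci = bootstrap_max_from_subset(data, m=50)  # 95% bootstrap CI for the maximum
```

The cost per replicate depends on m rather than n, which is the source of the computational savings the paper reports for large gridded datasets.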
An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting
Directory of Open Access Journals (Sweden)
Daiho Uhm
2017-11-01
Full Text Available Technology forecasting (TF) is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT) for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. The most traditional TF methods based on patent analysis share a common problem: the sparsity of the patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem degrades the performance of TF and complicates the analysis of patent keyword data. To solve this problem, we propose an interval estimation method (IEM). Using an adjusted Wald confidence interval, called the Agresti-Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
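The Agresti-Coull interval mentioned above is the Wald interval applied after adding z²/2 pseudo-successes and z²/2 pseudo-failures, which keeps it well behaved for the sparse (mostly zero) keyword counts the entry describes. A sketch with z fixed at 1.96 for 95% coverage:

```python
import math

def agresti_coull(successes, n, z=1.96):
    """Adjusted Wald 95% CI for a binomial proportion.

    Adds z^2/2 pseudo-successes and z^2/2 pseudo-failures before
    applying the usual Wald formula, then clips to [0, 1].
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

lo, hi = agresti_coull(0, 20)  # sensible nonzero upper bound even with zero counts
```

Unlike the plain Wald interval, a zero count yields a strictly positive upper bound, which is the property that makes the adjustment useful for sparse keyword data.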
Lui, Kung-Jong
2007-07-20
In a randomized clinical trial (RCT), we often encounter non-compliance with the treatment protocol for a subset of patients. The intention-to-treat (ITT) analysis is probably the most commonly used method in an RCT with non-compliance. However, the ITT analysis estimates 'the programmatic effectiveness' rather than 'the biological efficacy'. In this paper, we focus attention on the latter index and consider use of the risk difference (RD) to measure the effect of a treatment. Based on a simple additive risk model proposed elsewhere, we develop four asymptotic interval estimators of the RD for repeated binary measurements in an RCT with non-compliance. We apply Monte Carlo simulation to evaluate and compare the finite-sample performance of these interval estimators in a variety of situations. We find that all interval estimators considered here can perform well with respect to the coverage probability. We further find that the interval estimator using a tanh⁻¹(x) transformation is probably more precise than the others, while the interval estimator derived from a randomization-based approach may cause a slight loss of precision. When the number of patients per treatment is large and the probability of compliance to an assigned treatment is high, we find that all interval estimators discussed here are essentially equivalent. Finally, we illustrate use of these interval estimators with data simulated from a trial of using macrophage colony-stimulating factor to reduce febrile neutropenia incidence in acute myeloid leukaemia patients. (c) 2006 John Wiley & Sons, Ltd.
CSIR Research Space (South Africa)
Kirton, A
2010-08-01
Full Text Available intervals (confidence intervals for predicted values) for allometric estimates can be obtained using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form... for allometric relationships - and identifies the information that needs to be provided with the allometric equation if it is to be used with confidence. Correct estimation of tree biomass with known error is very important when trees are being planted...
MILITARY MISSION COMBAT EFFICIENCY ESTIMATION SYSTEM
Directory of Open Access Journals (Sweden)
Ighoyota B. AJENAGHUGHRURE
2017-04-01
Full Text Available Military infantry recruits, although trained, lack experience in real-time combat operations, despite combat simulation training. Therefore, the choice of including them in military operations is a thorough and careful process. This has left top military commanders with the tough task of deciding the best blend of inexperienced and experienced infantry soldiers for any military operation, based on available information on enemy strength and capability. This research project delves into the design of a mission combat efficiency estimator (MCEE). It is a decision support system that aids top military commanders in estimating the best combination of soldiers suitable for different military operations, based on available information on the enemy's combat experience. Hence, its advantages consist of reducing casualties and other risks that compromise overall operational success, and also boosting the morale of soldiers in an operation with information such as an estimate of the combat efficiency of their enemies. The system was developed using Microsoft ASP.NET with a SQL Server backend. A case study test conducted with the MCEE system reveals clearly that it is an efficient tool for military mission planning in terms of team selection. Hence, when the MCEE system is fully deployed, it will aid military commanders in deciding on team-member combinations for any given operation based on enemy personnel information that is well known beforehand. Further work on the MCEE will explore firepower types and their impact on mission combat efficiency estimation.
Inferring uncertainty from interval estimates: Effects of alpha level and numeracy
Directory of Open Access Journals (Sweden)
Luke F. Rinne
2013-05-01
Full Text Available Interval estimates are commonly used to descriptively communicate the degree of uncertainty in numerical values. Conventionally, low alpha levels (e.g., .05) ensure a high probability of capturing the target value between interval endpoints. Here, we test whether alpha levels and individual differences in numeracy influence distributional inferences. In the reported experiment, participants received prediction intervals for fictitious towns' annual rainfall totals (assuming approximately normal distributions). Then, participants estimated probabilities that future totals would be captured within varying margins about the mean, indicating the approximate shapes of their inferred probability distributions. Results showed that low alpha levels (vs. moderate levels, e.g., .25) more frequently led to inferences of over-dispersed approximately normal distributions or approximately uniform distributions, reducing estimate accuracy. Highly numerate participants made more accurate estimates overall, but were more prone to inferring approximately uniform distributions. These findings have important implications for presenting interval estimates to various audiences.
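Under the normality assumption used in the experiment, a (1 - α) interval of half-width h implies σ = h / z_{1-α/2}, and the normatively correct capture probability for any other margin m is then 2Φ(m/σ) - 1, which is the quantity participants were asked to estimate. A sketch; the bisection-based quantile and the example values are conveniences, not taken from the study:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_quantile(p, lo=-10.0, hi=10.0):
    """Invert the standard normal CDF by bisection (adequate for a sketch)."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def capture_probability(half_width, alpha, margin):
    """Probability a future value falls within +/- margin of the mean,
    given a (1 - alpha) prediction interval of the stated half-width."""
    sigma = half_width / norm_quantile(1 - alpha / 2)
    return 2 * norm_cdf(margin / sigma) - 1

# A 95% interval of half-width 19.6 implies sigma of about 10:
p = capture_probability(19.6, 0.05, 19.6)  # recovers ~0.95 by construction
```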
Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D
2014-08-01
Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few, single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland
Notes on interval estimation of the gamma correlation under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2012-07-01
We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
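For reference, the gamma correlation these interval estimators target is (C - D)/(C + D), the balance of concordant and discordant ordinal pairs. A brute-force sketch of the point estimate only; the stratified interval constructions of the paper are not reproduced:

```python
def gamma_correlation(xs, ys):
    """Goodman-Kruskal gamma over all pairs of observations.

    A pair is concordant if both variables move in the same direction,
    discordant if they move oppositely; pairs tied on either variable
    are skipped, as gamma ignores ties.
    """
    c = d = 0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                c += 1
            elif s < 0:
                d += 1
    return (c - d) / (c + d)

g = gamma_correlation([1, 2, 3, 4], [1, 2, 3, 4])  # perfectly concordant
```

Under stratified sampling, a per-stratum gamma is computed this way and the four estimators in the paper differ in how the strata are weighted and transformed.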
Display advertising: Estimating conversion probability efficiently
Safari, Abdollah; Altman, Rachel MacKay; Loughin, Thomas M.
2017-01-01
The goal of online display advertising is to entice users to "convert" (i.e., take a pre-defined action such as making a purchase) after clicking on the ad. An important measure of the value of an ad is the probability of conversion. The focus of this paper is the development of a computationally efficient, accurate, and precise estimator of conversion probability. The challenges associated with this estimation problem are the delays in observing conversions and the size of the data set (both...
Obtaining appropriate interval estimates for age when multiple indicators are used
DEFF Research Database (Denmark)
Fieuws, Steffen; Willems, Guy; Larsen, Sara Tangmose
2016-01-01
indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals being too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate...
Interval Estimation for True Raw and Scale Scores under the Binomial Error Model
Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.
2006-01-01
Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…
Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning
1993-01-01
Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal......
Asymptotically optimal estimation of smooth functionals for interval censoring, case 2
Groeneboom, P.; Geskus, R.
1999-01-01
For a version of the interval censoring model, case 2, in which the observation intervals are allowed to be arbitrarily small, we consider estimation of functionals that are differentiable along Hellinger differentiable paths. The asymptotic information lower bound for such functionals can be
Interval estimation of the mass fractal dimension for anisotropic sampling percolation clusters
Moskalev, P. V.; Grebennikov, K. V.; Shitov, V. V.
2011-01-01
This report focuses on the dependencies for the center and radius of the confidence interval that arise when estimating the mass fractal dimensions of anisotropic sampling clusters in the site percolation model.
Interval estimation of the mass fractal dimension for isotropic sampling percolation clusters
Moskalev, P. V.; Grebennikov, K. V.; Shitov, V. V.
2011-01-01
This report focuses on the dependencies for the center and radius of the confidence interval that arise when estimating the mass fractal dimensions of isotropic sampling clusters in the site percolation model.
Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng
2013-07-01
Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all possible continuous intervals, the spectra are first divided into intervals by moving a window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. This method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at: http://code.google.com/p/multivariate-calibration/downloads/list.
Estimating the technical efficiency of Cutflower farms
Directory of Open Access Journals (Sweden)
Kristine Joyce P. Betonio
2016-12-01
Full Text Available This study sought to estimate the technical efficiency of cutflower farms and determine the sources of inefficiency among the farmers. In order to do so, the study had two phases: Phase 1 measured the technical efficiency scores of cutflower farms using data envelopment analysis (DEA); Phase 2 determined the causes of technical inefficiency using Tobit regression analysis. A total of 120 cutflower farms located in Brgy. Kapatagan, Digos City, Philippines were considered as the decision-making units (DMUs) of the study. Only two varieties were considered in the analysis because the 120 farmers have planted only chrysanthemum (Dendranthema grandiflora) and baby's breath (Gypsophila paniculata). Results revealed that four farms are fully efficient, exhibiting technical efficiency scores of 1.00 under both CRS and VRS assumptions: Farm 95, Farm 118, Farm 119 and Farm 120. Of the four, Farm 120 is benchmarked the most, with 82 peers. Tobit model estimation revealed five significant determinants (considered sources of technical inefficiency) for the cutflower farms of Brgy. Kapatagan, Digos City: years of experience in farming, number of relevant seminars and trainings, distance of farm to the central market (bagsakan), membership in a cooperative, and access to credit.
Goff, M L; Win, B H
1997-11-01
The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.
Directory of Open Access Journals (Sweden)
Bahman Tarvirdizade
2014-01-01
Full Text Available We consider the estimation of stress-strength reliability based on lower record values when X and Y are independently but not identically inverse Rayleigh distributed random variables. The maximum likelihood, Bayes, and empirical Bayes estimators of R are obtained and their properties are studied. Confidence intervals, exact and approximate, as well as the Bayesian credible sets for R are obtained. A real example is presented in order to illustrate the inferences discussed in the previous sections. A simulation study is conducted to investigate and compare the performance of the intervals presented in this paper and some bootstrap intervals.
Evolution of heterogeneity (I2) estimates and their 95% confidence intervals in large meta-analyses
DEFF Research Database (Denmark)
Thorlund, Kristian; Imberger, Georgina; Johnston, Bradley C
2012-01-01
by sampling error. Recent studies have raised concerns about the reliability of I(2) estimates, due to their dependence on the precision of included trials and time-dependent biases. Authors have also advocated use of 95% confidence intervals (CIs) to express the uncertainty associated with I(2) estimates...
CSIR Research Space (South Africa)
Nickless, A
2010-09-01
Full Text Available . These relationships are referred to as allometric equations. In science it is important to quantify the error associated with an estimate in order to determine the reliability of the estimate. Therefore, prediction intervals or standard errors are usually quoted...
Nonparametric estimation in an "illness-death" model when all transition times are interval censored
DEFF Research Database (Denmark)
Frydman, Halina; Gerds, Thomas; Grøn, Randi
2013-01-01
veneers. Using the self-consistency algorithm we obtain the maximum likelihood estimators of the cumulative incidences of the times to events 1 and 2 and of the intensity of the 1 → 2 transition. This work generalizes previous results on the estimation in an "illness-death" model from interval censored...
[Reflection of estimating postmortem interval in forensic entomology and the Daubert standard].
Xie, Dan; Peng, Yu-Long; Guo, Ya-Dong; Cai, Ji-Feng
2013-08-01
Estimating the postmortem interval (PMI) is a central and difficult task in forensic practice, in which forensic entomology plays an indispensable role. In recent years the theories and technologies of forensic entomology have grown increasingly rich, but many problems remain in research and practice. With the introduction of the Daubert standard, estimation of PMI by forensic entomology faces greater demands for reliability and accuracy. This review summarizes the application of the Daubert standard to several aspects of forensic entomology practice: ecology, quantitative genetics, population genetics, molecular biology, and microbiology. It builds a bridge between basic research and forensic practice, aiming to provide higher accuracy in estimating the postmortem interval by forensic entomology.
Efficient volumetric estimation from plenoptic data
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
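The frequency-domain deconvolution route described above can be illustrated with a minimal volumetric Wiener-deconvolution sketch. The function name and the scalar `snr` regularizer are illustrative assumptions, not the paper's algorithm; the point is only that the whole reconstruction reduces to a few FFTs.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Volumetric Wiener deconvolution via FFTs (illustrative sketch; the
    scalar snr regularizer is an assumption, not the paper's algorithm)."""
    # shift the centered PSF so its peak sits at the array origin
    H = np.fft.fftn(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fftn(blurred)
    # Wiener filter: F = G * conj(H) / (|H|^2 + 1/SNR)
    F = G * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(F))
```

Each call costs a handful of O(N log N) FFTs over the volume, which is the efficiency advantage over iterative tomographic schemes such as MART.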
Is the reduction of birth intervals an efficient reproductive strategy in traditional Morocco?
Crognier, E
1998-01-01
Birth interval lengths are analysed from reproductive life histories of 517 Berber peasant women of the region of Marrakesh (Southern Morocco), whose fertility developed in a full traditional context. The high mortality rates associated with short birth intervals indicate that a rapid succession of births is detrimental to the progeny. The reproductive efficiency of the traditional propensity to a large family size is therefore examined by means of two different evaluations of reproductive success: the 'absolute' reproductive success (the absolute number of offspring surviving to maturity) and the 'relative' reproductive success (the proportion of live born surviving to maturity). The first shows that close pregnancies increase the fertility rate to such an extent that the associated higher number of deaths is more than compensated for, so that the women practising short birth intervals produce more surviving offspring than the others by the end of their reproductive life. The second shows that the probability of survival is directly associated with birth interval length, the efficiency of the reproductive process therefore being greater as birth intervals grow. It is suggested that these two behaviours are not contradictory, and that they represent two successive steps of the same reproductive adjustment to evolving environmental conditions.
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...
2014-01-01
The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series–parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is a fuzzy confidence interval whose obtained coverage probabilities attain the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cut for the narrowest lower expected length and the narrowest upper expected length is considered. PMID:24987728
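The triangular membership functions and α-cuts mentioned above can be sketched briefly. The steady-state availability formula mu/(lam + mu) and all numeric values below are illustrative assumptions for a single repairable unit, not the RMSS model of the paper; interval bounds follow from monotonicity of availability in the two rates.

```python
def alpha_cut(a, m, b, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, m, b), 0 <= alpha <= 1."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_availability(lam, mu, alpha):
    """Interval for steady-state availability mu/(lam + mu) when the failure
    rate lam and repair rate mu are triangular fuzzy numbers (monotone
    interval arithmetic: availability rises with mu, falls with lam)."""
    lam_lo, lam_hi = alpha_cut(*lam, alpha)
    mu_lo, mu_hi = alpha_cut(*mu, alpha)
    return (mu_lo / (lam_hi + mu_lo), mu_hi / (lam_lo + mu_hi))
```

At alpha = 1 the interval collapses to the crisp value; smaller alpha widens it, which is the trade-off behind choosing an optimal α-cut.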
Hoshi, Kiyoshi; Burges, Stephen J.
1981-10-01
An approximate estimation technique for computing the derivative of a standard gamma quantile, ω, with respect to the distribution shape parameter, β, necessary for estimating the sampling variance of a specified quantile, is developed. The modified Wilson-Hilferty transformation parameters given by Kirby are approximated with fifth-order polynomials in the skew coefficient, γ, facilitating direct estimation of ∂ω/∂β for continuous quantiles. Results are compared with more precise approximations of Harter's exact quantiles made by Bobée. The method presented provides excellent approximations corresponding to the probability range 0.001 ⩽ p ⩽ 0.999 for 0.6 ⩽ γ ⩽ 5 and satisfactory estimates of ∂ω/∂β for quantiles corresponding to 1 in 10- to 1 in 1000-yr events for smaller γ.
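The derivative ∂ω/∂β discussed above can also be checked numerically with a central finite difference on the standard gamma quantile function. This sketch uses SciPy's `gamma.ppf` as a cross-check; it is not the polynomial approximation developed in the paper.

```python
from scipy.stats import gamma

def dquantile_dshape(p, beta, h=1e-5):
    """Central-difference estimate of the derivative of the standard gamma
    quantile with respect to the shape parameter beta (a numerical
    cross-check, not the paper's polynomial approximation)."""
    return (gamma.ppf(p, beta + h) - gamma.ppf(p, beta - h)) / (2.0 * h)
```

For the median with beta = 2, the value is close to 1, consistent with the Wilson-Hilferty approximation ω ≈ β(1 − 1/(9β) + z/(3√β))³ evaluated at z = 0.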
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Directory of Open Access Journals (Sweden)
Fabiola Duarte
2017-08-01
Full Text Available The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement.
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
Jacqmin-Gadda, Hélène; Blanche, Paul; Chary, Emilie; Touraine, Célia; Dartigues, Jean-François
2016-12-01
Semicompeting risks and interval censoring are frequent in medical studies, for instance when a disease may be diagnosed only at times of visit and disease onset is in competition with death. To evaluate the ability of markers to predict disease onset in this context, estimators of discrimination measures must account for these two issues. In recent years, methods for estimating the time-dependent receiver operating characteristic curve and the associated area under the ROC curve have been extended to account for right censored data and competing risks. In this paper, we show how an approximation allows the inverse probability of censoring weighting estimator to be used for semicompeting events with interval censored data. Then, using an illness-death model, we propose two model-based estimators that rigorously handle these issues. The first estimator is fully model based, whereas the second one uses the model only to impute observations that are missing due to censoring. A simulation study shows that the bias of the inverse probability of censoring weighting estimator remains modest and may be smaller than that of the two parametric estimators when the model is misspecified. We finally recommend the nonparametric inverse probability of censoring weighting estimator as the main analysis and the imputation estimator based on the illness-death model as a sensitivity analysis. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
David Shilane
2013-01-01
Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
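The "direct removal of zeros" variant of the growth estimator might be sketched as follows. The function name, the choice of k, and the 1.96 normal quantile are illustrative assumptions rather than the authors' exact procedure; the idea is only that trimming a small, predetermined number of zeros shifts the normal-style interval away from the heavy zero mass.

```python
import numpy as np

def growth_ci(x, k=1, z=1.96):
    """Normal-style CI after removing up to k zeros from the sample
    (hypothetical sketch of a 'direct removal' growth estimator)."""
    x = np.sort(np.asarray(x, dtype=float))
    n_zeros = int((x == 0).sum())
    adj = x[min(k, n_zeros):]          # drop up to k of the leading zeros
    m, s, n = adj.mean(), adj.std(ddof=1), len(adj)
    half = z * s / np.sqrt(n)
    return m - half, m + half
```

As k → 0 (or when the sample contains no zeros) the interval reduces to the usual normal approximation around the sample mean.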
Kaszynski, Richard H; Nishiumi, Shin; Azuma, Takeshi; Yoshida, Masaru; Kondo, Takeshi; Takahashi, Motonori; Asano, Migiwa; Ueno, Yasuhiro
2016-05-01
While the molecular mechanisms underlying postmortem change have been exhaustively investigated, the establishment of an objective and reliable means for estimating postmortem interval (PMI) remains an elusive feat. In the present study, we exploit low molecular weight metabolites to estimate postmortem interval in mice. After sacrifice, serum and muscle samples were procured from C57BL/6J mice (n = 52) at seven predetermined postmortem intervals (0, 1, 3, 6, 12, 24, and 48 h). After extraction and isolation, low molecular weight metabolites were measured via gas chromatography/mass spectrometry (GC/MS) and examined via semi-quantification studies. Then, PMI prediction models were generated for each of the 175 and 163 metabolites identified in muscle and serum, respectively, using a non-linear least squares curve fitting program. A PMI estimation panel for muscle and serum was then constructed, consisting of 17 (9.7%) and 14 (8.5%) of the best PMI biomarkers identified in muscle and serum profiles, respectively, demonstrating statistically significant correlations between metabolite quantity and PMI. Using a single-blinded assessment, we carried out validation studies on the PMI estimation panels. Mean ± standard deviation for accuracy of the muscle and serum PMI prediction panels was -0.27 ± 2.88 and -0.89 ± 2.31 h, respectively. Ultimately, these studies elucidate the utility of metabolomic profiling in PMI estimation and pave the way toward biochemical profiling studies involving human samples.
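Fitting a per-metabolite PMI model by non-linear least squares, as described above, can be sketched with SciPy's `curve_fit`. The saturating model form, the synthetic levels, and the inversion helper are assumptions for illustration, not the authors' fitted curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_model(t, a, b, c):
    # hypothetical saturating accumulation of a metabolite with PMI t (hours)
    return a * (1.0 - np.exp(-b * t)) + c

def invert_pmi(y_obs, a, b, c):
    # solve sat_model(t) = y_obs for t, valid while c < y_obs < a + c
    return -np.log(1.0 - (y_obs - c) / a) / b

# synthetic, noise-free levels at the seven sampling intervals used in the study
t_obs = np.array([0.0, 1.0, 3.0, 6.0, 12.0, 24.0, 48.0])
y_obs = sat_model(t_obs, 5.0, 0.1, 1.0)
popt, _ = curve_fit(sat_model, t_obs, y_obs, p0=[1.0, 0.05, 0.0])
```

Once a curve is fitted per metabolite, an observed level can be mapped back to a PMI estimate via the inverse of the fitted function, and a panel averages such estimates across biomarkers.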
Efficient estimation of smooth distributions from coarsely grouped data.
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-07-15
Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most applications. The method is based on the composite link model, with a penalty added to ensure the smoothness of the target distribution. Estimates are obtained by maximizing a penalized likelihood. This maximization is performed efficiently by a version of the iteratively reweighted least-squares algorithm. Optimal values of the smoothing parameter are chosen by minimizing Akaike's Information Criterion. We demonstrate the performance of this method in a simulation study and provide several examples that illustrate the approach. Wide, open-ended intervals can be handled properly. The method can be extended to the estimation of rates when both the event counts and the exposures to risk are grouped. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
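The penalized composite-link idea can be sketched as follows, with a generic optimizer standing in for the paper's iteratively reweighted least-squares algorithm. The bin layout, penalty weight, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def ungroup(counts, bin_sizes, lam=1.0):
    """Ungroup binned counts into a smooth latent distribution (sketch of a
    penalized composite-link fit; a generic optimizer stands in for the
    paper's iteratively reweighted least-squares algorithm)."""
    m = int(sum(bin_sizes))
    # composition matrix C: each observed bin sums the latent cells it covers
    C = np.zeros((len(counts), m))
    start = 0
    for i, w in enumerate(bin_sizes):
        C[i, start:start + w] = 1.0
        start += w
    D = np.diff(np.eye(m), n=2, axis=0)   # second-difference penalty matrix
    y = np.asarray(counts, dtype=float)

    def penalized_negloglik(theta):
        mu = C @ np.exp(theta)            # expected bin counts
        return -(y * np.log(mu) - mu).sum() + lam * ((D @ theta) ** 2).sum()

    theta0 = np.full(m, np.log(y.sum() / m))
    res = minimize(penalized_negloglik, theta0, method="L-BFGS-B")
    return np.exp(res.x)                  # latent cell counts
```

Because the second-difference penalty is invariant to adding a constant to theta, the fitted latent counts preserve the observed total; in practice lam would be chosen by AIC, as the abstract describes.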
Sartain-Iverson, Autumn R.; Hart, Kristen M.; Fujisaki, Ikuko; Cherkiss, Michael S.; Pollock, Clayton; Lundgren, Ian; Hillis-Starr, Zandy
2016-01-01
Hawksbill sea turtles (Eretmochelys imbricata) are circumtropically distributed and listed as Critically Endangered by the IUCN (Meylan & Donnelly 1999; NMFS & USFWS 1993). To aid in population recovery and protection, the Hawksbill Recovery Plan identified the need to determine demographic information for hawksbills, such as distribution, abundance, seasonal movements, foraging areas (sections 121 and 2211), growth rates, and survivorship (section 2213, NMFS & USFWS 1993). Mark-recapture analyses are helpful in estimating demographic parameters and have been used for hawksbills throughout the Caribbean (e.g., Richardson et al. 1999; Velez-Zuazo et al. 2008); integral to these studies are recaptures at the nesting site as well as remigration interval estimates (Hays 2000). Estimates of remigration intervals (the duration between nesting seasons) are critical to marine turtle population estimates and measures of nesting success (Hays 2000; Richardson et al. 1999). Although hawksbills in the Caribbean generally show natal philopatry and nesting-site fidelity (Bass et al. 1996; Bowen et al. 2007), exceptions to this have been observed for hawksbills and other marine turtles (Bowen & Karl 2007; Diamond 1976; Esteban et al. 2015; Hart et al. 2013). This flexibility in choosing a nesting beach could therefore affect the apparent remigration interval and subsequently, region-wide population counts.
Validation of temperature methods for the estimation of pre-appearance interval in carrion insects.
Matuszewski, Szymon; Mądra-Bielewicz, Anna
2016-03-01
The pre-appearance interval (PAI) is an interval preceding appearance of an insect taxon on a cadaver. It decreases with an increase in temperature in several forensically-relevant insects. Therefore, forensic entomologists developed temperature methods for the estimation of PAI. In the current study these methods were tested in the case of adult and larval Necrodes littoralis (Coleoptera: Silphidae), adult and larval Creophilus maxillosus (Coleoptera: Staphylinidae), adult Necrobia rufipes (Coleoptera: Cleridae), adult Saprinus semistriatus (Coleoptera: Histeridae) and adult Stearibia nigriceps (Diptera: Piophilidae). Moreover, factors affecting accuracy of estimation and techniques for the approximation and correction of predictor temperature were studied using results of a multi-year pig carcass study. It was demonstrated that temperature methods outperform conventional methods. The accuracy of estimation was strongly related to the quality of the temperature model for PAI and the quality of temperature data used for the estimation. Models for larval stage performed better than models for adult stage. Mean temperature for the average seasonal PAI was a good initial approximation of predictor temperature. Moreover, iterative estimation of PAI was found to effectively correct predictor temperature, although some pitfalls were identified in this respect. Implications for the estimation of PAI are discussed.
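A common way to express a temperature model for PAI, consistent with the decrease of PAI with temperature noted above, is a log-linear fit. The data values and exact model form here are illustrative assumptions, not the models validated in the study.

```python
import numpy as np

# hypothetical seasonal data: mean ambient temperature (deg C) vs observed PAI (days)
T = np.array([12.0, 15.0, 18.0, 21.0, 24.0])
pai = np.array([20.0, 14.0, 10.0, 7.0, 5.0])

# fit ln(PAI) = a + b*T, a simple log-linear temperature model for PAI
b, a = np.polyfit(T, np.log(pai), 1)

def predict_pai(temp_c):
    """Predicted pre-appearance interval (days) at a given temperature."""
    return np.exp(a + b * temp_c)
```

The iterative correction mentioned in the abstract would then re-average temperature over the currently predicted PAI window and refit the prediction until it stabilizes.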
Ranking DMUs by Comparing DEA Cross-Efficiency Intervals Using Entropy Measures
Directory of Open Access Journals (Sweden)
Tim Lu
2016-12-01
Full Text Available Cross-efficiency evaluation, an extension of data envelopment analysis (DEA), can eliminate unrealistic weighting schemes and provide a ranking for decision-making units (DMUs). In the literature, the unique determination of input and output weights has received much attention. However, the problem of choosing between the aggressive (minimal) and benevolent (maximal) formulations for decision-making might still remain. In this paper, we develop a procedure to perform cross-efficiency evaluation without the need to make any specific choice of DEA weights. The proposed procedure takes the aggressive and benevolent formulations into account at the same time, so the choice of DEA weights can be avoided. Consequently, a number of cross-efficiency intervals are obtained for each DMU. Entropy, which is based on information theory, is an effective tool for measuring uncertainty. We then utilize entropy to construct a numerical index for DMUs with cross-efficiency intervals. A mathematical program is proposed to find the optimal entropy values of DMUs for comparison. With the derived entropy values, we can rank DMUs accordingly. Two examples are presented to show the effectiveness of the idea proposed in this paper.
A New Formula for Estimating the True QT Interval in Left Bundle Branch Block.
Wang, Binhao; Zhang, L I; Cong, Peixin; Chu, Huimin; Liu, Ying; Liu, Jinqiu; Surkis, William; Xia, Yunlong
2017-06-01
QT prolongation is an independent risk factor for cardiac mortality. Left bundle branch block (LBBB) is more common in patients as they age. Widening of the QRS in LBBB causes false QT prolongation and thus makes true QT assessment difficult. We aimed to develop a simple formula to achieve a good estimate of the QT interval in the presence of LBBB. To determine the effect of QRS duration on the QT interval, QRS and QT were measured in sinus rhythm and during right ventricular apical pacing in 62 patients (age 55 ± 11 years, 60% male) undergoing electrophysiology studies. A QT formula for LBBB (QT-LBBB) was derived based on the effect of the increased QRS(LBBB) on QT(LBBB). The predictive accuracy of the QT-LBBB formula was then tested in 22 patients (age 66 ± 13 years, 64% male) with intermittent LBBB, with comparisons to prior QT formulae and the JT index. On average, the net increase in QRS(LBBB) constituted 92% of the net increase in QT(LBBB). A new formula, QT-LBBB = QT(LBBB) - (0.86 * QRS(LBBB) - 71), which takes the net increase in QRS(LBBB) into account, best predicted the QT interval with heart rate corrected QTc in the test set of LBBB ECGs when compared to the baseline value and prior formulae. The QT-LBBB formula developed in this study best estimates the true QT interval in the presence of LBBB. It is simple and therefore can be easily utilized in clinical practice. © 2017 Wiley Periodicals, Inc.
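The quoted formula is simple enough to apply directly; a small helper (hypothetical name, measurements in milliseconds) looks like:

```python
def qt_lbbb_corrected(qt_ms, qrs_ms):
    """Apply QT-LBBB = QT(LBBB) - (0.86 * QRS(LBBB) - 71), all in ms."""
    return qt_ms - (0.86 * qrs_ms - 71.0)

# e.g. a measured QT of 480 ms with a 160 ms QRS in LBBB
corrected = qt_lbbb_corrected(480.0, 160.0)
```

Heart-rate correction (QTc) would then be applied to the corrected value, as in the study.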
Directory of Open Access Journals (Sweden)
Ascaso Carlos
2010-04-01
Full Text Available Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter, we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results: The approach is illustrated in a real case example where the agreement between two instruments, a manual mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects where measures of systolic blood pressure were taken twice by each device. A simulation study is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions: The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TI based on normal populations are implemented in most standard statistical packages, thus making it simple for any practitioner to implement our proposal to assess agreement.
Faris, A M; Wang, H-H; Tarone, A M; Grant, W E
2016-05-31
Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.
Profiling of RNA degradation for estimation of post mortem [corrected] interval.
Directory of Open Access Journals (Sweden)
Fernanda Sampaio-Silva
Full Text Available An estimation of the post mortem interval (PMI) is frequently touted as the Holy Grail of forensic pathology. During the first hours after death, PMI estimation is dependent on the rate of physically observable modifications including algor, rigor and livor mortis. However, these assessment methods are still largely unreliable and inaccurate. Alternatively, RNA has been put forward as a valuable tool in forensic pathology, namely to identify body fluids, estimate the age of biological stains and study the mechanism of death. Nevertheless, attempts to find a correlation between RNA degradation and PMI have been unsuccessful. The aim of this study was to characterize RNA degradation in different post mortem tissues in order to develop a mathematical model that can be used as a coadjuvant method for a more accurate PMI determination. For this purpose, we performed an eleven-hour kinetic analysis of total RNA extracted from murine visceral and muscle tissues. The degradation profile of total RNA and the expression levels of several reference genes were analyzed by quantitative real-time PCR. A quantitative analysis of normalized transcript levels in these tissues allowed the identification of four quadriceps muscle genes (Actb, Gapdh, Ppia and Srp72) that were found to significantly correlate with PMI. These results allowed us to develop a mathematical model with predictive value for estimation of the PMI (confidence interval of ±51 minutes at 95%) that can become an important complementary tool for traditional methods.
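A linear model predicting PMI from normalized transcript levels of several reference genes, in the spirit of the four-gene model above, might be sketched as follows. The synthetic expression data, drift rates, and noise level are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
pmi = np.linspace(0.0, 660.0, 12)              # minutes post mortem
# hypothetical normalized levels for Actb, Gapdh, Ppia and Srp72, each
# drifting linearly with PMI plus measurement noise
X = np.column_stack([1.0 - 0.001 * pmi + 0.01 * rng.standard_normal(12)
                     for _ in range(4)])
A = np.column_stack([np.ones(12), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, pmi, rcond=None)

def predict_pmi(levels):
    """Predict PMI (minutes) from the four normalized transcript levels."""
    return coef[0] + np.asarray(levels) @ coef[1:]
```

Combining several genes averages out per-transcript measurement noise, which is what narrows the prediction interval relative to any single marker.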
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Goldfeld, K S
2014-03-30
Cost-effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method of unbiased estimation of cost-effectiveness using observational study data that is not fully observed. The method, twice-weighted multiple interval estimation of a marginal structural model, was developed in order to analyze the cost-effectiveness of treatment protocols for advanced dementia residents living in nursing homes when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost-effectiveness. Copyright © 2013 John Wiley & Sons, Ltd.
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes native functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to using Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference sample is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
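The same percentile-bootstrap logic described for Excel can be sketched in Python. The 500-1000 resample recommendation from the abstract is followed; the function name and seed are assumptions.

```python
import numpy as np

def bootstrap_reference_interval(values, n_boot=1000, seed=1):
    """Bootstrap estimate of a reference interval: resample with replacement
    and average the 2.5th and 97.5th percentiles over resamples."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(values, size=values.size, replace=True)
        lows[i] = np.percentile(resample, 2.5)
        highs[i] = np.percentile(resample, 97.5)
    return lows.mean(), highs.mean()
```

For roughly Gaussian data the result approaches the parametric mean ± 1.96 SD limits, which is a useful sanity check on either approach.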
Use of Megaselia scalaris (Diptera: Phoridae) for post-mortem interval estimation indoors.
Reibe, Saskia; Madea, Burkhard
2010-02-01
In forensic entomology, the determination of a minimum post-mortem interval often relies on the determination of the age of blow flies, since they are generally among the first colonisers of a corpse. In indoor cases, the blow flies might be delayed in arriving at the corpse. If the windows are closed, the attracting odour is confined and does not reach the flies, so that it takes longer for them to find and access the corpse. If blow flies are delayed or are unable to reach a corpse lying inside a room, much smaller flies (Phoridae) can enter and deposit their offspring. We present three indoor-case scenarios in which age determination of Megaselia scalaris gave much more accurate estimates of the minimum post-mortem interval than from larvae of Calliphoridae. In all cases, the estimated age of the blow fly larvae was between 10 and 20 days too short compared to the actual PMI. Estimation of the PMI using developmental times of Phoridae can be a good alternative to the determination of blow fly larval age, since Phoridae are found inside apparently enclosed environments (sealed plastic bags or rooms with closed doors and windows) and also at temperatures at which blow flies are inactive.
How efficient is estimation with missing data?
DEFF Research Database (Denmark)
Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai
2011-01-01
In this paper, we present a new evaluation approach for missing data techniques (MDTs) in which their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data percent...
Damann, Franklin E; Williams, Daniel E; Layton, Alice C
2015-07-01
Bacteria are taphonomic agents of human decomposition, potentially useful for estimating postmortem interval (PMI) in late-stage decomposition. Bone samples from 12 individuals and three soil samples were analyzed to assess the effects of decomposition and advancing time on bacterial communities. Results indicated that partially skeletonized remains maintained a presence of bacteria associated with the human gut, whereas the bacterial composition of dry skeletal remains maintained a community profile similar to soil communities. Variation in the UniFrac distances was significantly greater between groups than within groups across decomposition stages. The oligotrophic environment of bone relative to soft tissue and the physical protection of organic substrates may preclude bacterial blooms during the first years of skeletonization. Therefore, community membership (unweighted) may be better for estimating PMI from skeletonized remains than community structure (weighted). © 2015 American Academy of Forensic Sciences.
A Novel Approach for Estimating the Recurrence Intervals of Channel-Forming Discharges
Directory of Open Access Journals (Sweden)
Andy Ward
2016-06-01
Full Text Available Channel-forming discharges typically are associated with recurrence intervals of less than five years, and usually less than two years. However, the actual frequency of occurrence of these discharges is often several times higher than the statistical expectation. This result was confirmed by using the Log-Pearson Type 3 statistical method to analyze measured annual series of instantaneous peaks and peak daily means for 150 catchments in six states in the North Central Region of the United States. Discharge records ranged from 39 to 102 years and catchment sizes ranged from 29 to 6475 km². For each state, mean values of the ratio of calculated to expected occurrences exceeded 1.0 for recurrence intervals from 2 to 100 years, with R-squared values ranging from 0.64 to 0.97. However, catchment-by-catchment variability was too large for the state-level relationships to be useful. We propose a method, called the Full Daily Distribution (FDD), which uses all of the daily values for the available period of record. The approach provided ratios of calculated to expected occurrences that were approximately 1.0. For recurrence intervals of less than five years, the FDD-calculated discharges were much greater than those obtained by using the Log-Pearson Type 3 approach with annual series of instantaneous peaks or peak daily means. The method can also calculate discharges for recurrence intervals of less than one year. The study indicates a need to enhance the Log-Pearson Type 3 method to provide better estimates of channel-forming discharges, and the proposed FDD could be a useful tool for this purpose.
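A minimal sketch of the FDD idea, under the assumption that the discharge with recurrence interval T years is the daily value exceeded on average once every T years in the full daily record (the paper's exact procedure may differ; names are illustrative):

```python
def fdd_discharge(daily_flows, recurrence_years):
    """Discharge exceeded on average once per `recurrence_years`,
    read directly off the full daily record (a sketch of the
    Full Daily Distribution idea)."""
    flows = sorted(daily_flows, reverse=True)   # largest first
    n_years = len(daily_flows) / 365.25
    # rank of the flow exceeded n_years / recurrence_years times in the record
    rank = max(0, round(n_years / recurrence_years) - 1)
    return flows[min(rank, len(flows) - 1)]
```

Because every day contributes a data point, this construction also yields discharges for sub-annual recurrence intervals, which an annual-maximum series cannot.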
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
Directory of Open Access Journals (Sweden)
Brooke E. Starkoff
2014-07-01
Full Text Available Vigorous aerobic exercise may improve aerobic capacity (VO2max) and cardiometabolic profiles in adolescents with obesity, independent of changes to weight. Our aim was to assess changes in estimated VO2max in obese adolescents following a 6-week exercise program of varying intensities. Adolescents with obesity were recruited from an American mid-west children's hospital and randomized into moderate exercise (MOD) or high-intensity interval exercise (HIIE) groups for a 6-week exercise intervention consisting of cycle ergometry for 40 minutes, 3 days per week. Heart rate was measured every two minutes during each exercise session. Estimated VO2max measured via the Åstrand cycle test, body composition, and physical activity (PA) enjoyment evaluated via questionnaire were assessed pre/post-intervention. Twenty-seven adolescents (age 14.7±1.5; 17 female, 10 male) completed the intervention. Estimated VO2max increased only in the HIIE group (20.0±5.7 to 22.7±6.5 ml/kg/min, p=0.015). The HIIE group also demonstrated increased PA enjoyment, which was correlated with the average heart rate achieved during the intervention (r=0.55; p=0.043). Six weeks of HIIE elicited improvements in estimated VO2max in adolescents with obesity. Furthermore, those exercising at higher heart rates demonstrated greater PA enjoyment, implicating enjoyment as an important determinant of VO2max, specifically following higher intensity activities.
An Efficient Nonlinear Filter for Spacecraft Attitude Estimation
Directory of Open Access Journals (Sweden)
Bing Liu
2014-01-01
Full Text Available Increasing the computational efficiency of attitude estimation is a critical problem for modern spacecraft, especially those with limited computing resources. In this paper, a computationally efficient nonlinear attitude estimation strategy based on vector observations is proposed. The Rodrigues parameter is chosen as the local error attitude parameter, to maintain the normalization constraint on the quaternion in the global estimator. The proposed attitude estimator operates in four stages. First, the local attitude estimation error system is described by a polytopic linear model. Then the local error attitude estimator is designed with constant coefficients based on the robust H2 filtering algorithm. Subsequently, the attitude predictions and the local error attitude estimations are calculated by a gyro-based model and the local error attitude estimator. Finally, the attitude estimations are updated by combining the predicted attitude with the local error attitude estimations. Since the local error attitude estimator has constant coefficients, it does not need to calculate the matrix inversion for the filter gain matrix or update the Jacobian matrices online to obtain the local error attitude estimations. As a result, the computational complexity of the proposed attitude estimator is reduced significantly. Simulation results demonstrate the efficiency of the proposed attitude estimation strategy.
Efficient estimation for high similarities using odd sketches
DEFF Research Database (Denmark)
Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang
2014-01-01
. This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector...... comparison. We present a theoretical analysis of the quality of estimation to guarantee the reliability of Odd Sketch-based estimators. Our experiments confirm this efficiency, and demonstrate the efficiency of Odd Sketches in comparison with $b$-bit minwise hashing schemes on association rule learning...
HMGB1: A new marker for estimation of the postmortem interval
KIKUCHI, KIYOSHI; KAWAHARA, KO-ICHI; BISWAS, KAMAL KRISHNA; ITO, TAKASHI; TANCHAROEN, SALUNYA; SHIOMI, NAOTO; KODA, YOSHIRO; MATSUDA, FUMIYO; MORIMOTO, YOKO; OYAMA, YOKO; TAKENOUCHI, KAZUNORI; MIURA, NAOKI; ARIMURA, NOBORU; NAWA, YUKO; ARIMURA, SHINICHIRO; JIE, MENG XIAO; SHRESTHA, BINITA; IWATA, MASAHIRO; MERA, KENTARO; SAMESHIMA, HISAYO; OHNO, YOSHIKO; MAENOSONO, RYUICHI; TAJIMA, YUTAKA; UCHIKADO, HISAAKI; KURAMOTO, TERUKAZU; NAKAYAMA, KENJI; SHIGEMORI, MINORU; YOSHIDA, YOSHIHIRO; HASHIGUCHI, TERUTO; MARUYAMA, IKURO
2010-01-01
Estimation of the postmortem interval (PMI) is one of the most important tasks in forensic medicine. Numerous methods have been proposed for the determination of the time since death by chemical means. High mobility group box-1 (HMGB1), a nonhistone DNA-binding protein, is released by eukaryotic cells upon necrosis. Postmortem serum levels of HMGB1 in 90 male Wistar rats stored at 4, 14 and 24°C since death were measured by enzyme-linked immunosorbent assay. The serum HMGB1 level showed a time-dependent increase up to seven days at 4°C. At 14°C, the HMGB1 level peaked at day 3, decreased at day 4, and then plateaued. At 24°C, the HMGB1 level peaked at day 2, decreased at day 3, and then plateaued. Our findings suggest that HMGB1 is related to the PMI in rats. PMID:23136602
Lee, Soojeong; Jeon, Gwanggil; Kang, Seokhoon
2015-01-01
Blood pressure (BP) is an important vital sign to determine the health of an individual. Although the estimation of average arterial blood pressure using oscillometric methods is possible, there are no established methods for obtaining confidence intervals (CIs) for systolic blood pressure (SBP) and diastolic blood pressure (DBP). In this paper, we propose a two-step pseudomaximum amplitude (TSPMA) as a novel approach to obtain improved CIs of SBP and DBP using a double bootstrap approach. The weighted median (WM) filter is employed to reduce impulsive and Gaussian noises in the step of preprocessing. Application of the proposed method provides tighter CIs and smaller standard deviation of CIs than the pseudomaximum amplitude-envelope and maximum amplitude algorithms with Student's t-method.
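The TSPMA algorithm itself is not reproduced in the abstract, but the two-level (double) bootstrap it builds on can be sketched generically; the bootstrap-t interval below is one standard instance, in which an outer resampling loop is studentized by an inner resampling loop. Parameter choices and function names are illustrative assumptions, not the paper's method.

```python
import random
import statistics

def _boot_se(data, stat, n_boot, rng):
    """Standard error of `stat` estimated by an inner bootstrap."""
    vals = [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]
    return statistics.stdev(vals)

def bootstrap_t_ci(data, stat=statistics.mean, n_outer=200, n_inner=50,
                   alpha=0.05, seed=2):
    """Bootstrap-t interval via nested (double) resampling: the outer
    loop resamples the data, the inner loop studentizes each outer
    statistic."""
    rng = random.Random(seed)
    theta = stat(data)
    se_hat = _boot_se(data, stat, n_inner, rng)
    t_stats = []
    for _ in range(n_outer):
        resample = rng.choices(data, k=len(data))
        se_b = _boot_se(resample, stat, n_inner, rng)
        if se_b > 0:
            t_stats.append((stat(resample) - theta) / se_b)
    t_stats.sort()
    lo_idx = int((alpha / 2) * (len(t_stats) - 1))
    hi_idx = int((1 - alpha / 2) * (len(t_stats) - 1))
    # note the quantile reversal characteristic of bootstrap-t intervals
    return theta - t_stats[hi_idx] * se_hat, theta - t_stats[lo_idx] * se_hat
```

The studentization step is what typically tightens the interval relative to a single-level percentile bootstrap, at the cost of n_outer × n_inner resamples.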
Efficient estimation under privacy restrictions in the disclosure problem
Albers, Willem/Wim
1984-01-01
In the disclosure problem, already-collected data are disclosed only to such an extent that individual privacy is protected to at least a prescribed level. For this problem, estimators are introduced which are both simple and efficient.
Estimation in the cox proportional hazards model with left-truncated and interval-censored data.
Pan, Wei; Chappell, Rick
2002-03-01
We show that the nonparametric maximum likelihood estimate (NPMLE) of the regression coefficient from the joint likelihood (of the regression coefficient and the baseline survival) works well for the Cox proportional hazards model with left-truncated and interval-censored data, but the NPMLE may underestimate the baseline survival. Two alternatives are also considered: first, the marginal likelihood approach by extending Satten (1996, Biometrika 83, 355-370) to truncated data, where the baseline distribution is eliminated as a nuisance parameter; and second, the monotone maximum likelihood estimate that maximizes the joint likelihood by assuming that the baseline distribution has a nondecreasing hazard function, which was originally proposed to overcome the underestimation of the survival from the NPMLE for left-truncated data without covariates (Tsai, 1988, Biometrika 75, 319-324). The bootstrap is proposed to draw inference. Simulations were conducted to assess their performance. The methods are applied to the Massachusetts Health Care Panel Study data set to compare the probabilities of losing functional independence for male and female seniors.
An Evaluation of Alternate Feed Efficiency Estimates in Beef Cattle
Boaitey, Albert; Goddard, Ellen; Mohapatra, Sandeep; Basarab, John A; Miller, Steve; Crowley, John
2013-01-01
In this paper the issue of nonlinearity and heterogeneity in the derivation of feed efficiency estimates for beef cattle, based on performance data for 6253 animals, is examined. Using parametric, non-parametric and integer programming approaches, we find evidence of nonlinearity between feed intake and measures of size and growth, and susceptibility of feed efficiency estimates to assumptions pertaining to heterogeneity between animals and within cohorts. Further, differences in feed cost imp...
Estimating Production Technical Efficiency of Irvingia Seed (Ogbono ...
African Journals Online (AJOL)
This study estimated the production technical efficiency of irvingia seed (Ogbono) farmers in Nsukka agricultural zone in Enugu State, Nigeria. This is against the backdrop of the importance of efficiency as a factor of productivity in a growing economy like Nigeria where resources are scarce and opportunities for new ...
Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.
2008-01-01
In this study we used hydrologic proxies to develop a daily sediment load time-series which, when integrated, agrees with decadal sediment load estimates. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers, to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data for the Delta are available after 1930 and 1959, respectively, but bathymetric change simulations for San Francisco Bay prior to this require a method to generate daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water-year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time-series of daily flows to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates, over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time-series represents the hydraulic mining period with sustained periods of increased sediment loads, and a dramatic decrease after 1910.
Yoshitaka Sasase; Tatsuya Kubokawa
2005-01-01
This paper addresses the issue of constructing a confidence interval of a small area mean in a random effect or mixed effects linear model. A crude confidence interval based on the empirical Bayes method has the drawback that its coverage probability is much less than a nominal confidence coefficient. For improving on this confidence interval, the paper provides the procedure of adjusting the critical value, and the resulting confidence interval has a coverage probability which is identical t...
Estimating the generation interval of influenza A (H1N1) in a range of social settings.
te Beest, Dennis E; Wallinga, Jacco; Donker, Tjibbe; van Boven, Michiel
2013-03-01
A proper understanding of the infection dynamics of influenza A viruses hinges on the availability of reliable estimates of key epidemiologic parameters such as the reproduction number, intrinsic growth rate, and generation interval. Often the generation interval is assumed to be similar in different settings although there is little evidence justifying this. Here we estimate the generation interval for stratifications based on age, cluster size, and social setting (camp, school, workplace, household) using data from 16 clusters of Novel Influenza A (H1N1) in the Netherlands. Our analyses are based on a Bayesian inferential framework, enabling flexible handling of both missing infection links and missing times of symptoms onset. The analysis indicates that a stratification that allows the generation interval to differ by social setting fits the data best. Specifically, the estimated generation interval was shorter in households (2.1 days [95% credible interval = 1.6-2.9]) and camps (2.3 days [1.4-3.4]) than in workplaces (2.7 days [1.9-3.7]) and schools (3.4 days [2.5-4.5]). Our findings could be the result of differences in the number of contacts between settings, differences in prophylactic use of antivirals between settings, and differences in underreporting.
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner, by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods compared to other methods.
Efficient Estimation of Nonparametric Genetic Risk Function with Censored Data.
Wang, Yuanjia; Liang, Baosheng; Tong, Xingwei; Marder, Karen; Bressman, Susan; Orr-Urtreger, Avi; Giladi, Nir; Zeng, Donglin
2015-09-01
With an increasing number of causal genes discovered for complex human disorders, it is crucial to assess the genetic risk of disease onset for individuals who are carriers of these causal mutations and compare the distribution of age-at-onset with that in non-carriers. In many genetic epidemiological studies aiming at estimating causal gene effect on disease, the age-at-onset of disease is subject to censoring. In addition, some individuals' mutation carrier or non-carrier status can be unknown due to the high cost of in-person ascertainment to collect DNA samples or death in older individuals. Instead, the probability of these individuals' mutation status can be obtained from various sources. When mutation status is missing, the available data take the form of censored mixture data. Recently, various methods have been proposed for risk estimation from such data, but none is efficient for estimating a nonparametric distribution. We propose a fully efficient sieve maximum likelihood estimation method, in which we estimate the logarithm of the hazard ratio between genetic mutation groups using B-splines, while applying nonparametric maximum likelihood estimation for the reference baseline hazard function. Our estimator can be calculated via an expectation-maximization algorithm which is much faster than existing methods. We show that our estimator is consistent and semiparametrically efficient and establish its asymptotic distribution. Simulation studies demonstrate superior performance of the proposed method, which is applied to the estimation of the distribution of the age-at-onset of Parkinson's disease for carriers of mutations in the leucine-rich repeat kinase 2 gene.
Computationally Efficient and Noise Robust DOA and Pitch Estimation
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
signals are often contaminated by different types of noise, which challenges the assumption of white Gaussian noise in most state-of-the-art methods. We establish filtering methods based on noise statistics to apply to nonparametric spectral and spatial parameter estimates of the harmonics. We design...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...
Efficient Bayesian Estimation and Combination of GARCH-Type Models
D. David (David); L.F. Hoogerheide (Lennart)
2010-01-01
textabstractThis paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
Stoichiometric estimates of the biochemical conversion efficiencies in tsetse metabolism
Directory of Open Access Journals (Sweden)
Custer Adrian V
2005-08-01
Full Text Available Abstract Background: The time-varying flows of biomass and energy in tsetse (Glossina) can be examined through the construction of a dynamic mass-energy budget specific to these flies, but such a budget depends on efficiencies of metabolic conversion which are unknown. These efficiencies of conversion determine the overall yields when food or storage tissue is converted into body tissue or into metabolic energy. A biochemical approach to the estimation of these efficiencies uses stoichiometry and a simplified description of tsetse metabolism to derive estimates of the yields, for a given amount of each substrate, of conversion product, by-products, and exchanged gases. This biochemical approach improves on estimates obtained through calorimetry because the stoichiometric calculations explicitly include the inefficiencies and costs of the reactions of conversion. However, the biochemical approach still overestimates the actual conversion efficiency because it ignores all the biological inefficiencies and costs, such as the inefficiencies of leaky membranes and the costs of molecular transport, enzyme production, and cell growth. Results: This paper presents estimates of the net amounts of ATP, fat, or protein obtained by tsetse from a starting milligram of blood, and provides estimates of the net amounts of ATP formed from the catabolism of a milligram of fat along two separate pathways, one used for resting metabolism and one for flight. These estimates are derived from stoichiometric calculations constructed based on a detailed quantification of the composition of food and body tissue and on a description of the major metabolic pathways in tsetse, simplified to single reaction sequences between substrates and products. The estimates include the expected amounts of uric acid formed, oxygen required, and carbon dioxide released during each conversion. The calculated estimates of uric acid egestion and of oxygen use compare favorably to
Ying Ouyang; Prem B. Parajuli; Daniel A. Marion
2013-01-01
Pollution of surface water with harmful chemicals and eutrophication of rivers and lakes with excess nutrients are serious environmental concerns. This study estimated surface water quality in a stream within the Yazoo River Basin (YRB), Mississippi, USA, using the duration curve and recurrence interval analysis techniques. Data from the US Geological Survey (USGS)...
Myburgh, Jolandie; L'Abbé, Ericka N; Steyn, Maryna; Becker, Piet J
2013-06-10
The validity of the method in which total body score (TBS) and accumulated degree-days (ADD) are used to estimate the postmortem interval (PMI) is examined. TBS and ADD were recorded for 232 days in northern South Africa, which has temperatures between 17 and 28 °C in summer and 6 and 20 °C in winter; winter temperatures rarely go below 0 °C. Thirty pig carcasses, which weighed between 38 and 91 kg, were used. TBS was scored using the modified method of Megyesi et al. [1]. Temperature was acquired from an on-site data logger and the weather station bureau; differences between these two sources were not statistically significant. Using log-linear random-effects maximum likelihood regression, an r² value for ADD of 0.6227 was produced, and linear regression formulae to estimate PMI from ADD with a 95% prediction interval were developed. The data of 16 additional pigs placed a year later were then used to validate the accuracy of this method. The actual PMI and ADD were compared to the estimated PMI and ADD produced by the developed formulae, as well as to the estimated PMIs within the 95% prediction interval. The validation produced poor results, as only one pig of the 16 fell within the 95% interval when using the formulae, showing that ADD has limited use in the prediction of PMI in a South African setting. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
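A linear regression with a 95% prediction interval, as used above to estimate PMI from ADD, can be sketched as follows. This is a generic OLS prediction interval using a normal approximation to the t quantile (adequate for larger samples), not the paper's fitted coefficients; names are illustrative.

```python
import math
from statistics import NormalDist

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return a, b

def prediction_interval(x, y, x_new, level=0.95):
    """Prediction interval for a NEW observation at x_new: wider than a
    confidence interval for the mean because it includes residual scatter."""
    n = len(x)
    a, b = fit_line(x, y)
    mx = sum(x) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std error
    z = NormalDist().inv_cdf(0.5 + level / 2)           # approx. t quantile
    half = z * s * math.sqrt(1 + 1 / n + (x_new - mx) ** 2 / sxx)
    yhat = a + b * x_new
    return yhat - half, yhat + half
```

The validation step above then amounts to checking whether each new pig's actual PMI falls inside the interval returned for its ADD.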
Lesnoff, Matthieu
2008-06-15
Demographic parameters are useful for assessing the productivity and dynamics of tropical livestock populations. Common parameters are the annual instantaneous hazard rates, which can be estimated by m/T (where m represents the number of the considered demographic events that occurred during the year and T the cumulated animal-time at risk). Different approaches are encountered in the literature for computing T from on-farm survey data. One crude approach ("the 12-month interval approach") only uses estimates of herd sizes at the beginning and end of the year and aggregated counts of demographic events over the year. I evaluated the potential biases of using four 12-month interval methods (M1-M4) to estimate T. Biases were evaluated by comparing the 12-month estimates to gold-standard values of T. Data came from long-term herd monitoring of cattle and small ruminants in extensive agro-pastoral systems. Animal-time at risk was correctly estimated on average by methods M1, M2 and M4 (average relative biases
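The m/T estimator and one possible 12-month-interval approximation of T can be sketched as below. Which averaging variant corresponds to which of the paper's methods M1-M4 is an assumption here; the names are illustrative.

```python
def annual_hazard_rate(events, animal_time_at_risk):
    """Annual instantaneous hazard rate h = m / T, where m is the
    number of demographic events in the year and T the cumulated
    animal-time at risk (in animal-years)."""
    return events / animal_time_at_risk

def twelve_month_T(herd_start, herd_end):
    """Crude 12-month-interval approximation of T: the average of the
    herd sizes at the beginning and end of the year (one plausible
    variant of the aggregate methods discussed; an assumption)."""
    return (herd_start + herd_end) / 2.0
```

For example, 5 deaths in a herd that shrank from 100 to 80 animals gives an approximate hazard of 5/90 per animal-year; the gold standard would instead sum each animal's exact exposure time.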
Estimating the Efficiency of Phosphopeptide Identification by Tandem Mass Spectrometry
Hsu, Chuan-Chih; Xue, Liang; Arrington, Justine V.; Wang, Pengcheng; Paez Paez, Juan Sebastian; Zhou, Yuan; Zhu, Jian-Kang; Tao, W. Andy
2017-06-01
Mass spectrometry has played a significant role in the identification of unknown phosphoproteins and sites of phosphorylation in biological samples. Analyses of protein phosphorylation, particularly large-scale phosphoproteomic experiments, have recently been enhanced by efficient enrichment, fast and accurate instrumentation, and better software, but challenges remain because of the low stoichiometry of phosphorylation and poor phosphopeptide ionization efficiency and fragmentation due to neutral loss. Phosphoproteomics has become an important dimension in systems biology studies, and it is essential to have efficient analytical tools to cover a broad range of signaling events. To evaluate current mass spectrometric performance, we present here a novel method to estimate the efficiency of phosphopeptide identification by tandem mass spectrometry. Phosphopeptides were directly isolated from whole plant cell extracts, dephosphorylated, and then incubated with one of three purified kinases (casein kinase II, mitogen-activated protein kinase 6, and SNF-related protein kinase 2.6), along with 16O4- and 18O4-ATP separately, for in vitro kinase reactions. Phosphopeptides were enriched and analyzed by LC-MS. The phosphopeptide identification rate was estimated by comparing phosphopeptides identified by tandem mass spectrometry with phosphopeptide pairs generated by stable-isotope-labeled kinase reactions. Overall, we found that current high-speed and high-accuracy mass spectrometers can identify only 20%-40% of total phosphopeptides, primarily due to relatively poor fragmentation, additional modifications, and low abundance, highlighting the urgent need for continuous efforts to improve phosphopeptide identification efficiency.
An Efficient Estimator for the Expected Value of Sample Information.
Menzies, Nicolas A
2016-04-01
Conventional estimators for the expected value of sample information (EVSI) are computationally expensive or limited to specific analytic scenarios. I describe a novel approach that allows efficient EVSI computation for a wide range of study designs and is applicable to models of arbitrary complexity. The posterior parameter distribution produced by a hypothetical study is estimated by reweighting existing draws from the prior distribution. EVSI can then be estimated using a conventional probabilistic sensitivity analysis, with no further model evaluations and with a simple sequence of calculations (Algorithm 1). A refinement of this approach (Algorithm 2) uses smoothing techniques to improve accuracy. Algorithm performance was compared with the conventional EVSI estimator (2-level Monte Carlo integration) and an alternative developed by Brennan and Kharroubi (BK), in a cost-effectiveness case study. Compared with the conventional estimator, Algorithm 2 exhibited a root mean square error (RMSE) 8%-17% lower, with far fewer model evaluations (3-4 orders of magnitude). Algorithm 1 produced results similar to those of the conventional estimator when study evidence was weak but underestimated EVSI when study evidence was strong. Compared with the BK estimator, the proposed algorithms reduced RMSE by 18%-38% in most analytic scenarios, with 40 times fewer model evaluations. Algorithm 1 performed poorly in the context of strong study evidence. All methods were sensitive to the number of samples in the outer loop of the simulation. The proposed algorithms remove two major challenges for estimating EVSI: the difficulty of estimating the posterior parameter distribution given hypothetical study data, and the need for many model evaluations to obtain stable and unbiased results. These approaches make EVSI estimation feasible for a wide range of analytic scenarios. © The Author(s) 2015.
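The reweighting idea behind Algorithm 1 can be sketched on a toy two-option decision model: prior draws are weighted by the likelihood of each simulated study dataset, and EVSI is the expected gain from deciding after the study rather than before. The binary-outcome study, decision model, and names below are illustrative assumptions, not the paper's case study.

```python
import random
from math import comb

def evsi_by_reweighting(prior_draws, nb, study_n, n_datasets=200, seed=3):
    """EVSI for a hypothetical binary-outcome study of size study_n,
    estimated by reweighting prior draws with the binomial likelihood
    of each simulated dataset (no new model evaluations needed).
    prior_draws: samples of the response probability p
    nb: function p -> tuple of net benefits, one entry per decision option
    """
    rng = random.Random(seed)
    prior_nb = [nb(p) for p in prior_draws]         # model evaluated once per draw
    n_opts = len(prior_nb[0])
    # value of the best decision made now, under prior uncertainty
    current = max(sum(row[d] for row in prior_nb) / len(prior_nb)
                  for d in range(n_opts))
    gain = 0.0
    for _ in range(n_datasets):
        p_true = rng.choice(prior_draws)            # draw a "true" parameter
        k = sum(rng.random() < p_true for _ in range(study_n))  # simulate study
        # binomial likelihood weight of each prior draw given the data
        w = [comb(study_n, k) * p**k * (1 - p)**(study_n - k)
             for p in prior_draws]
        total = sum(w)
        # value of the best decision under the reweighted (posterior) draws
        gain += max(sum(wi * row[d] for wi, row in zip(w, prior_nb)) / total
                    for d in range(n_opts))
    return gain / n_datasets - current
```

The existing net-benefit evaluations are simply reweighted per dataset, which is why the approach needs orders of magnitude fewer model runs than 2-level Monte Carlo.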
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval
Directory of Open Access Journals (Sweden)
Andrei ACHIMAŞ CADARIU
2004-08-01
Full Text Available Assessment of a controlled clinical trial requires interpreting several key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effect of the treatment is a dichotomous variable. Defined as the difference in the event rate between the control and treatment groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence interval, the asymptotic method is usually used, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
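For orientation, here is a minimal sketch of the asymptotic (Wald) method that the paper identifies as potentially inadequate; the paper's nine methods (including ADAC and ADAC1) are not reproduced, and the example counts are invented.

```python
from math import sqrt

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    """Absolute risk reduction with the asymptotic (Wald) confidence interval.

    This is the textbook method the paper critiques as potentially inadequate,
    shown here only to fix ideas; the ADAC-type methods are not reproduced.
    """
    cer = events_ctrl / n_ctrl            # control event rate
    eer = events_trt / n_trt              # experimental event rate
    arr = cer - eer                       # absolute risk reduction
    se = sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    nnt = 1 / arr if arr != 0 else float("inf")  # number needed to treat
    return arr, (arr - z * se, arr + z * se), nnt

# invented counts: 30/100 events in control, 20/100 under treatment
arr, ci, nnt = arr_wald_ci(events_ctrl=30, n_ctrl=100, events_trt=20, n_trt=100)
print(round(arr, 3), round(nnt, 1))   # → 0.1 10.0
```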
Cantürk, İsmail; Karabiber, Fethullah; Çelik, Safa; Şahin, M Feyzi; Yağmur, Fatih; Kara, Sadık
2016-02-01
In forensic medicine, estimation of the time of death (ToD) is one of the most important and challenging medico-legal problems. Despite the partial accomplishments in ToD estimations to date, the error margin of ToD estimation is still too large. In this study, electrical conductivity changes were experimentally investigated in the postmortem interval in human cases. Electrical conductivity measurements give some promising clues about the postmortem interval. A living human has a natural electrical conductivity; in the postmortem interval, intracellular fluids gradually leak out of cells. These leaked fluids combine with extra-cellular fluids in tissues and since both fluids are electrolytic, intracellular fluids help increase conductivity. Thus, the level of electrical conductivity is expected to increase with increased time after death. In this study, electrical conductivity tests were applied for six hours. The electrical conductivity of the cases exponentially increased during the tested time period, indicating a positive relationship between electrical conductivity and the postmortem interval. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r→Z transform.
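The Fisher r→Z transform mentioned above yields a simple confidence interval for a correlation coefficient. A minimal sketch follows; the illustrative r and n are assumptions, and the article's maximum-theoretical-r simulations are not reproduced.

```python
from math import atanh, tanh, sqrt

def pearson_r_ci(r, n, z=1.96):
    """95% CI for a correlation coefficient via the Fisher r -> Z transform.

    Z = atanh(r) is approximately normal with standard error 1/sqrt(n - 3);
    the interval is built on the Z scale and mapped back with tanh.
    """
    zr = atanh(r)
    se = 1.0 / sqrt(n - 3)
    return tanh(zr - z * se), tanh(zr + z * se)

# illustrative values: observed r = 0.80 from n = 30 compounds
lo, hi = pearson_r_ci(r=0.80, n=30)
print(round(lo, 3), round(hi, 3))
```

Note the asymmetry of the interval around r, which the back-transform produces automatically; this is why CIs for strong correlations should not be computed directly on the r scale.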
Prado, D.M.L.; Rocco,E.A.; SILVA, A. G.; Rocco, D.F.; Pacheco, M.T.; Silva,P.F.; Furlan, V
2016-01-01
The oxygen uptake efficiency slope (OUES) is a submaximal index incorporating cardiovascular, peripheral, and pulmonary factors that determine the ventilatory response to exercise. The purpose of this study was to evaluate the effects of continuous exercise training and interval exercise training on the OUES in patients with coronary artery disease. Thirty-five patients (59.3±1.8 years old; 28 men, 7 women) with coronary artery disease were randomly divided into two groups: continuous exercis...
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To reflect the actual situation, we should consider the condition that the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross correlation algorithm in an appropriate interval, is proposed. Simulations show that the proposed method has better performance than either the MUSIC algorithm or the cross correlation algorithm applied over the whole interval.
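The subspace step at the heart of MUSIC can be illustrated with the classical narrowband formulation on a uniform linear array. This is a generic sketch (array geometry, SNR, and a known source count are all assumptions here), not the paper's TOA/MSK formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

m, d, k = 8, 0.5, 2                  # sensors, spacing (wavelengths), sources
true_deg = np.array([-20.0, 30.0])   # assumed source directions
snapshots = 200

def steering(theta_deg):
    """ULA steering vectors, one column per direction."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

A = steering(true_deg)
S = rng.normal(size=(k, snapshots)) + 1j * rng.normal(size=(k, snapshots))
noise = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
X = A @ S + noise
R = X @ X.conj().T / snapshots            # sample covariance

w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :m - k]                         # noise subspace (source count known)

grid = np.linspace(-90.0, 90.0, 1801)
spec = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# the two largest local maxima of the pseudo-spectrum estimate the DOAs
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
est = np.sort(grid[peaks[np.argsort(spec[peaks])[-2:]]])
print(est)
```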
Raykov, Tenko
2010-05-01
An interval estimation procedure is outlined that can be used for evaluating the proportion of observed variance in a response variable, which is due to the third level of nesting in a hierarchical design. The approach is also useful when it is of concern to address the necessity of including a third level in analyses of data from a multi-level study, relative to an alternative of proceeding with two-level modelling. The proposed method is illustrated with an empirical example.
Directory of Open Access Journals (Sweden)
Awad Abd El-Halim
2013-06-01
Full Text Available Alternate furrow irrigation with proper irrigation intervals could save irrigation water and result in high grain yield with low irrigation costs in arid areas. Two field experiments were conducted in the Middle Nile Delta area of Egypt during the 2010 and 2011 seasons to investigate the impact of alternate furrow irrigation with 7-d (AFI7) and 14-d intervals (AFI14) on yield, crop water use efficiency, irrigation water productivity, and economic return of corn (Zea mays L.), as compared with every-furrow irrigation (EFI, the conventional method with a 14-d interval). Results indicated that grain yield increased under the AFI7 treatment, whereas it tended to decrease under AFI14 as compared with EFI. Irrigation water saving in the AFI7 and AFI14 treatments was approximately 7% and 17%, respectively, as compared to the EFI treatment. The AFI14 and AFI7 treatments improved both crop water use efficiency and irrigation water productivity as compared with EFI. Results also indicated that the AFI7 treatment not only increased grain yield, but also increased the benefit-cost ratio, net return, and irrigation water saving. Therefore, if low cost water is available and excess water delivery to the field does not require any additional expense, the AFI7 treatment will essentially be the best choice under the study area conditions.
Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets
Sun, Ying
2014-11-07
For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n³) operations and O(n²) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
Efficient Estimation of Smooth Distributions From Coarsely Grouped Data
DEFF Research Database (Denmark)
Rizzi, Silvia; Gampe, Jutta; Eilers, Paul H C
2015-01-01
Ungrouping binned data can be desirable for many reasons: Bins can be too coarse to allow for accurate analysis; comparisons can be hindered when different grouping approaches are used in different histograms; and the last interval is often wide and open-ended and, thus, covers a lot of information in the tail area. Age group-specific disease incidence rates and abridged life tables are examples of binned data. We propose a versatile method for ungrouping histograms that assumes that only the underlying distribution is smooth. Because of this modest assumption, the approach is suitable for most … to the estimation of rates when both the event counts and the exposures to risk are grouped.
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTI) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life across batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades off data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to optimize jointly over the regression parameter and the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification, which we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no higher than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, which achieve speed by eliminating irrelevant features early.
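A minimal numpy sketch of the alternating idea: fit the coefficients with penalty proportional to the current noise estimate, then update the noise estimate from the residuals, with a floor sigma0 providing the "smoothing". The demo data, lam value, and the proximal-gradient inner solver are all illustrative assumptions; the paper's coordinate descent with safe screening rules is not reproduced.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the prox of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smoothed_concomitant_lasso(X, y, lam, sigma0, n_outer=50, n_inner=200):
    """Alternate beta (proximal-gradient lasso with effective penalty
    lam*sigma) and sigma (residual scale, floored at sigma0 for stability)."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz const. of the loss
    sigma = max(np.linalg.norm(y) / np.sqrt(n), sigma0)
    for _ in range(n_outer):
        step = sigma / L                       # step for loss ||r||^2/(2*n*sigma)
        for _ in range(n_inner):
            grad = X.T @ (X @ beta - y) / (n * sigma)
            beta = soft(beta - step * grad, lam * step)
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma0)
    return beta, sigma

# toy sparse regression: 3 active features out of 20, true noise level 0.5
rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.normal(size=n)

beta_hat, sigma_hat = smoothed_concomitant_lasso(X, y, lam=0.3, sigma0=1e-2)
print(round(sigma_hat, 2), int((np.abs(beta_hat) > 1e-6).sum()))
```

The joint estimate returns both the sparse coefficients and a noise-level estimate, which is exactly what makes the tuning parameter interpretable without knowing the noise in advance.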
Nomogram estimates boiler and fired-heater efficiencies
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V.
1984-06-01
A nomogram permits quick estimates of the efficiency of boilers and fired heaters based on the lower heating value of the fuel. It is valid for coals, oil and natural gas. The paper presents a formula which describes the weight of air required to burn 1,000,000 Btu input of fuel and also presents a formula for flue gas production per pound of fuel. To use the nomogram it is necessary to know the higher and lower heating values of the fuel, the amount of excess air, and the exit gas and ambient air temperatures.
Gillen, Jenna B; Gibala, Martin J
2014-03-01
Growing research suggests that high-intensity interval training (HIIT) is a time-efficient exercise strategy to improve cardiorespiratory and metabolic health. "All out" HIIT models such as Wingate-type exercise are particularly effective, but this type of training may not be safe, tolerable or practical for many individuals. Recent studies, however, have revealed the potential for other models of HIIT, which may be more feasible but are still time-efficient, to stimulate adaptations similar to more demanding low-volume HIIT models and high-volume endurance-type training. As little as 3 HIIT sessions per week, involving ≤10 min of intense exercise within a time commitment of ≤30 min per session, including warm-up, recovery between intervals and cool down, has been shown to improve aerobic capacity, skeletal muscle oxidative capacity, exercise tolerance and markers of disease risk after only a few weeks in both healthy individuals and people with cardiometabolic disorders. Additional research is warranted, as studies conducted have been relatively short-term, with a limited number of measurements performed on small groups of subjects. However, given that "lack of time" remains one of the most commonly cited barriers to regular exercise participation, low-volume HIIT is a time-efficient exercise strategy that warrants consideration by health practitioners and fitness professionals.
Efficient AM Algorithms for Stochastic ML Estimation of DOA
Directory of Open Access Journals (Sweden)
Haihua Chen
2016-01-01
Full Text Available The estimation of direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which the Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA estimation accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem, so its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM greatly reduce the computational complexity of SML estimation, with IAM the best. Another advantage of IAM is that it avoids the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.
Directory of Open Access Journals (Sweden)
Jae Phil Park
2016-12-01
Full Text Available It is extremely difficult to predict the initiation time of cracking due to a large time spread in most cracking experiments. Thus, probabilistic models, such as the Weibull distribution, are usually employed to model the initiation time of cracking. Therefore, the parameters of the Weibull distribution are estimated from data collected from a cracking test. However, although the development of a reliable cracking model under ideal experimental conditions (e.g., a large number of specimens and narrow censoring intervals) could be achieved in principle, it is not straightforward to quantitatively assess the effects of the ideal experimental conditions on model estimation uncertainty. The present study investigated the effects of key experimental conditions, including the time-dependent effect of the censoring interval length, on the estimation uncertainties of the Weibull parameters through Monte Carlo simulations. The simulation results provided quantified estimation uncertainties of Weibull parameters in various cracking test conditions. Hence, it is expected that the results of this study can offer some insight for experimenters developing a probabilistic crack initiation model by performing experiments.
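A Monte Carlo study of this kind can be sketched with a grid-search maximum-likelihood fit for interval-censored Weibull data: each specimen's initiation time is only known to fall between two inspections, and the virtual experiment is repeated to see the spread of the estimates. All parameters (shape, scale, specimen count, inspection interval, grids) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

shape_true, scale_true = 2.0, 100.0      # illustrative Weibull parameters
n_spec, dt = 30, 20.0                    # specimens, inspection interval (h)

def interval_censored_mle(times, dt, shapes, scales):
    """Grid-search MLE for Weibull parameters when each initiation time is
    only known to lie in an inspection interval [k*dt, (k+1)*dt)."""
    lo = np.floor(times / dt) * dt       # interval bounds actually observed
    hi = lo + dt
    best, best_ll = None, -np.inf
    for k in shapes:
        for lam in scales:
            # P(lo < T <= hi) for Weibull(shape=k, scale=lam)
            p = np.exp(-(lo / lam) ** k) - np.exp(-(hi / lam) ** k)
            ll = np.sum(np.log(np.maximum(p, 1e-300)))
            if ll > best_ll:
                best, best_ll = (k, lam), ll
    return best

shapes = np.linspace(0.5, 4.0, 36)
scales = np.linspace(50.0, 200.0, 76)

# Monte Carlo: repeat the virtual experiment to quantify estimation spread
est = np.array([interval_censored_mle(
        scale_true * rng.weibull(shape_true, n_spec), dt, shapes, scales)
        for _ in range(30)])
print(est.mean(axis=0).round(1), est.std(axis=0).round(1))
```

Re-running with a larger `dt` (coarser inspections) or smaller `n_spec` widens the spread of `est`, which is the effect the paper quantifies.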
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards are a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
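The leverage the discount rate has on an LCC result can be fixed with a one-line present-value calculation; the annual saving, horizon, and candidate rates below are invented for illustration, not values from the report.

```python
def present_value(annual_saving, rate, years):
    """Discounted present value of a constant annual energy-cost saving."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# same $100/yr operating-cost saving over 15 years at two candidate rates
pv_low = present_value(100.0, 0.03, 15)
pv_high = present_value(100.0, 0.07, 15)
print(round(pv_low, 2), round(pv_high, 2))
```

Moving the discount rate from 3% to 7% shrinks the valued savings by roughly a quarter here, which is why the rate's estimation receives its own report.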
Li, Yanhong; Koval, John J; Donner, Allan; Zou, G Y
2010-10-30
The area (A) under the receiver operating characteristic curve is commonly used to quantify the ability of a biomarker to correctly classify individuals into two populations. However, many markers are subject to measurement error, which must be accounted for to prevent understating their effectiveness. In this paper, we develop a new confidence interval procedure for A which is adjusted for measurement error using either external or internal replicated measurements. Based on the observation that A is a function of normal means and variances, we develop the procedure by recovering variance estimates needed from confidence limits for normal means and variances. Simulation results show that the procedure performs better than the previous ones based on the delta-method in terms of coverage percentage, balance of tail errors and interval width. Two examples are presented. Copyright © 2010 John Wiley & Sons, Ltd.
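The observation that A is a function of normal means and variances is the binormal model, which can be stated in a few lines; the measurement-error-corrected interval procedure itself is not reproduced, and the example parameters are invented.

```python
from math import erf, sqrt

def binormal_auc(mu0, var0, mu1, var1):
    """AUC under the binormal model: A = Phi((mu1 - mu0) / sqrt(var0 + var1)).

    mu0/var0 describe the biomarker in controls, mu1/var1 in cases; Phi is
    the standard normal CDF, written here via erf to stay stdlib-only.
    """
    z = (mu1 - mu0) / sqrt(var0 + var1)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# cases shifted one within-group SD above controls (illustrative values)
print(round(binormal_auc(mu0=0.0, var0=1.0, mu1=1.0, var1=1.0), 3))  # → 0.76
```

Because measurement error inflates var0 and var1 without moving the means, it pushes A toward 0.5, which is the understatement of effectiveness the paper's correction addresses.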
Directory of Open Access Journals (Sweden)
Gary T Smith
Full Text Available To use clinically measured reproducibility of volumetric CT (vCT) of lung nodules to estimate error in nodule growth rate in order to determine the optimal scan interval for patient follow-up. We performed quantitative vCT on 89 stable non-calcified nodules and 49 calcified nodules measuring 3-13 mm diameter in 71 patients who underwent 3-9 repeat vCT studies for clinical evaluation of pulmonary nodules. Calculated volume standard deviation as a function of mean nodule volume was used to compute error in estimated growth rate. This error was then used to determine the optimal patient follow-up scan interval while fixing the false positive rate at 5%. Linear regression of nodule volume standard deviation versus the mean nodule volume for stable non-calcified nodules yielded a slope of 0.057 ± 0.002 (r² = 0.79, p<0.001). For calcified stable nodules, the regression slope was 0.052 ± 0.005 (r² = 0.65, p = 0.03). Using this with the error propagation formula, the optimal patient follow-up scan interval was calculated to be 81 days, independent of initial nodule volume. Reproducibility of vCT is excellent, and the standard error is proportional to the mean calculated nodule volume for the range of nodules examined. This relationship constrains statistical certainty of vCT calculated doubling times and results in an optimal scan interval that is independent of the initial nodule volume.
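A back-of-the-envelope version of the error-propagation argument can be sketched from the reported slope (volume CV ≈ 5.7%), assuming the criterion is detection of a 400-day doubling time at a one-sided 5% false-positive rate; that target doubling time is an assumption of this sketch, not a value stated in the abstract.

```python
from math import log, sqrt

cv = 0.057          # volume CV from the regression slope reported above
z = 1.645           # one-sided 5% false-positive rate
td_target = 400.0   # doubling time (days) to detect -- illustrative assumption

# Growth from two scans: g = ln(V2/V1) / dt, with SD(ln V) ~ cv, so
# SD(ln V2 - ln V1) = sqrt(2) * cv for independent scans. Require the true
# log-change ln(2) * dt / td_target to exceed the z * sqrt(2) * cv noise
# threshold, then solve for dt:
dt = z * sqrt(2) * cv * td_target / log(2)
print(round(dt, 1))   # days; note dt does not depend on the initial volume
```

Because both the noise term and the signal term scale out the initial volume, the resulting interval is volume-independent, consistent with the article's conclusion, and of the same order as the reported 81 days.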
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC
Directory of Open Access Journals (Sweden)
Francisco Nombela
2015-08-01
Full Text Available Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
Rosner, Bernard; Glynn, Robert J
2007-02-10
The Spearman (rho(s)) and Kendall (tau) rank correlation coefficients are routinely used as measures of association between non-normally distributed random variables. However, confidence limits for rho(s) are only available under the assumption of bivariate normality, and for tau under the assumption of asymptotic normality of tau. In this paper, we introduce another approach for obtaining confidence limits for rho(s) or tau based on the arcsin transformation of sample probit score correlations. This approach is shown to be applicable for an arbitrary bivariate distribution. The arcsin-based estimators for rho(s) and tau (denoted by rho(s,a), tau(a)) are shown to have asymptotic relative efficiency (ARE) of 9/π² compared with the usual estimators rho(s) and tau when rho(s) and tau are, respectively, 0. In some nutritional applications, the Spearman rank correlation between nutrient intake as assessed by a reference instrument versus nutrient intake as assessed by a surrogate instrument is used as a measure of validity of the surrogate instrument. However, if only a single replicate (or a few replicates) is available for the reference instrument, then the estimated Spearman rank correlation will be downwardly biased due to measurement error. In this paper, we use the probit transformation as a tool for specifying an ANOVA-type model for replicate ranked data, resulting in a point and interval estimate of a measurement error corrected rank correlation. This extends previous work by Rosner and Willett for obtaining point and interval estimates of measurement error corrected Pearson correlations. Copyright © 2006 John Wiley & Sons, Ltd.
Weston, Matthew; Weston, Kathryn L; Prentis, James M; Snowden, Chris P
2016-01-01
The advancement of perioperative medicine is leading to greater diversity in development of pre-surgical interventions, implemented to reduce patient surgical risk and enhance post-surgical recovery. Of these interventions, the prescription of pre-operative exercise training is gathering momentum as a realistic means for enhancing patient surgical outcome. Indeed, the general benefits of exercise training have the potential to pre-operatively optimise several pre-surgical risks factors, including cardiorespiratory function, frailty and cognitive function. Any exercise programme incorporated into the pre-operative pathway of care needs to be effective and time efficient in that any fitness gains are achievable in the limited period between the decision for surgery and operation (e.g. 4 weeks). Fortunately, there is a large volume of research describing effective and time-efficient exercise training programmes within the discipline of sports science. Accordingly, the objective of our commentary is to synthesise contemporary exercise training research, both from non-clinical and clinical populations, with the overarching aim of informing the development of effective and time-efficient pre-surgical exercise training programmes. The development of such exercise training programmes requires the careful consideration of several key principles, namely frequency, intensity, time, type and progression of exercise. Therefore, in light of more recent evidence demonstrating the effectiveness and time efficiency of high-intensity interval training, which involves brief bouts of intense exercise interspersed with longer recovery periods, the principles of exercise training programme design will be discussed mainly in the context of such high-intensity interval training programmes. Other issues pertinent to the development, implementation and evaluation of pre-operative exercise training programmes, such as individual exercise prescription, training session monitoring and potential
An efficient algebraic approach to observability analysis in state estimation
Energy Technology Data Exchange (ETDEWEB)
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
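A minimal numerical illustration of the underlying question: for DC state estimation, the network is observable iff the measurement Jacobian H (with respect to bus angles, slack removed) has full column rank. The rank test below shows the goal of observability checking, not the paper's algebraic row/column-transfer method, and the 3-bus system is invented.

```python
import numpy as np

def observable(H, tol=1e-8):
    """Observable iff the measurement Jacobian has full column rank."""
    return bool(np.linalg.matrix_rank(H, tol=tol) == H.shape[1])

# 3-bus example, bus 0 slack; states are theta1, theta2.
# Rows = DC flow measurements on lines (0-1) and (1-2).
H_full = np.array([[-1.0, 0.0],      # P01 depends on theta1 only
                   [1.0, -1.0]])     # P12 depends on theta1 and theta2
H_deficient = H_full[:1, :]          # drop the (1-2) flow measurement

print(observable(H_full), observable(H_deficient))  # → True False
```

When the test fails, the null space of H indicates which angle combinations are undetermined, which is the starting point for identifying observable islands and selecting pseudo-measurements.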
StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.
Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A
2017-10-15
Genomics features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms to perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of the genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts, and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online.
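A toy version of kernel correlation: smooth each coverage vector with a Gaussian kernel before correlating, so nearby (not just identical) positions can co-vary. This is only meant to convey the idea; StereoGene's actual estimator and its confounder correction are not reproduced, and the two tracks below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kernel(width, sigma):
    x = np.arange(-width, width + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def kernel_correlation(a, b, sigma=5):
    """Pearson correlation of the two tracks after Gaussian smoothing."""
    k = gaussian_kernel(4 * sigma, sigma)
    return np.corrcoef(np.convolve(a, k, mode="same"),
                       np.convolve(b, k, mode="same"))[0, 1]

# two features that overlap only approximately (peaks offset by 3 positions)
pos = rng.integers(0, 1000, 40)
a = np.zeros(1000); a[pos] = 1.0
b = np.zeros(1000); b[np.clip(pos + 3, 0, 999)] = 1.0

kc = kernel_correlation(a, b)
raw = np.corrcoef(a, b)[0, 1]
print(round(kc, 2), round(raw, 2))
```

The raw position-by-position correlation of the offset tracks is near zero, while the kernel version recovers the spatial association, which is the point of correlating neighborhoods rather than single positions.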
Efficient Estimation of the Impact of Observing Systems using EFSO
Kalnay, E.; Chen, T. C.; Jung, J.; Hotta, D.
2016-12-01
Massive amounts of observations are assimilated every day into modern Numerical Weather Prediction (NWP) systems. This makes it difficult to estimate the impact of a new observing system with Observing System Experiments (OSEs), because so much information is already provided by existing observations. In addition, the large volume of data prevents monitoring the impact of each assimilated observation with OSEs. We demonstrate in this study how effectively Ensemble Forecast Sensitivity to Observations (EFSO) can be used to monitor and improve the impact of observations on analyses and forecasts. In the first part, we show how to identify detrimental observations within each observing system using EFSO, a procedure termed Proactive Quality Control (PQC). Withdrawing these detrimental observations leads to improved analyses and subsequent 5-day forecasts, which also serves as a verification of EFSO. We show the feasibility of PQC for operational implementation. In the second part, we find in the estimated impact of MODIS polar winds, one of the contributors of detrimental observations, that a positive u-component of the innovation is associated with detrimental observations, whereas negative u-innovations are generally associated with beneficial impacts. Other biases, associated with height and other variables when the net impact is detrimental, were also found. By contrast, such biases do not appear in systems using a similar cloud-drift wind algorithm, such as GOES satellite winds. This finding provides guidance toward improving the system and gives a clear example of efficiently monitoring observations and testing new observing systems using EFSO. The potential of using EFSO to efficiently improve both observations and analyses is clearly shown in this study.
Parameter estimation and interval type-2 fuzzy sliding mode control of a z-axis MEMS gyroscope.
Fazlyab, Mahyar; Pedram, Maysam Zamani; Salarieh, Hassan; Alasty, Aria
2013-11-01
This paper reports a hybrid intelligent controller for single-axis MEMS vibratory gyroscopes. First, unknown parameters of a micro gyroscope, including the unknown time-varying angular velocity, are estimated online via a normalized continuous-time least mean squares algorithm. Then, an additional interval type-2 fuzzy sliding mode control is incorporated in order to match the resonant frequencies and to compensate for undesired mechanical couplings. The main advantage of this control strategy is its robustness to parameter uncertainty, external disturbance and measurement noise. Consistent estimation of parameters is guaranteed and stability of the closed-loop system is proved via the Lyapunov stability theorem. Finally, numerical simulation is performed in order to validate the effectiveness of the proposed method, both for constant and time-varying angular rates. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Mandric, Igor; Temate-Tiagueu, Yvette; Shcheglova, Tatiana; Al Seesi, Sahar; Zelikovsky, Alex; Mandoiu, Ion I
2017-10-15
This note presents IsoEM2 and IsoDE2, new versions with enhanced features and faster runtime of the IsoEM and IsoDE packages for expression level estimation and differential expression. IsoEM2 estimates fragments per kilobase million (FPKM) and transcript per million (TPM) levels for genes and isoforms with confidence intervals through bootstrapping, while IsoDE2 performs differential expression analysis using the bootstrap samples generated by IsoEM2. Both tools are available with a command line interface as well as a graphical user interface (GUI) through wrappers for the Galaxy platform. The source code of this software suite is available at https://github.com/mandricigor/isoem2. The Galaxy wrappers are available at https://toolshed.g2.bx.psu.edu/view/saharlcc/isoem2_isode2/. imandric1@student.gsu.edu or ion@engr.uconn.edu. Supplementary data are available at Bioinformatics online.
ESTIMATION OF THE EFFICIENCY OF PARTNERSHIPS BETWEEN LARGE AND SMALL BUSINESSES
Directory of Open Access Journals (Sweden)
Oleg Vasilyevich Chabanyuk
2014-05-01
Full Text Available In this article, based on the identification of key factors and their components, an algorithm is developed for the sequence of logically connected stages in the transition from a traditional enterprise to an innovation-type enterprise through intrapreneurship. The analysis of the economic efficiency of an innovative business idea proceeds as follows: experts determine the importance of the model parameters that ensure the effectiveness of intrapreneurship, and an "intrapreneurship efficiency" score is calculated using qualimetric modeling of the expert estimates. According to the author's projections, the optimum level of this indicator should exceed 0.5, although it should be noted that this level is typically reached only in the second or third year of the intrapreneurial structure's existence. The proposed method was tested in practice and can be used to establish intrapreneurship in large and medium-sized enterprises as one method of implementing the innovation activities of small businesses. DOI: http://dx.doi.org/10.12731/2218-7405-2013-10-50
Efficient Spectral Power Estimation on an Arbitrary Frequency Scale
Directory of Open Access Journals (Sweden)
F. Zaplata
2015-04-01
Full Text Available The Fast Fourier Transform is a very efficient algorithm for Fourier spectrum estimation, but it is limited to a linear frequency scale, which may not be suitable for every system. For example, audio and speech analysis needs a logarithmic frequency scale due to the characteristics of the human ear. Fast Fourier Transform algorithms cannot efficiently give the desired results, and modified techniques have to be used in this case. In the following text, a simple technique using the Goertzel algorithm is introduced that allows evaluation of the power spectrum on an arbitrary frequency scale. Owing to its simplicity, the algorithm suffers from imperfections, which are discussed and partially solved in this paper. Implementation in real systems and the impact of quantization errors proved to be critical and have to be dealt with in special cases. A simple method for dealing with quantization error is also introduced. Finally, the proposed method is compared with other methods in terms of its computational demands and potential speed.
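The core of such a technique can be illustrated with the Goertzel recurrence, which evaluates the power at one arbitrarily chosen frequency in O(N) time, so bins can be placed on a logarithmic (or any other) scale. This is a minimal floating-point sketch, not the paper's fixed-point treatment of quantization errors.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Squared magnitude of the DFT at a single, freely chosen frequency,
    computed with the Goertzel recurrence (no FFT, O(N) per frequency)."""
    w = 2.0 * math.pi * freq / sample_rate
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2   # second-order resonator update
        s_prev2, s_prev = s_prev, s
    # |X(f)|^2 from the final two recurrence states
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 8000
tone = [math.sin(2 * math.pi * 440.0 * t / fs) for t in range(800)]
p_at_tone = goertzel_power(tone, fs, 440.0)    # on the tone frequency
p_off_tone = goertzel_power(tone, fs, 1000.0)  # away from the tone
print(p_at_tone > 100 * p_off_tone)  # True
```

A logarithmically spaced spectrum is then just a loop calling `goertzel_power` once per desired bin center.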
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). It puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to more efficient use of budgetary resources. The author proposes a methodological approach and methods for evaluating the economic efficiency of PPP projects. The author's technique is based on a synthesis of approaches to evaluating projects implemented in the private and public sectors and, in contrast to existing methods, takes into account the indirect (multiplicative) effects arising during project implementation. To estimate the multiplier effect, a model of the regional economy, a social accounting matrix (SAM), was developed. The matrix is based on data for the Sverdlovsk region for 2013. The article presents the genesis of balance models of economic systems and surveys the evolution of balance models in Russian (Soviet) and foreign sources from their emergence up to the present. It is shown that the SAM is widely used throughout the world for a wide range of applications, primarily to assess the impact of various exogenous factors on a regional economy. In order to refine the estimates of multiplicative effects, the "industry" account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to consider the particular characteristics of the industry of the investment project being evaluated. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
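The multiplier logic of a SAM-based evaluation can be sketched as follows: endogenous accounts yield a matrix A of expenditure shares, and the accounting multiplier matrix (I - A)^-1 converts an exogenous injection into total (direct plus indirect) effects. All figures below are toy numbers, not Sverdlovsk region data.

```python
import numpy as np

# Toy social accounting matrix for three endogenous accounts
# (industry, households, other); purely illustrative figures.
sam = np.array([
    [30.0, 40.0, 10.0],
    [50.0, 10.0, 20.0],
    [10.0, 20.0,  5.0],
])
totals = np.array([120.0, 110.0, 60.0])   # total outlay of each account

A = sam / totals                   # column-wise expenditure shares
M = np.linalg.inv(np.eye(3) - A)   # accounting multiplier matrix (I - A)^-1

injection = np.array([10.0, 0.0, 0.0])    # exogenous 10-unit investment
effect = M @ injection                    # total direct + indirect effect
print(effect.sum() > injection.sum())  # True
```

Since every column of A sums to less than one, the multiplier matrix is nonnegative with diagonal entries of at least one, so the total effect always exceeds the injection itself.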
Ahearn, Elizabeth A.
2004-01-01
Multiple linear-regression equations were developed to estimate the magnitudes of floods in Connecticut for recurrence intervals ranging from 2 to 500 years. The equations can be used for nonurban, unregulated stream sites in Connecticut with drainage areas ranging from about 2 to 715 square miles. Flood-frequency data and hydrologic characteristics from 70 streamflow-gaging stations and the upstream drainage basins were used to develop the equations. The hydrologic characteristics (drainage area, mean basin elevation, and 24-hour rainfall) are used in the equations to estimate the magnitude of floods. Average standard errors of prediction for the equations are 31.8, 32.7, 34.4, 35.9, 37.6 and 45.0 percent for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals, respectively. Simplified equations using only one hydrologic characteristic (drainage area) also were developed. The regression analysis is based on generalized least-squares regression techniques. Observed flows (log-Pearson Type III analysis of the annual maximum flows) from five streamflow-gaging stations in urban basins in Connecticut were compared to flows estimated from national three-parameter and seven-parameter urban regression equations. The comparison shows that the three- and seven- parameter equations used in conjunction with the new statewide equations generally provide reasonable estimates of flood flows for urban sites in Connecticut, although a national urban flood-frequency study indicated that the three-parameter equations significantly underestimated flood flows in many regions of the country. Verification of the accuracy of the three-parameter or seven-parameter national regression equations using new data from Connecticut stations was beyond the scope of this study. A technique for calculating flood flows at streamflow-gaging stations using a weighted average also is described. Two estimates of flood flows, one estimate based on the log-Pearson Type III analyses of the annual
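Regional regression equations of this kind are usually log-linear in the basin characteristics, and the station-weighting technique combines an at-site estimate with the regression estimate. The sketch below uses placeholder coefficients and a generic record-length weighting; the report's fitted coefficients and exact weighting procedure must be consulted for real use.

```python
def peak_flow(drainage_area, elevation, rain_24h,
              a=100.0, b=0.85, c=-0.2, d=1.1):
    """Log-linear regional regression of the usual form
    Q = a * DA^b * E^c * P^d.  The coefficients a, b, c, d are
    placeholders, NOT the fitted values from the Connecticut study."""
    return a * drainage_area**b * elevation**c * rain_24h**d

def weighted_flow(q_site, years_record, q_regression, equiv_years):
    """Weight an at-site (log-Pearson III) estimate against a regression
    estimate by record length vs. the equation's equivalent years of
    record -- a generic sketch of the weighted-average idea."""
    n, m = years_record, equiv_years
    return (n * q_site + m * q_regression) / (n + m)

# hypothetical station: 30 years of record, regression worth 10 years
print(weighted_flow(1000.0, 30, 1200.0, 10))  # 1050.0
```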
Moffatt, Colin; Heaton, Viv; De Haan, Dorine
2016-01-01
The length or stage of development of blow fly (Diptera: Calliphoridae) larvae may be used to estimate a minimum postmortem interval, often by targeting the largest individuals of a species in the belief that they will be the oldest. However, natural variation in rate of development, and therefore length, implies that the size of the largest larva, as well as the number of larvae longer than any stated length, will be greater for larger cohorts. Length data from the blow flies Protophormia terraenovae and Lucilia sericata were collected from one field-based and two laboratory-based experiments. The field cohorts contained considerably more individuals than have been used for reference data collection in the literature. Cohorts were shown to have an approximately normal distribution. Summary statistics were derived from the collected data allowing the quantification of errors in development time which arise when different sized cohorts are compared through their largest larvae. These errors may be considerable and can lead to overestimation of postmortem intervals when making comparisons with reference data collected from smaller cohorts. This source of error has hitherto been overlooked in forensic entomology.
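The cohort-size effect described above follows from order statistics: the expected maximum of a sample grows with sample size. A quick Monte Carlo sketch (with illustrative, not species-specific, length parameters) makes this concrete:

```python
import random
import statistics

def mean_max_length(cohort_size, mu=14.0, sigma=1.0, trials=200, seed=1):
    """Monte Carlo mean of the largest larval length (mm) in a cohort of
    normally distributed lengths; mu and sigma are illustrative values,
    not reference data for any species."""
    rng = random.Random(seed)
    maxima = [max(rng.gauss(mu, sigma) for _ in range(cohort_size))
              for _ in range(trials)]
    return statistics.mean(maxima)

small = mean_max_length(50)     # small reference cohort
large = mean_max_length(5000)   # large field cohort
print(large > small)  # bigger cohorts yield a longer "largest larva"
```

Comparing the largest larva of a large field cohort against reference data from a small laboratory cohort therefore biases the inferred age, and hence the postmortem interval, upward.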
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
Iancu, Lavinia; Carter, David O; Junkins, Emily N; Purcarea, Cristina
2015-09-01
Considering the biogeographical characteristics of forensic entomology, and the recent development of forensic microbiology as a complementary approach for post-mortem interval estimation, the current study focused on characterizing the succession of necrophagous insect species and bacterial communities inhabiting the rectum and mouth cavities of swine (Sus scrofa domesticus) carcasses during a cold season outdoor experiment in an urban natural environment of Bucharest, Romania. We monitored the decomposition process of three swine carcasses during a 7 month period (November 2012-May 2013) corresponding to winter and spring periods of a temperate climate region. The carcasses, protected by wire cages, were placed on the ground in a park type environment, while the meteorological parameters were constantly recorded. The succession of necrophagous Diptera and Coleoptera taxa was monitored weekly, both the adult and larval stages, and the species were identified both by morphological and genetic characterization. The structure of bacterial communities from swine rectum and mouth tissues was characterized during the same time intervals by denaturing gradient gel electrophoresis (DGGE) and sequencing of 16S rRNA gene fragments. We observed a shift in the structure of both insect and bacterial communities, primarily due to seasonal effects and the depletion of the carcass. A total of 14 Diptera and 6 Coleoptera species were recorded on the swine carcasses, from which Calliphora vomitoria and C. vicina (Diptera: Calliphoridae), Necrobia violacea (Coleoptera: Cleridae) and Thanatophilus rugosus (Coleoptera: Silphidae) were observed as predominant species. The first colonizing wave, primarily Calliphoridae, was observed after 15 weeks when the temperature increased to 13°C. This was followed by Muscidae, Fanniidae, Anthomyiidae, Sepsidae and Piophilidae. Families belonging to Coleoptera Order were observed at week 18 when temperatures raised above 18°C, starting with
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...
Directory of Open Access Journals (Sweden)
Jin Wang
2017-03-01
Full Text Available This article proposes a multiple-step fault estimation algorithm for hypersonic flight vehicles that uses an interval type-II Takagi–Sugeno fuzzy model. First, an interval type-II Takagi–Sugeno fuzzy model is developed to approximate the nonlinear dynamic system and handle the parameter uncertainties of the hypersonic vehicle. Then, a multiple-step time-varying additive fault estimation algorithm is designed to estimate the time-varying additive elevator fault of hypersonic flight vehicles. Finally, simulations are conducted for both modeling and fault estimation; the validity and availability of the method are verified by a series of comparisons of numerical simulation results.
Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina
2012-07-10
We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. In order to find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5M aqueous iodine for 1 day; two for 7 days; three was tagged with a radiopaque dye; four was left unstained (control). Pupae stained for 7d in iodine resulted in the best contrast micro-CT scans. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age specific developmental characters are described. The technique could be used as a measure to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Brancamp, Tami U; Lewis, Kerry E; Watterson, Thomas
2010-11-01
To assess the nasalance/nasality relationship and Nasometer test sensitivity and specificity when nasality ratings are obtained with both equal appearing interval (EAI) and direct magnitude estimation (DME) scaling procedures. To test the linearity of the relationship between nasality ratings obtained from different perceptual scales. STIMULI: Audio recordings of the Turtle Passage. Participants' nasalance scores and audio recordings were obtained simultaneously. A single judge rated the samples for nasality using both EAI and DME scaling procedures. Thirty-nine participants 3 to 17 years of age. Across participants, resonance ranged from normal to severely hypernasal. Nasalance scores and two nasality ratings. The magnitude of the correlation between nasalance scores and EAI ratings of nasality (r = .63) and between nasalance and DME ratings of nasality (r = .59) was not significantly different. Nasometer test sensitivity and specificity for EAI-rated nasality were .71 and .73, respectively. For DME-rated nasality, sensitivity and specificity were .62 and .70, respectively. Regression of EAI nasality ratings on DME nasality ratings did not depart significantly from linearity. No difference was found in the relationship between nasalance and nasality when nasality was rated using EAI as opposed to DME procedures. Nasometer test sensitivity and specificity were similar for EAI- and DME-rated nasality. A linear model accounted for the greatest proportion of explained variance in EAI and DME ratings. Consequently, clinicians should be able to obtain valid and reliable estimates of nasality using EAI or DME.
On efficiency of some ratio estimators in double sampling design ...
African Journals Online (AJOL)
In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002) , Raj (1972) and Raj and Chandhok (1999).
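For reference, the conventional ratio estimator in double sampling that such alternatives are compared against can be sketched as follows; the data below are hypothetical:

```python
def double_sampling_ratio(y_sample, x_sample, x_mean_first_phase):
    """Classical ratio estimator of the population mean of y under
    double (two-phase) sampling: a large first-phase sample supplies
    the auxiliary mean of x, and the smaller second-phase sample
    supplies the paired (y, x) observations."""
    r = sum(y_sample) / sum(x_sample)   # sample ratio of y to x
    return r * x_mean_first_phase

# Hypothetical second-phase data: y roughly proportional to x
y = [12.0, 20.0, 31.0, 41.0]
x = [ 6.0, 10.0, 15.0, 20.0]
est = double_sampling_ratio(y, x, x_mean_first_phase=12.5)
print(round(est, 2))  # 25.49
```

The estimator gains efficiency over the plain sample mean precisely when y and x are strongly positively correlated, which is the setting the proposed alternatives also target.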
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2010-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of analytic density under random censorship
Belitser, E.
1996-01-01
The nonparametric minimax estimation of an analytic density at a given point, under random censorship, is considered. Although the problem of estimating density is known to be irregular in a certain sense, we make some connections relating this problem to the problem of estimating smooth
Iancu, Lavinia; Sahlean, Tiberiu; Purcarea, Cristina
2016-01-01
The estimation of postmortem interval (PMI) is affected by several factors including the cause of death, the place where the body lay after death, and the weather conditions during decomposition. Given the climatic differences among biogeographic locations, the understanding of necrophagous insect species biology and ecology is required when estimating PMI. The current experimental model was developed in Romania during the warm season in an outdoor location. The aim of the study was to identify the necrophagous insect species diversity and dynamics, and to detect the bacterial species present during decomposition in order to determine if their presence or incidence timing could be useful to estimate PMI. The decomposition process of domestic swine carcasses was monitored throughout a 14-wk period (10 July-10 October 2013), along with a daily record of meteorological parameters. The chronological succession of necrophagous entomofauna comprised nine Diptera species, with the dominant presence of Chrysomya albiceps (Wiedemann 1819) (Calliphoridae), while only two Coleoptera species were identified, Dermestes undulatus (L. 1758) and Creophilus maxillosus Brahm 1970. The bacterial diversity and dynamics from the mouth and rectum tissues, and third-instar dipteran larvae were identified using denaturing gradient gel electrophoresis analysis and sequencing of bacterial 16S rRNA gene fragments. Throughout the decomposition process, two main bacterial chronological groups were differentiated, represented by Firmicutes and Gammaproteobacteria. Twenty-six taxa from the rectal cavity and 22 from the mouth cavity were identified, with the dominant phylum in both these cavities corresponding to Firmicutes. The present data strengthen the postmortem entomological and microbial information for the warm season in this temperate-continental area, as well as the role of microbes in carcass decomposition. © The Authors 2015. Published by Oxford University Press on behalf of
An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files
Directory of Open Access Journals (Sweden)
Anthony Chan
2008-01-01
Full Text Available A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
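The access pattern the paper analyzes can be sketched with a plain sorted index: binary search locates the window endpoints, so retrieval cost depends on the window contents rather than the file size. This sketch omits the paper's hierarchical annotations, which additionally handle states that span the window boundary.

```python
import bisect

class TraceIndex:
    """Sorted-timestamp index over trace events: binary search finds a
    time window in O(log n + k), where k is the number of events in the
    window, independent of total trace length.  (Sketch only; the
    paper's format also handles states spanning the window boundary.)"""
    def __init__(self, events):
        self.events = sorted(events)              # (timestamp, payload)
        self.times = [t for t, _ in self.events]  # parallel key array

    def window(self, t0, t1):
        lo = bisect.bisect_left(self.times, t0)
        hi = bisect.bisect_right(self.times, t1)
        return self.events[lo:hi]

# 100,000 synthetic events, one every 0.5 time units
trace = TraceIndex([(t * 0.5, "event%d" % t) for t in range(100_000)])
print(len(trace.window(100.0, 101.0)))  # 3
```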
Efficient estimation of the partly linear additive Cox model
Huang, Jian
1999-01-01
The partly linear additive Cox model is an extension of the (linear) Cox model and allows flexible modeling of covariate effects semiparametrically. We study asymptotic properties of the maximum partial likelihood estimator of this model with right-censored data using polynomial splines. We show that, with a range of choices of the smoothing parameter (the number of spline basis functions) required for estimation of the nonparametric components, the estimator of the finite-d...
Efficient collaborative sparse channel estimation in massive MIMO
Masood, Mudassir
2015-08-12
We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
Control grid motion estimation for efficient application of optical flow
Zwart, Christine M
2012-01-01
Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable compu
In this work, we address uncertainty analysis for a model, presented in a companion paper, quantifying the effect of soil moisture and plant development on soybean (Glycine max (L.) Merr.) leaf conductance. To achieve this we present several methods for confidence interval estimation. Estimation ...
Butcher, J B; Moore, H E; Day, C R; Adam, C D; Drijfhout, F P
2013-10-10
In analytical chemistry, large datasets are collected using a variety of instruments for multiple tasks, and manual analysis can be time-consuming. Ideally, it is desirable to automate this process while obtaining an acceptable level of accuracy, two aims that artificial neural networks (ANNs) can fulfil. ANNs possess the ability to classify novel data based on their knowledge of the domain to which they have been exposed. ANNs can also analyse non-linear data, tolerate noise within data, and are capable of reducing the time taken to classify large amounts of novel data once trained, making them well suited to the field of analytical chemistry, where large datasets are present (such as those collected from gas chromatography-mass spectrometry (GC-MS)). In this study, the use of ANNs for the autonomous analysis of GC-MS profiles of Lucilia sericata larvae is investigated, where ANNs are required to estimate the age of the larvae to aid in the estimation of the post mortem interval (PMI). Two ANN analysis approaches are presented, in which the ANN correctly classified the data with accuracy scores of 80.8% and 87.7% and Cohen's Kappa coefficients of 0.78 and 0.86. Inspection of these results shows that the ANN confuses two consecutive days which are of the same life stage and, as a result, are very similar in their chemical profile, which is to be expected. Grouping these two days into one class further improved results, with accuracy scores of 89% and 97.5% obtained for the two analysis approaches. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
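Cohen's kappa, the chance-corrected agreement measure quoted above (0.78 and 0.86), is straightforward to compute from the true and predicted class labels; the day labels below are hypothetical:

```python
from collections import Counter

def cohens_kappa(truth, predicted):
    """Cohen's kappa: observed agreement between two labelings,
    corrected for the agreement expected by chance alone."""
    n = len(truth)
    observed = sum(t == p for t, p in zip(truth, predicted)) / n
    t_freq, p_freq = Counter(truth), Counter(predicted)
    # chance agreement from the two marginal class distributions
    expected = sum(t_freq[c] * p_freq.get(c, 0) for c in t_freq) / n**2
    return (observed - expected) / (1 - expected)

# hypothetical larval-age classes (days)
truth = ["d1", "d1", "d2", "d2", "d3", "d3"]
pred  = ["d1", "d2", "d2", "d2", "d3", "d3"]
print(round(cohens_kappa(truth, pred), 2))  # 0.75
```

Kappa is preferred over raw accuracy here because consecutive-day classes are imbalanced in how often they are predicted, and chance agreement would inflate a plain accuracy score.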
System of Indicators in Social and Economic Estimation of the Regional Energy Efficiency
Directory of Open Access Journals (Sweden)
Ivan P. Danilov
2012-10-01
Full Text Available The article offers a social and economic interpretation of energy efficiency and models a system of indicators for estimating the regional social and economic efficiency of energy resource use.
Estimating allocative efficiency in port authorities with demand uncertainty
Hidalgo, Soraya; Núñez-Sánchez, Ramón
2012-01-01
This paper aims to analyse the impact of port demand variability on the allocative efficiency of Spanish port authorities during the period 1986-2007. From a distance function model we can obtain a measure of allocative efficiency using two different approaches: error components approach and parametric approach. We model the variability of port demand from the cyclical component of traffic series by applying the Hodrick-Prescott filter. The results show that the inclusion of variability does ...
pathChirp: Efficient Available Bandwidth Estimation for Network Paths
Energy Technology Data Exchange (ETDEWEB)
Cottrell, Les
2003-04-30
This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp requires neither synchronized nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
Estimation procedure of the efficiency of the heat network segment
Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.
2017-07-01
An extensive city heat network contains many segments, each operating with a different heat-transfer efficiency. This work proposes an original technical approach: evaluating the energy-efficiency function of a heat network segment by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criterion dependences were derived for evaluating the efficiency of a given segment of the heat network and for finding the parameters for optimal control of heat supply to remote users. In general, the efficiency function of a heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can also be solved: from a specified segment efficiency and ambient temperature, the required flow rate and temperature of the heating medium can be found, and requirements for heat insulation and pipe diameters can be formulated. The calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be examined for extremums (maximums) under given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the flow rate and temperature of the heating medium can be changed, and for a network under design (construction), where the material parameters of the network can still be modified. The procedure allows the diameters and lengths of pipes, types of insulation, etc. to be refined. Pipe length may be considered as an independent parameter for calculations; optimization of this parameter is made in
Energy-Efficient Channel Estimation in MIMO Systems
Directory of Open Access Journals (Sweden)
2006-01-01
The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends adds cost in terms of hardware and energy consumption. For coherent detection, as well as for optimizations such as water filling and beamforming, it is essential that the MIMO channel is known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly than in a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes transmission of training symbols, storage of those symbols at the receiver, and channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
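As a concrete reference point for the channel-estimation step this record presupposes, the following sketch shows a standard least-squares MIMO channel estimate from known training symbols. This is not the paper's method; the dimensions, noise level, and training matrix are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_channel_estimate(Y, X):
    """Least-squares estimate of H from Y = H @ X + N, given a known
    training matrix X (n_tx x n_sym, with n_sym >= n_tx)."""
    # H_hat = Y X^H (X X^H)^{-1}
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

n_tx, n_rx, n_sym = 2, 3, 8
H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
X = rng.standard_normal((n_tx, n_sym)) + 1j * rng.standard_normal((n_tx, n_sym))
noise = 0.01 * (rng.standard_normal((n_rx, n_sym))
                + 1j * rng.standard_normal((n_rx, n_sym)))
Y = H @ X + noise
H_hat = ls_channel_estimate(Y, X)
print(np.max(np.abs(H_hat - H)))  # small estimation error
```

Longer training sequences reduce the estimation error but cost more transmit and storage energy, which is exactly the trade-off the record's energy model addresses.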
Efficient estimation of overflow probabilities in queues with breakdowns
Kroese, Dirk; Nicola, V.F.
1999-01-01
Efficient importance sampling methods are proposed for the simulation of a single-server queue with server breakdowns. The server is assumed to alternate between operational and failure states according to a continuous-time Markov chain. Both continuous (fluid flow) and discrete (single
Agent-based Security and Efficiency Estimation in Airport Terminals
Janssen, S.A.M.
We investigate the use of an Agent-based framework to identify and quantify the relationship between security and efficiency within airport terminals. In this framework, we define a novel Security Risk Assessment methodology that explicitly models attacker and defender behavior in a security
Efficiency of wear and decalcification technique for estimating the ...
Indian Academy of Sciences (India)
*Corresponding author (Fax, +55-11-30917529; Email, nvsydney@gmail.com). Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was ...
Sampling strategies for efficient estimation of tree foliage biomass
Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson
2011-01-01
Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...
Efficient Eye Typing with 9-direction Gaze Estimation
Zhang, Chi; Yao, Rui; Cai, Jinpeng
2017-01-01
Vision-based text entry systems aim to help disabled people achieve text communication using eye movement. Most previous methods have employed an existing eye tracker to predict gaze direction and designed an input method based upon that. However, with these methods eye-tracking quality is easily affected by various factors, and calibration takes a lengthy amount of time. Our paper presents a novel efficient gaze-based text input method, which has the advantage of low cost and robust...
Efficient estimation of burst-mode LDA power spectra
DEFF Research Database (Denmark)
Velte, Clara Marika; George, William K
2010-01-01
The estimation of power spectra from LDA data provides signal processing challenges for fluid dynamicists for several reasons. Acquisition is dictated by randomly arriving particles, which cause the signal to be highly intermittent. This both creates self-noise and causes the measured velocities to be biased due to the statistical dependence on the velocity and when the particle arrives. This leads to incorrect moments when the data are evaluated by arithmetic averaging. The signal can be interpreted correctly, however, by applying residence time weighting to all statistics, which eliminates... ...and correlations is included, as well as one regarding the statistical convergence of the spectral estimator for random sampling. Further, the basic representation of the burst-mode LDA signal has been revisited due to observations in recent years of particles not following the flow (e.g., particle clustering...)
Directory of Open Access Journals (Sweden)
Cecília Kosmann
2011-12-01
Chrysomya albiceps (Wiedemann) and Hemilucilia segmentaria (Fabricius) (Diptera, Calliphoridae) used to estimate the postmortem interval in a forensic case in Minas Gerais, Brazil. The corpse of a man was found in a Brazilian highland savanna (cerrado) in the state of Minas Gerais. Fly larvae were collected at the crime scene and arrived at the laboratory three days afterwards. From the eight pre-pupae, seven adults of Chrysomya albiceps (Wiedemann, 1819) emerged and, from the two larvae, two adults of Hemilucilia segmentaria (Fabricius, 1805) were obtained. As necrophagous insects use corpses as a feeding resource, their development rate can be used as a tool to estimate the postmortem interval. The post-embryonic development stage of the immatures collected on the body was estimated as the difference between the total development time and the time required for them to become adults in the lab. The estimated age of the maggots from both species and the minimum postmortem interval were four days. This is the first time that H. segmentaria has been used to estimate the postmortem interval in a forensic case.
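The age calculation described in this record (age at collection equals the species' total development time minus the time the immatures still needed in the lab to reach adulthood) is simple arithmetic; a minimal sketch, using made-up illustrative numbers rather than the case data:

```python
def estimate_insect_age(total_dev_days, days_to_adult_in_lab):
    """Age of the immatures when collected: the species' total
    development time minus the time they still needed in the lab
    to reach adulthood."""
    return total_dev_days - days_to_adult_in_lab

# Illustrative numbers only (not the case data): a species with a
# 10-day egg-to-adult cycle whose larvae emerged 6 days after collection
min_pmi_days = estimate_insect_age(10, 6)
print(min_pmi_days)  # 4
```

The resulting age is a minimum postmortem interval, since oviposition can only occur at or after death.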
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and the interval estimators based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
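For intuition about interval estimation of a rate ratio under a Poisson model, here is a minimal sketch of a standard log-scale Wald interval for two independent Poisson rates. This is a deliberate simplification, not the record's WLS or MH estimators for the crossover design; counts and exposures below are illustrative.

```python
import math

def poisson_rate_ratio_ci(x1, t1, x2, t2, z=1.96):
    """Wald CI for the ratio of two Poisson rates on the log scale.
    x_i: event counts; t_i: exposure (e.g., person-time)."""
    ratio = (x1 / t1) / (x2 / t2)
    se_log = math.sqrt(1.0 / x1 + 1.0 / x2)  # var of log rate ~ 1/count
    lo = ratio * math.exp(-z * se_log)
    hi = ratio * math.exp(z * se_log)
    return ratio, lo, hi

ratio, lo, hi = poisson_rate_ratio_ci(30, 100.0, 20, 100.0)
print(ratio, lo, hi)  # 1.5 with an interval roughly (0.85, 2.64)
```

Working on the log scale keeps the interval positive and roughly symmetric in relative terms, which is why ratio estimators are usually constructed this way.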
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300-m transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses
Combining ability estimates of sulfate uptake efficiency in maize.
Motto, M; Saccomani, M; Cacco, G
1982-03-01
Plant root nutrient uptake efficiency may be expressed by the kinetic parameters Vmax and Km, as for normal enzymatic reactions. These parameters are apparently useful indices of the level of adaptation of genotypes to the nutrient conditions in the soil. Moreover, sulfate uptake capacity has been considered a valuable index for selecting superior hybrids characterized by both high grain yield and efficiency in nutrient uptake. Therefore, the purpose of this research was to determine combining ability for sulfate uptake in a diallel series of maize hybrids among five inbreds. Wide differences among the 20 single crosses were obtained for Vmax and Km. The general and specific combining ability mean squares were significant and important for each trait, indicating the presence of a considerable amount of both additive and nonadditive gene effects in the control of sulfate uptake. In addition, maternal and nonmaternal components of F1 reciprocal variation showed sizeable effects on all the traits considered. A relatively high correlation was also detected between Vmax and Km. However, both traits displayed enough variation to suggest that simultaneous improvement of both Vmax and Km should be feasible. A further noteworthy finding in this study was the identification of one inbred line that was the best overall parent for improving both the affinity and velocity strategies of sulfate uptake.
Ren, Junjie; Zhang, Shimin
2013-01-01
Recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among recurrence intervals of large earthquakes in preseismic and postseismic estimates based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably generates events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yr is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
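The moment-balance logic of this record reduces to one division: the recurrence interval is the coseismic moment of a characteristic event divided by the interseismic moment accumulation rate. In the sketch below, the moment rate comes from the abstract, while the coseismic moment is an assumed value chosen to be consistent with the reported ~3900 yr interval, not a figure from the paper.

```python
def recurrence_interval_yr(coseismic_moment_nm, moment_rate_nm_per_yr):
    """Years required for the fault's moment accumulation rate to
    store the moment released in one characteristic earthquake."""
    return coseismic_moment_nm / moment_rate_nm_per_yr

moment_rate = 2.7e17   # N*m/yr, from the abstract
m0_2008 = 1.05e21      # N*m, assumed moment of a 2008-type event
print(round(recurrence_interval_yr(m0_2008, moment_rate)))  # ~3900 yr
```

The ±400 yr uncertainty in the abstract follows from propagating the ±0.3 × 10¹⁷ N m/yr spread in the moment rate through this same ratio.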
Soojeong Lee
2014-01-01
Confidence intervals (CIs) are generally not provided along with estimated systolic blood pressure (SBP) and diastolic blood pressure (DBP) measured using oscillometric blood pressure devices. No criteria exist to determine the CI from a small sample set of oscillometric blood pressure measurements. We provide an extended methodology to improve estimation of CIs of SBP and DBP based on a nonparametric bootstrap-after-jackknife function and a Bayesian approach. We use the nonparametric bootstr...
Efficient Topology Estimation for Large Scale Optical Mapping
Elibol, Armagan; Garcia, Rafael
2013-01-01
Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
A Concept of Approximated Densities for Efficient Nonlinear Estimation
Directory of Open Access Journals (Sweden)
Virginie F. Ruiz
2002-10-01
This paper presents the theoretical development of a nonlinear adaptive filter based on a concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. As opposed to conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints. Thus, the densities created are of an exponential type and depend on a finite number of parameters. The filtering yields recursive equations involving these parameters. The update applies the Bayes theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and the results prove superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
Motion estimation for video coding efficient algorithms and architectures
Chakrabarti, Indrajit; Chatterjee, Sumit Kumar
2015-01-01
The need of video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to the postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research done involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this aspect, the book can be considered as first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances including video camcorders and smartphones.
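As an illustration of the fast three-step search mentioned in this record, here is a minimal NumPy sketch of the classic algorithm on a synthetic smooth frame. The block size, frame contents, and coordinate convention are illustrative choices, not the book's implementation.

```python
import numpy as np

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by (dx, dy); inf if out of frame."""
    h, w = ref.shape
    rx, ry = bx + dx, by + dy
    if rx < 0 or ry < 0 or rx + bs > w or ry + bs > h:
        return np.inf
    return np.abs(cur[by:by + bs, bx:bx + bs]
                  - ref[ry:ry + bs, rx:rx + bs]).sum()

def three_step_search(ref, cur, bx, by, bs=8):
    """Classic three-step search: refine the best displacement by
    checking 9 candidates at step sizes 4, then 2, then 1."""
    cx, cy = 0, 0
    for step in (4, 2, 1):
        candidates = [(cx + dx, cy + dy)
                      for dx in (-step, 0, step) for dy in (-step, 0, step)]
        cx, cy = min(candidates,
                     key=lambda d: sad(ref, cur, bx, by, d[0], d[1], bs))
    return cx, cy

# Synthetic smooth frame; cur is ref shifted by the motion vector (4, 2)
yy, xx = np.mgrid[0:48, 0:48]
frame = np.sin(xx / 4.0) + np.cos(yy / 6.0)
ref, cur = frame[0:38, 0:38], frame[2:40, 4:42]
print(three_step_search(ref, cur, bx=10, by=10))  # (4, 2)
```

The three-step search evaluates at most 25 candidates instead of the 225 a full ±7 exhaustive search would need, which is the kind of saving the VLSI architectures in the book exploit.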
Numerically efficient estimation of relaxation effects in magnetic particle imaging.
Rückert, Martin A; Vogel, Patrick; Jakob, Peter M; Behr, Volker C
2013-12-01
Current simulations of the signal in magnetic particle imaging (MPI) are either based on the Langevin function or on directly measuring the system function. The former completely ignores the influence of finite relaxation times of magnetic particles, and the latter requires time-consuming reference scans with an existing MPI scanner. Therefore, the resulting system function only applies for a given tracer type and the properties of the applied scanning trajectory. It requires separate reference scans for different trajectories and does not allow simulating theoretical magnetic particle suspensions. The most accessible and accurate way for including relaxation effects in the signal simulation would be using the Langevin equation. However, this is a very time-consuming approach because it calculates the stochastic dynamics of the individual particles and averages over large particle ensembles. In the current article, a numerically efficient way for approximating the averaged Langevin equation is proposed, which is much faster than the approach based on the Langevin equation because it is directly calculating the averaged time evolution of the magnetization. The proposed simulation yields promising results. Except for the case of small orthogonal offset fields, a high agreement with the full but significantly slower simulation could be shown.
Lui, Kung-Jong; Chang, Kuang-Chao
2015-01-01
When comparing two doses of a new drug with a placebo, we may consider using a crossover design subject to the condition that the high dose cannot be administered before the low dose. Under a random-effects logistic regression model, we focus our attention on dichotomous responses when the high dose cannot be used first in a three-period crossover trial. We derive asymptotic test procedures for testing equality between treatments. We further derive interval estimators to assess the magnitude of the relative treatment effects. We employ Monte Carlo simulation to evaluate the performance of these test procedures and interval estimators in a variety of situations. We use data taken as part of a trial comparing two different doses of an analgesic with a placebo for the relief of primary dysmenorrhea to illustrate the use of the proposed test procedures and estimators.
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
Rotterdam, E.P.; Katan, M.B.; Knuiman, J.T.
1987-01-01
We studied intra-individual variation in total and high-density lipoprotein (HDL) cholesterol in healthy volunteers (22 men and 19 women, ages 19 to 62 years) on controlled natural diets. The within-person coefficient of variation (CV) depended on the interval between blood samples, increasing from
National Research Council Canada - National Science Library
Gillen, Jenna B; Gibala, Martin J
.... Recent studies, however, have revealed the potential for other models of HIIT, which may be more feasible but are still time-efficient, to stimulate adaptations similar to more demanding low-volume...
Directory of Open Access Journals (Sweden)
Beat Meier
2013-10-01
The goal of this study was to investigate recognition memory performance across the lifespan and to determine how estimates of recollection and familiarity contribute to performance. In each of three experiments, participants from five age groups from 14 up to 85 years of age (children, young adults, middle-aged adults, young-old adults and old-old adults) were presented with high- and low-frequency words in a study phase and were tested immediately afterwards and/or after a one-day retention interval. The results showed that word frequency and retention interval affected recognition memory performance as well as estimates of recollection and familiarity. Across the lifespan, the trajectory of recognition memory followed an inverse U-shaped function that was affected neither by word frequency nor by retention interval. The trajectory of estimates of recollection also followed an inverse U-shaped function, and was especially pronounced for low-frequency words. In contrast, estimates of familiarity did not differ across the lifespan. The results indicate that age differences in recognition memory are mainly due to differences in processes related to recollection, while the contribution of familiarity-based processes seems to be age-invariant.
Essays on Estimation of Technical Efficiency and on Choice Under Uncertainty
Bhattacharyya, Aditi
2009-01-01
In the first two essays of this dissertation, I construct a dynamic stochastic production frontier incorporating the sluggish adjustment of inputs, measure the speed of adjustment of output in the short-run, and compare the technical efficiency estimates from such a dynamic model to those from a conventional static model that is based on the assumption that inputs are instantaneously adjustable in a production system. I provide estimation methods for technical efficiency of production units a...
Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling, Second Version
Kyungchul Song
2009-01-01
Nonrandom sampling schemes are often used in program evaluation settings to improve the quality of inference. This paper considers what we call treatment-based sampling, a type of standard stratified sampling where part of the strata are based on treatment status. This paper establishes semiparametric efficiency bounds for estimators of weighted average treatment effects and average treatment effects on the treated. This paper finds that adapting the efficient estimators of Hirano, Imbens, an...
Reinhard, S.; Lovell, C.A.K.; Thijssen, G.J.
2000-01-01
The objective of this paper is to estimate comprehensive environmental efficiency measures for Dutch dairy farms. The environmental efficiency scores are based on the nitrogen surplus, phosphate surplus and the total (direct and indirect) energy use of an unbalanced panel of dairy farms. We define
Directory of Open Access Journals (Sweden)
M. Sakthivel
2017-12-01
The genetic parameters of growth traits in New Zealand White rabbits kept at the Sheep Breeding and Research Station, Sandynallah, The Nilgiris, India were estimated by partitioning the variance and covariance components. The (co)variance components of body weights at weaning (W42), post-weaning (W70) and marketing (W135) age and growth efficiency traits, viz., average daily gain (ADG), relative growth rate (RGR) and Kleiber ratio (KR), estimated on a daily basis at different age intervals (42 to 70 d, 70 to 135 d and 42 to 135 d) from weaning to marketing, were estimated by restricted maximum likelihood, fitting 6 animal models with various combinations of direct and maternal effects. Data were collected over a period of 15 yr (1998 to 2012). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for W42, W70 and W135 were 0.42±0.07, 0.40±0.08 and 0.27±0.07, respectively. Heritability estimates of growth efficiency traits were moderate to high (0.18 to 0.42). Of the total phenotypic variation, the maternal genetic effect contributed 14 to 32% for early body weight traits (W42 and W70) and ADG1. The contribution of the maternal permanent environmental effect varied from 6 to 18% for W42 and for all the growth efficiency traits except KR2. The maternal permanent environmental effect on most of the growth efficiency traits was a carryover effect of maternal care during weaning. Direct-maternal genetic correlations, for the traits in which the maternal genetic effect was significant, were moderate to high in magnitude and negative in direction. The maternal effect declined as the age of the animal increased. The estimates of total heritability and maternal across-year repeatability for growth traits were moderate, and an optimum rate of genetic progress seems possible in the herd by mass selection. The genetic and phenotypic correlations among body weights
Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel
2010-01-30
In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.
Estimating survival of dental fillings on the basis of interval-censored data and multi-state models
DEFF Research Database (Denmark)
Joly, Pierre; Gerds, Thomas A; Qvist, Vibeke
2012-01-01
We aim to compare the life expectancy of a filling in a primary tooth between two types of treatments. We define the probabilities that a dental filling survives without complication until the permanent tooth erupts from beneath (exfoliation). We relate the time to exfoliation of the tooth...... with all these particularities, we propose to use a parametric four-state model with three random effects to take into account the hierarchical cluster structure. For inference, right and interval censoring as well as left truncation have to be dealt with. With the proposed approach, we can conclude...
Røraas, Thomas; Støve, Bård; Petersen, Per Hyltoft; Sandberg, Sverre
2017-05-01
Precise estimates of the within-person biological variation, CVI, can be essential both for monitoring patients and for setting analytical performance specifications. The confidence interval, CI, may be used to evaluate the reliability of an estimate, as it is a good measure of the uncertainty of the estimated CVI. The aim of the present study is to evaluate and establish methods for constructing a CI with the correct coverage and non-coverage probabilities when estimating CVI. Data based on 3 models for the distribution of the within-person effect were simulated to assess the performance of 3 methods for constructing confidence intervals: the formula-based method for the nested ANOVA, the percentile bootstrap, and the bootstrap-t method. The performance of the evaluated methods varied, depending both on the size of the CVI and on the type of distribution. The bootstrap-t CI has good and stable performance for the models evaluated, while the formula-based method is more distribution-dependent. The percentile bootstrap performs poorly. The CI is an essential part of estimating within-person biological variation. Good coverage and non-coverage probabilities for the CI are achievable by using the bootstrap-t combined with CV-ANOVA. Supplemental R code is provided online. Copyright © 2017 Elsevier B.V. All rights reserved.
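To make the resampling mechanics concrete, here is a minimal percentile-bootstrap CI for a coefficient of variation on a single synthetic sample. This is a deliberately simplified stand-in: the record evaluates this percentile variant unfavorably against the bootstrap-t combined with CV-ANOVA, and none of the data or settings below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cv(x):
    """Sample coefficient of variation."""
    return x.std(ddof=1) / x.mean()

def percentile_bootstrap_ci(x, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap: resample with replacement, compute the
    statistic on each resample, and take empirical quantiles."""
    boots = np.array([stat(rng.choice(x, size=x.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

x = rng.normal(100.0, 5.0, size=40)   # synthetic data, true CV = 0.05
lo, hi = percentile_bootstrap_ci(x, cv)
print(lo, hi)  # an interval bracketing roughly 0.05
```

The bootstrap-t variant the paper prefers additionally studentizes each bootstrap statistic by an estimate of its standard error, which is what restores correct coverage for skewed distributions.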
Rim, Chol Ho; Fu, Zhixin; Bao, Lei; Chen, Haide; Zhang, Dan; Luo, Qiong; Ri, Hak Chol; Huang, Hefeng; Luan, Zhidong; Zhang, Yan; Cui, Chun; Xiao, Lei; Jong, Ui Myong
2013-12-01
To improve the efficiency of producing cloned pigs, we investigated the influence of the number of transferred embryos, the culturing interval between nuclear transfer (NT) and embryo transfer, and the transfer pattern (single oviduct or double oviduct) on cloning efficiency. The results demonstrated that transfer of either 150-200 or more than 200 NT embryos, compared to transfer of 100-150 embryos, resulted in a significantly higher pregnancy rate (48 ± 16 and 50 ± 16 vs. 29 ± 5%, p < 0.05). Higher cloning efficiency is achieved by adjusting the number and in vitro culture time of reconstructed embryos as well as the embryo transfer pattern. Copyright © 2013 Elsevier B.V. All rights reserved.
Energy-efficient power allocation of two-hop cooperative systems with imperfect channel estimation
Amin, Osama
2015-06-08
Recently, much attention has been paid to the green design of wireless communication systems using energy efficiency (EE) metrics that should capture all energy consumption sources needed to deliver the required data. In this paper, we formulate an accurate EE metric for cooperative two-hop systems that use the amplify-and-forward relaying scheme. Different from existing research that assumes the availability of perfect channel state information (CSI) at the cooperating nodes, we assume a practical scenario where training pilots are used to estimate the channels. The estimated CSI can be used to adapt the available resources of the proposed system in order to maximize the EE. Two estimation strategies are assumed, namely disintegrated channel estimation, which assumes the availability of a channel estimator at the relay, and cascaded channel estimation, where the relay is not equipped with a channel estimator and only forwards the received pilot(s) so that the destination can estimate the cooperative link. The channel estimation cost is reflected in the EE metric by including the estimation error in the signal-to-noise term and considering the energy consumed during the estimation phase. Based on the formulated EE metric, we propose an energy-aware power allocation algorithm to maximize the EE of the cooperative system with channel estimation. Furthermore, we study the impact of the estimation parameters on the optimized EE performance via simulation examples.
RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION
Directory of Open Access Journals (Sweden)
Archana V
2011-04-01
The coefficient of variation (CV) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population CV in infinite population models, those results are not directly applicable to finite populations. In this paper we propose six new estimators of the population CV in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real life dataset. The ratio estimator using information on the population CV of the auxiliary variable emerges as the best estimator.
A novel estimating method for steering efficiency of the driver with electromyography signals
Liu, Yahui; Ji, Xuewu; Hayama, Ryouhei; Mizuno, Takahiro
2014-05-01
Existing research on steering efficiency mainly focuses on the mechanical efficiency of the steering system, aiming at designing and optimizing the steering mechanism. In the development of assist steering systems, and especially in the evaluation of their comfort, the steering efficiency of the driver's physiological output is usually not considered, because this physiological output is difficult to measure or estimate; the objective evaluation of steering comfort therefore cannot be conducted from a movement-efficiency perspective. In order to take a further step toward the objective evaluation of steering comfort, an estimating method for the steering efficiency of the driver was developed based on research into the relationship between steering force and muscle activity. First, the steering forces in the steering wheel plane and the electromyography (EMG) signals of the primary muscles were measured. These primary muscles are the muscles in the shoulder and upper arm which mainly produce the steering torque, and their functions in the steering maneuver were identified previously. Next, based on multiple regressions of the steering force and EMG signals, both the effective steering force and the total force capacity of the driver in the steering maneuver were calculated. Finally, the steering efficiency of the driver was estimated by means of the estimated effective force and the total force capacity, which represent the driver's physiological output of the primary muscles. This research develops a novel method for estimating the driver's steering efficiency from physiological output, including the estimation of both the steering force and the force capacity of the primary muscles from EMG signals, and will help evaluate steering comfort from an objective perspective.
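The multiple-regression step that maps EMG signals to steering-force contributions can be sketched as follows. The data, number of muscles, and weights are all synthetic assumptions; the paper's definitions of effective force and force capacity require the measured quantities and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic example: steering force as a linear mix of 3 muscles' EMG
n = 200
emg = rng.random((n, 3))                # normalized EMG envelopes
true_w = np.array([30.0, 20.0, 10.0])   # assumed N per unit EMG
force = emg @ true_w + rng.normal(0.0, 1.0, n)   # measured force + noise

# Least-squares regression of steering force on the EMG channels
w_hat, *_ = np.linalg.lstsq(emg, force, rcond=None)
print(w_hat)  # close to [30, 20, 10]
```

Once per-muscle force contributions are available, an efficiency measure can be formed as the ratio of the force effective in the steering plane to the total force the muscles produced, which is the quantity the record estimates.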
Directory of Open Access Journals (Sweden)
Xiaoping Li
2013-01-01
Full Text Available We present an efficient algorithm based on the robust Chinese remainder theorem (CRT) to determine a single frequency from multiple undersampled waveforms. The optimal estimate of the common remainder in robust CRT, which plays an important role in the final frequency estimation, is first discussed. To avoid exhaustive searching in the optimal estimation, we then provide an improved algorithm with the same performance but less computation. In addition, a sufficient and necessary condition for robust estimation is given. Numerical examples verify the effectiveness of the proposed algorithm and the related conclusions.
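The reconstruction step behind this idea can be sketched in a few lines. This is a minimal, noise-free illustration using the ordinary (non-robust) CRT with hypothetical pairwise-coprime moduli; the paper's contribution is precisely the robust variant that tolerates remainder errors, which this sketch does not implement.

```python
def crt_pair(r1, m1, r2, m2):
    """Combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2) for coprime moduli m1, m2."""
    inv = pow(m1, -1, m2)  # modular inverse of m1 mod m2 (Python 3.8+)
    x = (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)
    return x, m1 * m2

def crt_frequency(remainders, moduli):
    """Reconstruct a frequency from its residues modulo pairwise-coprime moduli."""
    x, m = remainders[0], moduli[0]
    for r, mod in zip(remainders[1:], moduli[1:]):
        x, m = crt_pair(x, m, r, mod)
    return x

# A tone at 157 Hz observed through hypothetical coprime moduli 9, 10, 11:
moduli = [9, 10, 11]
true_f = 157
remainders = [true_f % m for m in moduli]   # [4, 7, 3]
print(crt_frequency(remainders, moduli))    # -> 157
```

The reconstruction is exact as long as the true frequency is below the product of the moduli (here 990) and the remainders are error-free; robust CRT relaxes the latter requirement.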
Pujol-Luz, José R; Francez, Pablo Abdon da Costa; Ururahy-Rodrigues, Alexandre; Constantino, Reginaldo
2008-03-01
The black soldier-fly (Hermetia illucens) is a generalist detritivore which is commonly present in corpses in later stages of decomposition and may be useful in forensic entomology. This paper describes the estimation of the postmortem interval (PMI) based on the life cycle of the black soldier-fly in a case in northern Brazil. A male child was abducted from his home and 42 days later his corpse was found in an advanced stage of decay. Two black soldier-fly larvae were found associated with the body. The larvae emerged as adults after 25-26 days. Considering the development cycle of H. illucens, the date of oviposition was estimated as 24-25 days after abduction. Since H. illucens usually (but not always) colonizes corpses in more advanced stages of decay, this estimate is consistent with the hypothesis that the child was killed immediately after abduction.
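The date-of-oviposition arithmetic described above can be sketched as follows. The total egg-to-adult development time used here (43 days) is our assumption for illustration, not a value taken from the paper; actual H. illucens development time depends on temperature and rearing conditions.

```python
# All days are counted from the abduction (day 0).
DEV_TOTAL = 43             # assumed total egg-to-adult time (days) -- our assumption
day_found = 42             # corpse and larvae discovered 42 days after abduction
days_to_emerge = (25, 26)  # observed larva-to-adult time in the lab

# Adults emerged on days 67-68; subtracting the full development cycle
# places oviposition at days 24-25 after the abduction.
oviposition_days = [day_found + d - DEV_TOTAL for d in days_to_emerge]
print(oviposition_days)  # -> [24, 25]
```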
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and on generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on it is very limited because of the challenges of dealing with dependent data in nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical-process tools that we develop for longitudinal/clustered data. Numerical results illustrate the finite-sample performance of the proposed estimators. © 2014 ISI/BS.
Yan, Ying; Zhou, Haibo; Cai, Jianwen
2017-09-01
The case-cohort study design is an effective way to reduce the cost of assembling and measuring expensive covariates in large cohort studies. Recently, several weighted estimators were proposed for the case-cohort design when multiple diseases are of interest. However, these existing weighted estimators do not make effective use of the covariate information available in the whole cohort. Furthermore, auxiliary information on the expensive covariates, which may be available in such studies, cannot be incorporated directly. In this article, we propose a class of updated estimators. We show that, by making effective use of the whole-cohort information, the proposed updated estimators are guaranteed to be asymptotically more efficient than the existing weighted estimators. Furthermore, they can flexibly incorporate auxiliary information whenever it is available. The advantages of the proposed updated estimators are demonstrated in simulation studies and a real data analysis. © 2017, The International Biometric Society.
2015-01-01
The recent availability of high frequency data has permitted more efficient ways of computing volatility. However, estimation of volatility from asset price observations is challenging because observed high frequency data are generally affected by microstructure noise. We address this issue by using the Fourier estimator of instantaneous volatility introduced by Malliavin and Mancino (2002). We prove a central limit theorem for this estimator with optimal rate and asymptotic variance. An extensive simulation study shows the accuracy of the spot volatility estimates obtained using the Fourier estimator and its robustness even in the presence of different microstructure noise specifications. An empirical analysis of high frequency data (U.S. S&P 500 and FIB 30 indices) illustrates how the Fourier spot volatility estimates can be successfully used to study intraday variations of volatility and to predict intraday Value at Risk. PMID:26421617
Directory of Open Access Journals (Sweden)
Rik Crutzen
2017-07-01
Full Text Available When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.
Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith
2017-01-01
Estimation of energy efficiency of the process of osmotic dehydration of pork meat
Filipović, Vladimir; Ćurčić, Biljana; Nićetin, Milica; Knežević, Violeta; Lević, Ljubinko; Pezo, Lato
2014-01-01
Osmotic dehydration is a low-energy process, since water is removed from the raw material without a phase change. The goal of this research is to estimate the energy efficiency of the osmotic dehydration of pork meat at three different process temperatures, in three different osmotic solutions, and in co- and counter-current processes. In order to calculate the energy efficiency of osmotic dehydration, convective drying was used as the base process for comparison. Levels of the sa...
Estimating the carbon sequestration efficiency of ocean fertilization in ocean models
DeVries, T. J.; Primeau, F. W.; Deutsch, C. A.
2012-12-01
Fertilization of marine biota by direct addition of limiting nutrients, such as iron, has been widely discussed as a possible means of enhancing the oceanic uptake of anthropogenic CO2. Several startup companies have even proposed to offer carbon credits in exchange for fertilizing patches of ocean. However, spatial variability in ocean circulation and air-sea gas exchange causes large regional differences in the efficiency with which carbon can be sequestered in the ocean in response to ocean fertilization. Because of the long timescales associated with carbon sequestration in the ocean, this efficiency cannot be derived from field studies but must be estimated using ocean models. However, due to the computational burden of simulating the oceanic uptake of CO2 in response to ocean fertilization, modeling studies have focused on estimating the carbon sequestration efficiency at only a handful of locations throughout the ocean. Here we present a new method for estimating the carbon sequestration efficiency of ocean fertilization in ocean models. By appropriately linearizing the CO2 system chemistry, we can use the adjoint ocean transport model to efficiently probe the spatial structure of the sequestration efficiency. We apply the method to a global data-constrained ocean circulation model to estimate global patterns of sequestration efficiency at a horizontal resolution of 2 degrees. This calculation produces maps showing where carbon sequestration by ocean fertilization will be most effective. We also show how to rapidly compute the sensitivity of the carbon sequestration efficiency to the spatial pattern of the production and remineralization anomalies produced by ocean fertilization, and we explore these sensitivities in the data-constrained ocean circulation model.
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world's major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.
Fallahi, Ali Asghar; Shekarfroush, Shahnaz; Rahimi, Mostafa; Jalali, Amirhossain; Khoshbaten, Ali
2016-01-01
Objective(s): High-intensity interval training (HIIT) increases energy expenditure and mechanical energy efficiency. Although both uncoupling proteins (UCPs) and endothelial nitric oxide synthase (eNOS) affect mechanical efficiency and antioxidant capacity, their effects are opposite. The aim of this study was to determine whether the alterations in cardiac UCP2, UCP3, and eNOS mRNA expression following HIIT favor increased mechanical efficiency or decreased oxidative stress. Materials and Methods: Wistar rats were divided into five groups: a control group (n=12), HIIT for an acute bout (AT1), short-term HIIT for 3 and 5 sessions (ST3 and ST5), and long-term training for 8 weeks (LT) (6 in each group). The rats in the training groups ran on a treadmill for 60 min in three stages: 6 min of running for warm-up; 7 intervals of 7 min of running on a treadmill with a slope of 5° to 20° (4 min at an intensity of 80-110% VO2max and 3 min at 50-60% VO2max); and 5 min of running for cool-down. The control group did not participate in any exercise program. Rats were sacrificed and the hearts were extracted to analyze the levels of UCP2, UCP3 and eNOS mRNA by RT-PCR. Results: UCP3 expression increased significantly following an acute training bout. Repeated HIIT for 8 weeks resulted in a significant decrease in UCP mRNA and a significant increase in eNOS expression in cardiac muscle. Conclusion: This study indicates that long-term HIIT, by decreasing UCP mRNA and increasing eNOS mRNA expression, may enhance energy efficiency and physical performance. PMID:27114795
AN ESTIMATION OF TECHNICAL EFFICIENCY OF GARLIC PRODUCTION IN KHYBER PAKHTUNKHWA PAKISTAN
Directory of Open Access Journals (Sweden)
Nabeel Hussain
2014-04-01
Full Text Available This study was conducted to estimate the technical efficiency of farmers in garlic production in Khyber Pakhtunkhwa province, Pakistan. Data were randomly collected from 110 farmers using a multistage sampling technique. Maximum likelihood estimation was used to fit a Cobb-Douglas frontier production function. The analysis revealed an estimated mean technical efficiency of 77 percent, indicating that total output can be further increased with efficient use of resources and technology. The estimated gamma value was 0.93, which indicates that 93% of the variation in garlic output is due to inefficiency factors. The analysis further revealed that seed rate, tractor hours, fertilizer, FYM and weedicides were positive and statistically significant production factors. The results also show that age and education were statistically significant inefficiency factors, with age having a positive and education a negative relationship with garlic output. This study suggests that, to increase garlic production by building on farmers' high efficiency level, the government should invest in research and development to introduce good-quality seed and should organize training programs to educate farmers about garlic production.
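The abstract above fits a stochastic frontier by maximum likelihood; as a simpler illustration of the same technical-efficiency idea, here is a corrected-OLS (COLS) deterministic frontier for a one-input Cobb-Douglas model. The farm figures are invented for the example, and COLS is a substitute for, not a reproduction of, the paper's stochastic frontier method.

```python
import math

def cols_efficiency(x, y):
    """Corrected OLS (COLS) technical efficiency for a one-input Cobb-Douglas frontier.

    Fits log y = a + b log x by OLS, shifts the intercept up so the frontier
    envelopes every observation, and returns TE_i = exp(e_i - max e) in (0, 1]."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = my - b * mx
    resid = [v - (a + b * u) for u, v in zip(lx, ly)]   # OLS residuals
    top = max(resid)                                    # intercept correction
    return [math.exp(e - top) for e in resid]

# Hypothetical farm data: seed input (kg) and garlic output (kg)
seed = [20, 35, 50, 80, 110]
out = [900, 1400, 2300, 3100, 4600]
te = cols_efficiency(seed, out)
print([round(t, 2) for t in te])
```

By construction the best-practice farm gets an efficiency score of exactly 1, and every other score is the ratio of observed output to the frontier output at the same input level.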
Massof, Robert W
2011-02-01
Modern psychometric theory is now routinely used in clinical vision research, as well as other areas of health research, to measure latent health states on continuous interval scales from responses to self-report rating scale questionnaires. Two competing theories are commonly employed: Rasch theory and item response theory. Because the field is currently in transition from using traditional scoring algorithms based on classical test theory to using the more modern approaches, this article offers a tutorial review of Rasch theory and item response theory and of the analytical methods employed by the two theories to estimate and validate measures.
Vincent, S; Vian, J M; Carlotti, M P
2000-07-01
The identification of insects found on a dead body can lead to an estimation of the time of death (postmortem interval). We report an updated version of an established method based on sequence analysis of PCR products from a region of the cytochrome b oxidase subunit I mitochondrial gene of different members of the family Calliphoridae, sequencing six European species: Lucilia sericata (Meigen), Lucilia caesar (Linné), Lucilia illustris (Meigen), Calliphora vicina (Robineau-Desvoidy), Calliphora vomitoria (Linné), Protophormia terraenovae (Robineau-Desvoidy), and one Guianese species: Cochliomyia macellaria (Fabricius). This technique provided clear results when applied to larvae, and we also report the identification of empty puparia.
Efficient Estimation of first Passage Probability of high-Dimensional Nonlinear Systems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first passage probabilities of high-dimensional nonlinear systems based on asymptotic estimation of low probabilities is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements [...] the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
Directory of Open Access Journals (Sweden)
Soojeong Lee
2014-01-01
Full Text Available Confidence intervals (CIs) are generally not provided along with estimated systolic blood pressure (SBP) and diastolic blood pressure (DBP) measured using oscillometric blood pressure devices. No criteria exist to determine the CI from a small sample set of oscillometric blood pressure measurements. We provide an extended methodology to improve estimation of CIs of SBP and DBP based on a nonparametric bootstrap-after-jackknife function and a Bayesian approach. We use the nonparametric bootstrap-after-jackknife function to reduce maximum amplitude outliers. Improved pseudomaximum amplitudes (PMAs) and pseudoenvelopes (PEs) are derived from the pseudomeasurements. Moreover, the proposed algorithm uses an unfixed ratio obtained by employing non-Gaussian models based on the Bayesian technique to estimate the SBP and DBP ratios for individual subjects. The CIs obtained through our proposed approach are narrower than those obtained using the traditional Student t-distribution method. The mean difference (MD) and standard deviation (SD) of the SBP and DBP estimates using our proposed approach are better than the estimates obtained by conventional fixed ratios based on the PMA and PE (PMAE).
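As a much simplified illustration of resampling-based CIs for a small set of oscillometric readings, here is a plain percentile bootstrap on the sample mean. This is not the paper's bootstrap-after-jackknife/Bayesian procedure, and the readings below are synthetic.

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Nonparametric percentile-bootstrap CI for a statistic of a small sample."""
    rng = random.Random(seed)
    # resample with replacement, same size as the original sample
    reps = sorted(stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Five hypothetical oscillometric SBP readings (mmHg) from one subject
sbp = [118, 124, 121, 130, 116]
lo, hi = bootstrap_ci(sbp)
print(round(lo, 1), round(hi, 1))
```

With only five readings the percentile interval is crude; the paper's jackknife preprocessing and Bayesian ratio modeling are aimed precisely at tightening CIs in this small-sample regime.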
Sparse and Efficient Estimation for Partial Spline Models with Increasing Dimension
Zhang, Hao Helen; Shang, Zuofeng
2014-01-01
We consider model selection and estimation for partial spline models and propose a new regularization method in the context of smoothing splines. The regularization method has a simple yet elegant form, consisting of a roughness penalty on the nonparametric component and a shrinkage penalty on the parametric components, which can achieve function smoothing and sparse estimation simultaneously. We establish the convergence rate and oracle properties of the estimator under weak regularity conditions. Remarkably, the estimated parametric components are sparse and efficient, and the nonparametric component can be estimated at the optimal rate. The procedure also has attractive computational properties. Using the representer theorem for smoothing splines, we reformulate the objective function as a LASSO-type problem, enabling us to use the LARS algorithm to compute the solution path. We then extend the procedure to situations in which the number of predictors increases with the sample size and investigate its asymptotic properties in that context. Finite-sample performance is illustrated by simulations. PMID:25620808
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators
2012-09-27
Unlike the existing literature, we adopt the perspective that it is not enough for an estimator to be asymptotically efficient in the number of copies for fixed d.
Shrinkage Estimators for Robust and Efficient Inference in Haplotype-Based Case-Control Studies
Chen, Yi-Hau
2009-03-01
Case-control association studies often aim to investigate the role of genes and gene-environment interactions in terms of the underlying haplotypes (i.e., the combinations of alleles at multiple genetic loci along chromosomal regions). The goal of this article is to develop robust but efficient approaches to the estimation of disease odds-ratio parameters associated with haplotypes and haplotype-environment interactions. We consider "shrinkage" estimation techniques that can adaptively relax the model assumptions of Hardy-Weinberg-Equilibrium and gene-environment independence required by recently proposed efficient "retrospective" methods. Our proposal involves first development of a novel retrospective approach to the analysis of case-control data, one that is robust to the nature of the gene-environment distribution in the underlying population. Next, it involves shrinkage of the robust retrospective estimator toward a more precise, but model-dependent, retrospective estimator using novel empirical Bayes and penalized regression techniques. Methods for variance estimation are proposed based on asymptotic theories. Simulations and two data examples illustrate both the robustness and efficiency of the proposed methods.
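The shrinkage idea above, trading robustness against precision, can be sketched generically. The precision-based weight below is a standard empirical-Bayes-style form, not the paper's exact estimator, and the numbers are hypothetical log-odds ratios.

```python
def shrink(theta_robust, var_robust, theta_model, var_model_bias):
    """Shrink a robust log-odds-ratio estimate toward a model-based one.

    The weight follows the usual empirical-Bayes form: trust the model-based
    estimate more when the robust estimate is noisy relative to the apparent
    model misspecification (the squared difference between the two estimates
    serves as a bias-squared proxy)."""
    bias2 = (theta_robust - theta_model) ** 2
    w = var_robust / (var_robust + bias2 + var_model_bias)
    return w * theta_model + (1 - w) * theta_robust

# Hypothetical haplotype log-odds ratios: robust estimate 0.42 (variance 0.04),
# model-based estimate 0.30 (extra bias-variance allowance 0.005)
print(shrink(0.42, 0.04, 0.30, 0.005))
```

The shrunk estimate always lies between the two inputs: when the robust estimator is precise (small variance) it dominates, and when it is noisy the estimate moves toward the model-based value, mirroring the adaptive behavior the abstract describes.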
Firoozabadi, Reza; Helfenbein, Eric D; Babaeizadeh, Saeed
2017-08-18
The feasibility of using photoplethysmography (PPG) for estimating heart rate variability (HRV) has been the subject of many recent studies, with contradictory results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG due to its inherent characteristics. We developed a PPG-only algorithm by computing a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. We tested our algorithm on a large database from high-risk ICU patients containing arrhythmias and significant amounts of artifact. The average differences between the PPG-based and ECG-based parameters were evaluated for SDSD and RMSSD. Our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings. Copyright © 2017 Elsevier Inc. All rights reserved.
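A minimal sketch of the interval-based computation: exclude interbeat intervals that deviate too far from the median, then compute SDSD and RMSSD from the successive differences. The 30% exclusion rule and the synthetic intervals are our assumptions, not the paper's actual criteria.

```python
import math
import statistics

def prv_parameters(ibis_ms, tol=0.3):
    """SDSD and RMSSD from interbeat intervals (ms), after excluding intervals
    that deviate from the median by more than `tol` (30% by default)."""
    med = statistics.median(ibis_ms)
    kept = [x for x in ibis_ms if abs(x - med) <= tol * med]
    diffs = [b - a for a, b in zip(kept, kept[1:])]      # successive differences
    sdsd = statistics.stdev(diffs)                       # SD of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # root mean square
    return sdsd, rmssd

# Synthetic PPG interbeat intervals (ms) with one artifactual beat (1600 ms)
ibis = [812, 805, 820, 1600, 799, 808, 815]
sdsd, rmssd = prv_parameters(ibis)
print(round(sdsd, 1), round(rmssd, 1))
```

Without the median filter, the single 1600 ms artifact would inflate both parameters by an order of magnitude, which is why artifact rejection matters so much for PPG-derived PRV.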
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
KDE-Track: An Efficient Dynamic Density Estimator for Data Streams
Qahtan, Abdulhakim Ali Ali
2016-11-08
Recent developments in sensors, global positioning system devices and smart phones have increased the availability of spatiotemporal data streams. Developing models for mining such streams is challenged by the huge amount of data that cannot be stored in the memory, the high arrival speed and the dynamic changes in the data distribution. Density estimation is an important technique in stream mining for a wide variety of applications. The construction of kernel density estimators is well studied and documented. However, existing techniques are either expensive or inaccurate and unable to capture the changes in the data distribution. In this paper, we present a method called KDE-Track to estimate the density of spatiotemporal data streams. KDE-Track can efficiently estimate the density function with linear time complexity using interpolation on a kernel model, which is incrementally updated upon the arrival of new samples from the stream. We also propose an accurate and efficient method for selecting the bandwidth value for the kernel density estimator, which increases its accuracy significantly. Both theoretical analysis and experimental validation show that KDE-Track outperforms a set of baseline methods on the estimation accuracy and computing time of complex density structures in data streams.
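The core idea of maintaining the kernel model on a grid and answering queries by interpolation can be sketched as follows. This is a simplified fixed-grid, fixed-bandwidth version with invented parameters, whereas KDE-Track itself adapts its resolution and selects the bandwidth automatically.

```python
import math

class GridKDE:
    """Gaussian KDE maintained on a fixed grid; queries cost O(grid) via linear
    interpolation instead of O(n) over all stored samples."""

    def __init__(self, lo, hi, n_points=101, bandwidth=0.5):
        self.xs = [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
        self.ys = [0.0] * n_points
        self.h = bandwidth
        self.n = 0

    def update(self, sample):
        # Incremental update: fold one new streaming sample into the grid values
        self.n += 1
        norm = self.h * math.sqrt(2 * math.pi)
        for i, x in enumerate(self.xs):
            k = math.exp(-0.5 * ((x - sample) / self.h) ** 2) / norm
            self.ys[i] += (k - self.ys[i]) / self.n   # running mean of kernels

    def density(self, x):
        # Linear interpolation between the two nearest grid points
        step = self.xs[1] - self.xs[0]
        i = min(int((x - self.xs[0]) / step), len(self.xs) - 2)
        t = (x - self.xs[i]) / step
        return (1 - t) * self.ys[i] + t * self.ys[i + 1]

kde = GridKDE(-5, 5)
for s in [-1.2, -0.8, 0.1, 0.9, 1.3, 0.4]:
    kde.update(s)
print(round(kde.density(0.5), 3))
```

Because each arriving sample touches only the grid, the per-sample update cost is constant in the stream length, which is the property that makes this family of estimators viable for unbounded streams.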
Directory of Open Access Journals (Sweden)
Roberto Gismondi
2014-01-01
Full Text Available Within a sampling survey framework and a model-based approach, attention is focused on the main features of the optimal prediction strategy for a population mean, which requires knowledge of some model parameters and functions that are normally unknown. In particular, a wrong specification of the model individual variances may lead to a serious loss of efficiency of estimates. For this reason, we propose some techniques for estimating model variances which, instead of being set equal to given a priori functions, can be estimated from historical data concerning past survey occasions. A time series of past observations is almost always available, especially in a longitudinal survey context. The usefulness of the proposed technique was tested empirically on the quarterly wholesale trade survey carried out by ISTAT (the Italian National Statistical Institute) in the period 2005-2010. In this framework, the problem consists in minimising the magnitude of revisions, given by the differences between preliminary estimates (based on the sub-sample of quick respondents) and final estimates (which take late respondents into account as well). The main results show that estimating model variances from historical data leads to efficiency gains that cannot be neglected. This outcome was confirmed by a further exercise based on 1000 random replications of late responses.
Directory of Open Access Journals (Sweden)
Bilqis Bolanle Amole,
2016-01-01
Full Text Available Health care services in Nigerian teaching hospitals have been considered less than desirable. In the same vein, studies on the proper application of models to explicate the factors that influence the efficiency of health care delivery are limited. This study therefore deployed Data Envelopment Analysis to estimate health care efficiency in six public teaching hospitals located in southwest Nigeria. To do this, the study gathered secondary data from the annual statistical returns of six public teaching hospitals in southwest Nigeria, spanning five years (2010-2014). The data collected were analysed using descriptive and inferential statistical tools. The inferential tools included Data Envelopment Analysis (DEA), with the aid of DEAP software version 2.1, and a Tobit model, with the aid of STATA version 12.0. The results revealed that the teaching hospitals in southwest Nigeria were not fully efficient. The average scale inefficiency was estimated at approximately 18%. Results from the Tobit estimates showed that an insufficient number of professional health workers, especially doctors, pharmacists, laboratory technicians and engineers, and insufficient bed space for patient use were responsible for the observed inefficiency in health care delivery in southwest Nigeria. This study has implications for decisions on effective monitoring of the entire health system towards enhancing quality health care service delivery, which would enhance health system efficiency.
LocExpress: a web server for efficiently estimating expression of novel transcripts.
Hou, Mei; Tian, Feng; Jiang, Shuai; Kong, Lei; Yang, Dechang; Gao, Ge
2016-12-22
The temporal and spatial-specific expression pattern of a transcript across multiple tissues and cell types can provide key clues about its function. While several gene atlases are available online as pre-computed databases for known gene models, it is still challenging to obtain expression profiles for previously uncharacterized (i.e. novel) transcripts efficiently. Here we developed LocExpress, a web server for efficiently estimating the expression of novel transcripts across multiple tissues and cell types in human (20 normal tissues/cell types and 14 cell lines) as well as in mouse (24 normal tissues/cell types and nine cell lines). As a wrapper around an RNA-Seq quantification algorithm, LocExpress reduces the time cost by making abundance estimation calls incrementally within the minimum spanning bundle region of the input transcripts. For a given novel gene model, this local context-oriented strategy allows LocExpress to estimate its FPKMs in hundreds of samples within minutes on a standard Linux box, making an online web server possible. To the best of our knowledge, LocExpress is the only web server to provide nearly real-time expression estimation for novel transcripts in common tissues and cell types. The server is publicly available at http://loc-express.cbi.pku.edu.cn.
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Efficient estimation of dynamic density functions with an application to outlier detection
Qahtan, Abdulhakim Ali Ali
2012-01-01
In this paper, we propose a new method to estimate the dynamic density over data streams, named KDE-Track, as it is based on the conventional and widely used Kernel Density Estimation (KDE) method. KDE-Track can efficiently estimate the density with linear complexity by using interpolation on a kernel model, which is incrementally updated upon the arrival of streaming data. Both theoretical analysis and experimental validation show that KDE-Track outperforms traditional KDE and a baseline method, Cluster-Kernels, on estimation accuracy for complex density structures in data streams, computing time and memory usage. KDE-Track is also shown to capture the dynamic density of synthetic and real-world data in a timely manner. In addition, KDE-Track is used to accurately detect outliers in sensor data and is compared with two existing methods developed for detecting outliers and cleaning sensor data. © 2012 ACM.
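The core idea, maintaining kernel-density values at a fixed set of resampling points, updating them incrementally as each stream item arrives, and answering queries by interpolation, can be sketched as follows (a minimal illustration with invented names, not the authors' KDE-Track implementation):

```python
import math

class KDETrackSketch:
    """Sketch of a KDE-Track-style estimator: the density is maintained on a
    fixed grid of resampling points, updated incrementally per stream item,
    and queried by linear interpolation between grid points."""

    def __init__(self, grid, bandwidth):
        self.grid = grid                  # sorted resampling points
        self.h = bandwidth
        self.density = [0.0] * len(grid)  # density estimate at each grid point
        self.n = 0                        # number of stream items seen so far

    def update(self, x):
        """Fold one new stream item into the grid densities as a running mean.
        Cost is O(grid size) per item, independent of the stream length n."""
        self.n += 1
        for i, g in enumerate(self.grid):
            u = (g - x) / self.h
            k = math.exp(-0.5 * u * u) / (self.h * math.sqrt(2 * math.pi))
            self.density[i] += (k - self.density[i]) / self.n

    def estimate(self, x):
        """Linear interpolation between the two nearest grid points."""
        g = self.grid
        if x <= g[0]:
            return self.density[0]
        if x >= g[-1]:
            return self.density[-1]
        for i in range(1, len(g)):
            if x <= g[i]:
                t = (x - g[i - 1]) / (g[i] - g[i - 1])
                return (1 - t) * self.density[i - 1] + t * self.density[i]
```

Because each update touches only the grid, the per-item cost stays constant as the stream grows, which is the source of the linear overall complexity.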
Program Potential: Estimates of Federal Energy Cost Savings from Energy Efficient Procurement
Energy Technology Data Exchange (ETDEWEB)
Taylor, Margaret [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-09-17
In 2011, energy used by federal buildings cost approximately $7 billion. Reducing federal energy use could help address several important national policy goals, including: (1) increased energy security; (2) lowered emissions of greenhouse gases and other air pollutants; (3) increased return on taxpayer dollars; and (4) increased private sector innovation in energy efficient technologies. This report estimates the impact of efficient product procurement on reducing the amount of wasted energy (and, therefore, wasted money) associated with federal buildings, as well as on reducing the needless greenhouse gas emissions associated with these buildings.
Efficient optimal joint channel estimation and data detection for massive MIMO systems
Alshamary, Haider Ali Jasim
2016-08-15
In this paper, we propose an efficient optimal joint channel estimation and data detection algorithm for massive MIMO wireless systems. Our algorithm is optimal in terms of the generalized likelihood ratio test (GLRT). For massive MIMO systems, we show that the expected complexity of our algorithm grows polynomially in the channel coherence time. Simulation results demonstrate significant performance gains of our algorithm compared with suboptimal non-coherent detection algorithms. To the best of our knowledge, this is the first algorithm to efficiently achieve GLRT-optimal non-coherent detection for massive MIMO systems with general constellations.
THE DESIGN OF AN INFORMATIC MODEL TO ESTIMATE THE EFFICIENCY OF AGRICULTURAL VEGETAL PRODUCTION
Directory of Open Access Journals (Sweden)
Cristina Mihaela VLAD
2013-12-01
Full Text Available At present there is concern over the inability of small and medium farm managers to accurately estimate and evaluate the efficiency of production systems in Romanian agriculture. This general concern has become even more pressing as market prices associated with agricultural activities continue to increase. As a result, considerable research attention is now oriented towards the development of economic models integrated into software interfaces that can improve technical and financial management. Therefore, the objective of this paper is to present an estimation and evaluation model designed to increase the farmer’s ability to measure the costs of production activities by using informatic systems.
Gutierrez, Mauricio; Brown, Kenneth
2015-03-01
Classical simulations of noisy stabilizer circuits are often used to estimate the threshold of a quantum error-correcting code (QECC). It is common to model the noise as a depolarizing Pauli channel. However, it is not clear how sensitive a code's threshold is to the noise model, and whether or not a depolarizing channel is a good approximation for realistic errors. We have shown that, at the physical single-qubit level, efficient and more accurate approximations can be obtained. We now examine the feasibility of employing these approximations to obtain better estimates of a QECC's threshold. We calculate the level-1 pseudo-threshold for the Steane [[7,1,3]] code.
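As a toy illustration of estimating a logical error rate by sampling a noisy code, the sketch below uses a 3-qubit bit-flip repetition code with majority-vote decoding and plain i.i.d. bit-flip noise, not the Steane code or a full stabilizer simulation; the crossing point where the logical rate equals the physical rate is what a pseudo-threshold scan looks for:

```python
import random

def logical_error_rate(p, trials=20000, seed=0):
    """Monte Carlo estimate of the logical error rate of the 3-qubit
    bit-flip repetition code under i.i.d. bit-flip noise of probability p,
    decoded by majority vote: a logical error occurs when 2 or 3 qubits flip.
    The analytic value is 3*p**2*(1-p) + p**3, so the estimate can be checked."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:          # majority vote decodes to the wrong codeword
            fails += 1
    return fails / trials
```

Below the (pseudo-)threshold the encoded error rate sits well under the physical rate, which is the comparison such simulations are built to expose.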
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used ... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally ...
Double-Layer Compressive Sensing Based Efficient DOA Estimation in WSAN with Block Data Loss.
Sun, Peng; Wu, Liantao; Yu, Kai; Shao, Huajie; Wang, Zhi
2017-07-22
Accurate information acquisition is of vital importance for wireless sensor array network (WSAN) direction of arrival (DOA) estimation. However, due to the lossy nature of low-power wireless links, data loss, especially block data loss induced by adopting a large packet size, has a catastrophic effect on DOA estimation performance in WSAN. In this paper, we propose a double-layer compressive sensing (CS) framework to eliminate the hazards of block data loss and to achieve accurate and efficient DOA estimation. In addition to modeling the random packet loss during transmission as a passive CS process, an active CS procedure is introduced at each array sensor to further enhance the robustness of transmission. Furthermore, to avoid the error propagation from signal recovery to DOA estimation in conventional methods, we propose a direct DOA estimation technique under the double-layer CS framework. Leveraging a joint frequency and spatial domain sparse representation of the sensor array data, the fusion center (FC) can directly obtain the DOA estimation results from the received data packets, skipping the signal recovery phase. Extensive simulations demonstrate that the double-layer CS framework eliminates the adverse effects induced by block data loss and yields superior DOA estimation performance in WSAN.
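For context, the explicit CS recovery step that the direct method avoids can be sketched with a generic orthogonal matching pursuit (OMP) routine; this is a textbook baseline, not the paper's double-layer framework:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit for y = Phi @ x with k-sparse x:
    greedily pick the column most correlated with the residual, then
    re-fit all selected columns by least squares and update the residual."""
    support = []
    residual = y.astype(float).copy()
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit over the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

In the conventional pipeline the FC would run a recovery like this before DOA estimation, so any recovery error propagates into the angle estimates, which is exactly the coupling the direct technique removes.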
Directory of Open Access Journals (Sweden)
Courtney R. Weatherbee
2017-04-01
Full Text Available Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates.
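The ADH bookkeeping itself is simple arithmetic: sum the degrees above a developmental base temperature over hourly readings. A minimal sketch (the base temperature here is an illustrative placeholder, not a species-specific value):

```python
def accumulated_degree_hours(temps, base=10.0):
    """Accumulated degree hours (ADH) over hourly temperature readings:
    the sum of degrees above a developmental base temperature.
    Hours at or below the base contribute nothing."""
    return sum(max(t - base, 0.0) for t in temps)

def hours_to_reach(target_adh, temps, base=10.0):
    """First hour index (1-based) at which the running ADH total reaches
    target_adh, or None if the record never accumulates enough heat."""
    total = 0.0
    for hour, t in enumerate(temps, start=1):
        total += max(t - base, 0.0)
        if total >= target_adh:
            return hour
    return None
```

Running the same target ADH against logger, local-station and regional-station temperature series is what produces the spatially distinct PCI estimates compared in the study.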
Weatherbee, Courtney R; Pechal, Jennifer L; Stamper, Trevor; Benbow, M Eric
2017-04-04
Energy Technology Data Exchange (ETDEWEB)
Lee, Sung Tae [Sungkyunkwan University, Seoul (Korea); Lee, Myunghun [Keimyung University, Taegu (Korea)
2001-03-01
This paper estimates the gasoline price elasticity of demand for automobile fuel efficiency in Korea to examine indirectly whether the government policy of raising fuel prices is effective in inducing lower fuel consumption, relying on a hedonic technique developed by Atkinson and Halvorsen (1984). One of the advantages of this technique is that data for a single year, without variation in the price of gasoline, are sufficient for implementing the study. Moreover, this technique enables us to circumvent the multicollinearity problem, which had reduced the reliability of the results in previous hedonic studies. The estimated elasticity of demand for fuel efficiency with respect to the price of gasoline is, on average, 0.42. (author). 30 refs., 3 tabs.
B. Bayram; GÜLER, O.; M. Yanar; O. Akbulut
2006-01-01
Data concerning body measurements, milk yield and body weights were analysed for 101 Holstein Friesian cows. Phenotypic correlations indicated significant positive relationships between estimated feed efficiency (EFE) and milk yield as well as 4% fat-corrected milk yield, and between body measurements and milk yield. However, negative correlations were found between EFE and body measurements, indicating that the taller, longer, deeper and especially heavier cows were not to be efficient ...
On the estimation stability of efficiency and economies of scale in microfinance institutions
Bolli, Thomas; Anh Vo Thi
2012-01-01
This paper uses a panel data set of microfinance institutions (MFI) across the world to compare several identification strategies of cost efficiency and economies of scale. Concretely, we contrast the non-parametric Data Envelopment Analysis (DEA) with the Stochastic Frontier Analysis (SFA) and a distribution-free identification based on time-invariant heterogeneity estimates. Furthermore, we analyze differences in production functions across regions and investigate the relevance of accounting ...
Efficient Estimation of Average Treatment Effects under Treatment-Based Sampling
Kyungchul Song
2009-01-01
Nonrandom sampling schemes are often used in program evaluation settings to improve the quality of inference. This paper considers what we call treatment-based sampling, a type of standard stratified sampling where part of the strata are based on treatments. The paper first establishes semiparametric efficiency bounds for estimators of weighted average treatment effects and average treatment effects on the treated. In doing so, it illuminates the role of information about the aggregate ...
Ng, Ka Ying Bonnie; Steer, Philip J
2016-06-01
To assess the relationship between the gestational lengths of the first and second pregnancies in the same women. Observational study. We used information from a dataset of over 500,000 pregnancies from 15 maternity units in the North West Thames, London. Data on the gestational length in days of the first pregnancy and the gestational length in days of the second pregnancy were correlated using regression models. First and second pregnancies were ascribed to the same women by identical maternal date of birth, ethnicity and maternal height (to within ±3 cm). There is a statistically significant cubic relationship between the gestational lengths of the first birth and the second birth (R 0.102, p < ...) ... days than the first pregnancy. In the 20% of women who had an interpregnancy interval of less than one year, the next pregnancy was one day shorter for every three months less than 12. Although the gestation of second pregnancies exhibits regression towards the mean of 280 days, there is still a clinically important tendency for both preterm and postdates pregnancies to recur. Prediction of an estimated delivery date for second pregnancies should take into account both the length of the first pregnancy and the interpregnancy interval if it is less than 12 months. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.
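The reported interpregnancy-interval adjustment is a simple linear rule; a sketch encoding it (an illustration of the reported trend only, not a validated clinical formula):

```python
def interval_adjustment_days(interpregnancy_months):
    """Shift (in days) to an estimated delivery date for a second pregnancy,
    encoding the trend reported in the abstract: for interpregnancy intervals
    under 12 months, the next pregnancy is about one day shorter for every
    three months below 12; at 12 months or more, no adjustment."""
    if interpregnancy_months >= 12:
        return 0.0
    return -(12 - interpregnancy_months) / 3.0
```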
Stephens, Alisa J.; Tchetgen Tchetgen, Eric J.; De Gruttola, Victor
2014-01-01
Semiparametric methods have been developed to increase efficiency of inferences in randomized trials by incorporating baseline covariates. Locally efficient estimators of marginal treatment effects, which achieve minimum variance under an assumed model, are available for settings in which outcomes are independent. The value of the pursuit of locally efficient estimators in other settings, such as when outcomes are multivariate, is often debated. We derive and evaluate semiparametric locally efficient estimators of marginal mean treatment effects when outcomes are correlated; such outcomes occur in randomized studies with clustered or repeated-measures responses. The resulting estimating equations modify existing generalized estimating equations (GEE) by identifying the efficient score under a mean model for marginal effects when data contain baseline covariates. Locally efficient estimators are implemented for longitudinal data with continuous outcomes and clustered data with binary outcomes. Methods are illustrated through application to AIDS Clinical Trial Group Study 398, a longitudinal randomized clinical trial that compared the effects of various protease inhibitors in HIV-positive subjects who had experienced antiretroviral therapy failure. In addition, extensive simulation studies characterize settings in which locally efficient estimators result in efficiency gains over suboptimal estimators and assess their feasibility in practice. Keywords: clinical trials; correlated outcomes; covariate adjustment; semiparametric efficiency. PMID: 24566369
A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.
Brusco, Michael J; Steinley, Douglas
2012-02-01
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto-efficient solutions to the multiobjective problem, many interesting unsupported Pareto-efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto-efficient set. © 2011 The British Psychological Society.
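A dominance filter over an enumerated candidate set keeps both supported and unsupported efficient solutions; a minimal sketch with all objectives minimized (the example candidates below are generic points, standing in for permutations scored by competing seriation criteria):

```python
def pareto_efficient(candidates, objectives):
    """Return the Pareto-efficient subset for objectives that are all
    minimized: a candidate survives unless some other candidate is at least
    as good on every criterion and strictly better on at least one. Unlike a
    weighted-sum scan, a dominance filter retains unsupported efficient
    points that lie inside the convex hull of the frontier."""
    scored = [(c, tuple(f(c) for f in objectives)) for c in candidates]

    def dominated(s):
        # t dominates s when t <= s componentwise and t != s
        return any(all(ti <= si for ti, si in zip(t, s)) and t != s
                   for _, t in scored)

    return [c for c, s in scored if not dominated(s)]
```

For matrix permutation problems the candidate set would come from the heuristic search rather than full enumeration, but the dominance test that defines the estimated frontier is the same.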
Shen, Biyao; Zeng, Lijiang; Li, Lifeng
2016-10-20
We present an in situ duty cycle control method that relies on monitoring the TM/TE diffraction efficiency ratio of the -1st transmitted order during photoresist development. Owing to the anisotropic structure of a binary grating, at an appropriately chosen angle of incidence, diffraction efficiencies in TE and TM polarizations vary with groove depth proportionately, while they vary with duty cycle differently. Thus, measuring the TM/TE diffraction efficiency ratio can help estimate the duty cycle during development while eliminating the effect of photoresist thickness uncertainty. We experimentally verified the feasibility of this idea by fabricating photoresist gratings with different photoresist thicknesses. The experimental results were in good agreement with theoretical predictions.
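Given a modeled, monotone duty-cycle to TM/TE-ratio relationship, the in-development estimate amounts to inverting that curve at the measured ratio; a sketch with a hypothetical curve (real curves would come from rigorous grating diffraction calculations at the chosen angle of incidence):

```python
def duty_cycle_from_ratio(curve, measured_ratio):
    """Invert a modeled duty-cycle -> TM/TE diffraction-efficiency-ratio
    curve by linear interpolation. Assumes the ratio increases monotonically
    with duty cycle over the range of interest, which is what makes the
    ratio measurement insensitive to groove depth but informative about
    duty cycle. `curve` is a list of (duty_cycle, ratio) pairs sorted by ratio."""
    duties = [d for d, _ in curve]
    ratios = [r for _, r in curve]
    if not (ratios[0] <= measured_ratio <= ratios[-1]):
        raise ValueError("measured ratio outside modeled range")
    for i in range(1, len(curve)):
        if measured_ratio <= ratios[i]:
            t = (measured_ratio - ratios[i - 1]) / (ratios[i] - ratios[i - 1])
            return duties[i - 1] + t * (duties[i] - duties[i - 1])
```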
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is implemented around a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and incur little information loss. Simulation results show that, in LGA conditions, the proposed method achieves performance improvements over other methods in both white and colored noise conditions.
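The 1D subspace-based building block such methods rely on can be illustrated with a textbook MUSIC-style spectrum for a half-wavelength uniform linear array (a generic sketch, not the SDMS algorithm itself):

```python
import numpy as np

def music_1d(R, n_sources, m, angles_deg):
    """1D MUSIC-style pseudo-spectrum for a half-wavelength-spaced uniform
    linear array of m sensors, given the sensor covariance R. Eigenvectors
    associated with the smallest eigenvalues span the noise subspace; the
    spectrum peaks where the steering vector is orthogonal to it."""
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]          # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))  # steering vector
        proj = En.conj().T @ a
        spectrum.append(1.0 / max(float(np.real(proj.conj().T @ proj)), 1e-12))
    return np.array(spectrum)
```

In an SDMS-like scheme a 1D estimator of this kind is applied once per angular dimension, with the pair-matching step joining the two sets of estimates afterwards.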
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP and field capacity (FC, is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM and indirect estimation methods (pedotransfer functions - PTFs. The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.
Estimating returns to scale and scale efficiency for energy consuming appliances
Energy Technology Data Exchange (ETDEWEB)
Blum, Helcio [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group; Okwelum, Edson O. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Efficiency Standards Group
2018-01-18
Energy consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforwardly determined. The efficient allocation of capital and energy to provide an energy service has been previously approached, and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economic efficient models of an appliance available in the market when constant returns to scale does not hold. We expand on that approach to estimate returns to scale for energy using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economic efficient frontier of the energy service delivered, under different assumptions of returns to scale. We then apply this approach to evaluate dishwashers available in the market in the U.S. Our results show that (a) for the case of dishwashers scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increase consumers’ choice of appliances.
Szelecz, Ildikó; Lösch, Sandra; Seppey, Christophe V W; Lara, Enrique; Singer, David; Sorge, Franziska; Tschui, Joelle; Perotti, M Alejandra; Mitchell, Edward A D
2018-01-08
Criminal investigations of suspected murder cases require estimating the post-mortem interval (PMI, or time after death) which is challenging for long PMIs. Here we present the case of human remains found in a Swiss forest. We have used a multidisciplinary approach involving the analysis of bones and soil samples collected beneath the remains of the head, upper and lower body and "control" samples taken a few meters away. We analysed soil chemical characteristics, mites and nematodes (by microscopy) and micro-eukaryotes (by Illumina high throughput sequencing). The PMI estimate on hair 14C-data via bomb peak radiocarbon dating gave a time range of 1 to 3 years before the discovery of the remains. Cluster analyses for soil chemical constituents, nematodes, mites and micro-eukaryotes revealed two clusters 1) head and upper body and 2) lower body and controls. From mite evidence, we conclude that the body was probably brought to the site after death. However, chemical analyses, nematode community analyses and the analyses of micro-eukaryotes indicate that decomposition took place at least partly on site. This study illustrates the usefulness of combining several lines of evidence for the study of homicide cases to better calibrate PMI inference tools.
Mohr, Rachel M; Tomberlin, Jeffery K
2015-07-01
Understanding the onset and duration of adult blow fly activity is critical to accurately estimating the period of insect activity or minimum postmortem interval (minPMI). Few, if any, reliable techniques have been developed and consequently validated for using adult fly activity to determine a minPMI. In this study, adult blow flies (Diptera: Calliphoridae) of Cochliomyia macellaria and Chrysomya rufifacies were collected from swine carcasses in rural central Texas, USA, during summer 2008 and Phormia regina and Calliphora vicina in the winter during 2009 and 2010. Carcass attendance patterns of blow flies were related to species, sex, and oocyte development. Summer-active flies were found to arrive 4-12 h after initial carcass exposure, with both C. macellaria and C. rufifacies arriving within 2 h of one another. Winter-active flies arrived within 48 h of one another. There was significant difference in degree of oocyte development on each of the first 3 days postmortem. These frequency differences allowed a minPMI to be calculated using a binomial analysis. When validated with seven tests using domestic and feral swine and human remains, the technique correctly estimated time of placement in six trials.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the available data. In our current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A, "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235 - 3238.
Balanced Exploration and Exploitation Model search for efficient epipolar geometry estimation.
Goshen, Liran; Shimshoni, Ilan
2008-07-01
The estimation of the epipolar geometry is especially difficult when the putative correspondences include a low percentage of inlier correspondences and/or a large subset of the inliers is consistent with a degenerate configuration of the epipolar geometry that is totally incorrect. This work presents the Balanced Exploration and Exploitation Model Search (BEEM) algorithm that works very well especially for these difficult scenes. The algorithm handles these two problems in a unified manner. It includes the following main features: (1) Balanced use of three search techniques: global random exploration, local exploration near the current best solution and local exploitation to improve the quality of the model. (2) Exploits available prior information to accelerate the search process. (3) Uses the best found model to guide the search process, escape from degenerate models and to define an efficient stopping criterion. (4) Presents a simple and efficient method to estimate the epipolar geometry from two SIFT correspondences. (5) Uses the locality-sensitive hashing (LSH) approximate nearest neighbor algorithm for fast putative correspondences generation. The resulting algorithm when tested on real images with or without degenerate configurations gives quality estimations and achieves significant speedups compared to the state of the art algorithms.
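For contrast with BEEM's guided search, the plain hypothesize-and-verify loop it improves on looks like this generic RANSAC sketch (epipolar geometry swapped for a simple 2D line model to keep the example short; BEEM adds prior-guided sampling, degeneracy escape and an adaptive stopping criterion on top of this pattern):

```python
import random

def ransac_line(points, iters=100, tol=0.05, seed=0):
    """Plain RANSAC for a 2D line y = a*x + b: repeatedly fit a model to a
    minimal sample (two points), count inliers within tol, and keep the
    model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # skip degenerate vertical sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With low inlier ratios the number of blind iterations this loop needs explodes, which is exactly the regime where BEEM's balanced exploration/exploitation and two-correspondence hypotheses pay off.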
Han, Wenhua; Xu, Jun; Wang, Ping; Tian, Guiyun
2014-06-12
In this paper, efficient managing particle swarm optimization (EMPSO) for high-dimensional problems is proposed to estimate defect profiles from magnetic flux leakage (MFL) signals. In the proposed EMPSO, a particle-pair model is built to strengthen the exchange of information among particles. For more efficient searching across different problem landscapes, a velocity updating scheme including three velocity updating models is also proposed. In addition, automatic particle selection for re-initialization is implemented to improve the chances of finding the optimum solution. The optimization results on six benchmark functions show that EMPSO performs well when optimizing 100-D problems. The defect simulation results demonstrate that the inversion technique based on EMPSO outperforms the one based on the self-learning particle swarm optimizer (SLPSO), and the estimated profiles remain close to the desired profiles in the presence of low noise in the MFL signal. The results estimated from real MFL signals by the EMPSO-based inversion technique also indicate that the algorithm is capable of providing an accurate solution for the defect profile with real signals. Both the simulation and experimental results show that the computing time of the EMPSO-based inversion technique is reduced by 20%-30% compared with the SLPSO-based technique.
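A plain global-best PSO shows the baseline that EMPSO's particle-pair model, multi-model velocity updates and re-initialization are designed to improve; the weights below are standard constriction-style values, not the paper's settings:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Plain global-best particle swarm optimization on a continuous
    objective f: each particle is pulled toward its personal best and the
    swarm's global best, with an inertia-damped velocity."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49   # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an MFL inversion setting, f would score a candidate defect profile by the mismatch between its simulated and measured leakage signals; here a simple sphere function stands in.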
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Bermejo, B. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)], E-mail: benito.hernandez@urjc.es; Marco-Blanco, J. [Departamento de Fisica, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain); Romance, M. [Departamento de Matematica Aplicada, Universidad Rey Juan Carlos, Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Calle Tulipan S/N, 28933-Mostoles-Madrid (Spain)
2009-02-23
Estimates for the efficiency of a tree are derived, leading to new analytical expressions for the efficiency of Barabasi-Albert trees. These expressions are used to investigate the dynamic behaviour of such networks. It is proved that preferential attachment leads to an asymptotic conservation of efficiency as Barabasi-Albert trees grow.
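The efficiency in question is the usual global efficiency of a network, the average inverse shortest-path length over ordered node pairs. A self-contained sketch (plain BFS, no external libraries) that grows a preferential-attachment tree and evaluates this quantity; the growth rule is the standard Barabasi-Albert construction with one edge per new node, not the paper's analytical machinery:

```python
import random
from collections import deque

def global_efficiency(adj):
    """E = (1/(N(N-1))) * sum over ordered pairs (i, j) of 1/d(i, j), via BFS."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

def ba_tree(n, seed=0):
    """Grow a tree by preferential attachment: each new node links to one
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    adj = {0: [1], 1: [0]}
    stubs = [0, 1]              # each node appears once per incident edge
    for new in range(2, n):
        target = rng.choice(stubs)
        adj[new] = [target]
        adj[target].append(new)
        stubs += [new, target]
    return adj

print(round(global_efficiency(ba_tree(200)), 3))
```

Evaluating the same measure at successive sizes n is the numerical counterpart of the asymptotic conservation result stated in the abstract.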
Cardozo, Gustavo G; Oliveira, Ricardo B; Farinatti, Paulo T V
2015-01-01
We tested the hypothesis that high intensity interval training (HIIT) would be more effective than moderate intensity continuous training (MIT) for improving newly emerged markers of cardiorespiratory fitness in coronary heart disease (CHD) patients, namely the relationship between ventilation and carbon dioxide production (VE/VCO2 slope), the oxygen uptake efficiency slope (OUES), and the oxygen pulse (O2P). Seventy-one patients with optimized treatment were randomly assigned to HIIT (n = 23, age = 56 ± 12 years), MIT (n = 24, age = 62 ± 12 years), or a nonexercise control group (CG) (n = 24, age = 64 ± 12 years). MIT performed 30 min of continuous aerobic exercise at 70-75% of maximal heart rate (HRmax), and HIIT performed 30 min sessions split into 2 min alternating bouts at 60%/90% HRmax (3 times/week for 16 weeks). No differences among groups (before versus after) were found for VE/VCO2 slope or OUES (P > 0.05). After training, the O2P slope increased in HIIT (22%, P < 0.05) but not in MIT (P > 0.05), while it decreased in CG (-20%, P < 0.05), becoming lower than in HIIT (P = 0.03). HIIT was more effective than MIT for improving the O2P slope in CHD patients, while the VE/VCO2 slope and OUES improved similarly in both aerobic training regimens versus controls.
An efficient hidden variable approach to minimal-case camera motion estimation.
Hartley, Richard; Li, Hongdong
2012-12-01
In this paper, we present an efficient new approach for solving two-view minimal-case problems in camera motion estimation, most notably the so-called five-point relative orientation problem and the six-point focal-length problem. Our approach is based on the hidden variable technique for solving multivariate polynomial systems. The resulting algorithm is conceptually simple: a relaxation replaces monomials in all but one of the variables, reducing the problem to the solution of sets of linear equations together with a polynomial eigenvalue problem (polyeig). To find the polynomial eigenvalues efficiently, we make novel use of several numerical techniques, including quotient-free Gaussian elimination, Levinson-Durbin iteration, and a dedicated root-polishing procedure. We have tested the approach on different minimal cases and extensions, with satisfactory results. Both the executables and the source code of the proposed algorithms are freely downloadable.
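The polyeig step named above can be sketched by companion linearization: a degree-d matrix polynomial is converted into an ordinary eigenvalue problem. This is the generic textbook construction (assuming an invertible leading coefficient), not the paper's tuned solver with quotient-free elimination and root polishing:

```python
import numpy as np

def polyeig(*A):
    """Eigenvalues of P(lam) = A[0] + lam*A[1] + ... + lam^d*A[d] via
    companion linearization (assumes the leading coefficient is invertible)."""
    d = len(A) - 1
    n = A[0].shape[0]
    C = np.zeros((d * n, d * n))
    C[:-n, n:] = np.eye((d - 1) * n)        # block shift structure
    Ad_inv = np.linalg.inv(A[-1])
    for i in range(d):                      # last block row: -A_d^{-1} A_i
        C[-n:, i * n:(i + 1) * n] = -Ad_inv @ A[i]
    return np.linalg.eigvals(C)

# sanity check on a scalar polynomial: lam^2 - 3*lam + 2 = (lam - 1)(lam - 2)
A0, A1, A2 = np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])
print(sorted(np.round(polyeig(A0, A1, A2).real, 6)))
```

The companion matrix encodes the state vector (x, lam*x, ..., lam^(d-1)*x), so its ordinary eigenvalues are exactly the polynomial eigenvalues.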
A geostatistical approach to estimate mining efficiency indicators with flexible meshes
Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2014-05-01
Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations by considering the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable ore grades to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells), as well as hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest. Lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from assembling traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, strategic efficiency indicator maps have been developed under the specific criteria imposed by exploitation standards to incorporate new extraction wells in areas that would allow production to be maintained or improved. Results show that uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in formation thickness, which calls for flexible meshes and non-stationary stochastic processes.
Energy efficiency estimation of a steam powered LNG tanker using normal operating data
Directory of Open Access Journals (Sweden)
Sinha Rajendra Prasad
2016-01-01
Full Text Available A ship’s energy efficiency performance is generally estimated by conducting special sea trials of a few hours under tightly controlled environmental conditions of calm sea, standard draft and optimum trim. This indicator is then used as the benchmark for future reference of the ship’s Energy Efficiency Performance (EEP). In practice, however, for the greater part of its operating life the ship operates in conditions far removed from the original sea trial conditions, and comparing energy performance with the benchmark indicator is therefore not truly valid. In such situations a higher fuel consumption reading from the ship's fuel meter may not be a true indicator of poor machinery performance or a dirty underwater hull. Most likely, the reasons for higher fuel consumption lie in factors other than the condition of hull and machinery, such as head wind, current, low load operations or incorrect trim [1]. Thus a better and more accurate approach to determining the energy efficiency attributable only to the main machinery and underwater hull condition is to filter out the influence of all spurious and non-standard operating conditions from the ship's fuel consumption [2]. In this paper the author identifies parameters of a suitable filter to be applied to the daily report data of a typical LNG tanker of 33000 kW shaft power to remove the effects of spurious and non-standard ship operations on its fuel consumption. The filtered daily report data have then been used to estimate the actual fuel efficiency of the ship, which is compared with the sea trials benchmark performance. Results obtained using the data filter show closer agreement with the benchmark EEP than those obtained from the monthly mini trials. The data filtering method proposed in this paper has the advantage of using the actual operational data of the ship, thus saving the cost of conducting special sea trials to estimate ship EEP. The agreement between estimated results and special sea trials EEP is
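The filtering idea described above can be sketched as a simple keep/discard rule over daily report records. The field names, thresholds and numbers below are illustrative assumptions, not the filter parameters identified in the paper:

```python
# hypothetical daily-report records; field names and values are illustrative
reports = [
    {"speed_kn": 19.2, "wind_bft": 3, "power_kw": 32500, "fuel_t": 165.0},
    {"speed_kn": 18.8, "wind_bft": 7, "power_kw": 33000, "fuel_t": 181.0},  # heavy weather
    {"speed_kn": 12.1, "wind_bft": 2, "power_kw": 15000, "fuel_t": 95.0},   # low-load passage
    {"speed_kn": 19.0, "wind_bft": 4, "power_kw": 32000, "fuel_t": 168.0},
]

def near_benchmark(r, wind_max=5, load_band=(0.85, 1.05), mcr_kw=33000):
    """Keep only days whose conditions resemble the sea-trial benchmark."""
    return (r["wind_bft"] <= wind_max
            and load_band[0] <= r["power_kw"] / mcr_kw <= load_band[1])

kept = [r for r in reports if near_benchmark(r)]
# fuel per nautical mile over the filtered days only
fuel_per_mile = [r["fuel_t"] / (r["speed_kn"] * 24) for r in kept]
print(len(kept), round(sum(fuel_per_mile) / len(kept), 3))
```

Only the two calm-weather, near-design-load days survive the filter, so the resulting consumption figure reflects hull and machinery condition rather than weather or low-load operation.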
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient in the well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated on a large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
Using the SEBAL Model to Estimate Irrigation Water Efficiency & Water Requirement of Alfalfa Crop
Zeyliger, Anatoly; Ermolaeva, Olga
2013-04-01
The sustainability of irrigation is a complex and comprehensive undertaking, requiring attention to much more than hydraulics, chemistry, and agronomy. A special combination of human, environmental, and economic factors exists in each irrigated region and must be recognized and evaluated. One way to evaluate the efficiency of irrigation water use for crop production is to consider the so-called crop-water production functions, which express the relation between the yield of a crop and the quantity of water applied to it or consumed by it. The term has been used in a somewhat ambiguous way: some authors have defined crop-water production functions as the relation between yield and the total amount of water applied, whereas others have defined them as the relation between yield and seasonal evapotranspiration (ET). When irrigation water is used with high efficiency, the volume of water applied is less than the potential evapotranspiration (PET); then, assuming no significant change of soil moisture storage from the beginning of the growing season to its end, the volume of water applied may be roughly equal to ET. When irrigation water is used with low efficiency, the volume of water applied exceeds PET, and the excess must go either to augmenting soil moisture storage (end-of-season moisture being greater than start-of-season moisture) or to runoff and/or deep percolation beyond the root zone. In the presented contribution, results of a case study estimating biomass and leaf area index (LAI) for irrigated alfalfa by the SEBAL algorithm are discussed. The field study was conducted to compare ground biomass of alfalfa at several irrigated fields (provided by an agricultural farm) in the Saratov and Volgograd Regions of Russia, during the vegetation period of 2012 from April till September. All operations, from importing the data to calculating the output data, were carried out by the eLEAF company and uploaded to the Fieldlook web
Relative Efficiency of ALS and InSAR for Biomass Estimation in a Tanzanian Rainforest
Directory of Open Access Journals (Sweden)
Endre Hofstad Hansen
2015-08-01
Full Text Available Forest inventories based on field sample surveys, supported by auxiliary remotely sensed data, have the potential to provide transparent and confident estimates of forest carbon stocks required in climate change mitigation schemes such as the REDD+ mechanism. The field plot size is of importance for the precision of carbon stock estimates, and better information on the relationship between plot size and precision can be useful in designing future inventories. Precision estimates of forest biomass estimates developed from 30 concentric field plots with sizes of 700, 900, …, 1900 m2, sampled in a Tanzanian rainforest, were assessed in a model-based inference framework. Remotely sensed data from airborne laser scanning (ALS) and interferometric synthetic aperture radar (InSAR) were used as auxiliary information. The findings indicate that larger field plots are relatively more efficient for inventories supported by remotely sensed ALS and InSAR data. A simulation showed that a pure field-based inventory would have to comprise 3.5–6.0 times as many observations for plot sizes of 700–1900 m2 to achieve the same precision as an inventory supported by ALS data.
Yebra, Marta; van Dijk, Albert
2015-04-01
Water use efficiency (WUE, the amount of transpiration or evapotranspiration per unit gross (GPP) or net CO2 uptake) is key in all areas of plant production and forest management applications. Therefore, mutually consistent estimates of GPP and transpiration are needed to analyse WUE without introducing artefacts that might arise from combining independently derived GPP and ET estimates. GPP and transpiration are physiologically linked at the ecosystem level by the canopy conductance (Gc). Estimates of Gc can be obtained by scaling stomatal conductance (Kelliher et al. 1995) or inferred from ecosystem-level measurements of gas exchange (Baldocchi et al., 2008). To derive large-scale or indeed global estimates of Gc, satellite remote sensing based methods are needed. In a previous study, we used water vapour flux estimates derived from eddy covariance flux tower measurements at 16 Fluxnet sites world-wide to develop a method to estimate Gc using MODIS reflectance observations (Yebra et al. 2013). We combined those estimates with the Penman-Monteith combination equation to derive transpiration (T). The resulting T estimates compared favourably with flux tower estimates (R2 = 0.82, RMSE = 29.8 W m-2). Moreover, the method allowed a single parameterisation for all land cover types, which avoids artefacts resulting from land cover classification. In subsequent research (Yebra et al, in preparation) we used the same satellite-derived Gc values within a process-based but simple canopy GPP model to constrain GPP predictions. The developed model uses a 'big-leaf' description of the plant canopy to estimate the mean GPP flux as the lesser of a conductance-limited and a radiation-limited GPP rate. The conductance-limited rate was derived assuming that transport of CO2 from the bulk air to the intercellular leaf space is limited by molecular diffusion through the stomata. The radiation-limited rate was estimated assuming that it is proportional to the absorbed photosynthetically
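The "lesser of a conductance-limited and a radiation-limited rate" construction can be sketched in a few lines. The diffusion form and the light-use-efficiency form below are generic big-leaf approximations with illustrative placeholder parameters, not the calibrated model of Yebra et al.:

```python
def gpp_big_leaf(gc_mol, ca_ppm=400.0, ci_ratio=0.7, apar=800.0, lue=0.07):
    """Big-leaf sketch: GPP is the lesser of a conductance-limited rate
    (CO2 diffusion through stomata) and a radiation-limited rate
    (proportional to absorbed PAR). All parameter values are illustrative."""
    conductance_limited = gc_mol * ca_ppm * (1.0 - ci_ratio)
    radiation_limited = lue * apar
    return min(conductance_limited, radiation_limited)

# low conductance: diffusion-limited; high conductance: light-limited
print(round(gpp_big_leaf(0.2), 1), round(gpp_big_leaf(1.0), 1))
```

Because the same Gc drives both this GPP estimate and the Penman-Monteith transpiration, the resulting WUE = GPP/T is internally consistent by construction, which is the point made in the abstract.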
The use of 32P and 15N to Estimate Fertilizer Efficiency in Oil Palm
Directory of Open Access Journals (Sweden)
Elsje L. Sisworo
2004-01-01
Full Text Available Oil palm has become an important commodity for Indonesia, reaching an area of 2.6 million ha at the end of 1998. It is mostly cultivated in highly weathered acid soils, usually Ultisols and Oxisols, which are known for their low fertility with respect to major nutrients such as N and P. This study was conducted to identify the most active root zone of oil palm and to apply urea fertilizer to that zone in such soils to obtain high N-efficiency. A carrier-free KH232PO4 solution was used to determine the active root zone of oil palm by applying 32P around the plant in twenty holes. After the most active root zone had been determined, urea was applied at this zone in one, two, and three splits, respectively. To estimate the N-fertilizer efficiency of urea, 15N-labelled ammonium sulphate was used, added at the same amount of 16 g 15N plant-1. The use of 32P was able to distinguish several root zones, 1.5 m - 2.5 m from the plant stem at 5 cm and 15 cm soil depths, and showed that the most active root zone was at a 1.5 m distance from the plant stem and a 5 cm soil depth. Urea placed at this most active root zone in one, two, and three splits showed different N-efficiencies; the highest N-efficiency was obtained when urea was applied in two splits.
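The bookkeeping behind such 15N efficiency estimates is the standard isotope-dilution calculation: the fraction of plant N derived from fertilizer (Ndff) is the ratio of 15N atom% excess in the plant to that in the fertilizer. A sketch with hypothetical numbers, not the study's data:

```python
def n_fertilizer_efficiency(plant_atom_excess, fert_atom_excess,
                            plant_n_uptake_g, fert_n_applied_g):
    """Standard 15N isotope-dilution bookkeeping (illustrative values only).
    Ndff = fraction of plant N derived from the labelled fertilizer."""
    ndff = plant_atom_excess / fert_atom_excess
    n_from_fert = ndff * plant_n_uptake_g          # g N taken up from fertilizer
    return 100.0 * n_from_fert / fert_n_applied_g  # % N-efficiency

# hypothetical atom% excess and uptake figures, not from the study
print(round(n_fertilizer_efficiency(0.8, 5.0, 40.0, 16.0), 1))
```

Comparing this percentage across split-application treatments is what identifies the two-split strategy as the most efficient in the abstract.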
Hatzer-Grubwieser, P.; Bauer, C.; Parson, W.; Unterberger, S. H.; Kuhn, V.; Pemberger, N.; Pallua, Anton K.; Recheis, W.; Lackner, R.; Stalder, R.; Pallua, J. D.
2015-01-01
In this study, different state-of-the-art visualization methods such as micro-computed tomography (micro-CT), mid-infrared (MIR) microscopic imaging and energy dispersive X-ray (EDS) mapping were evaluated for studying human skeletal remains to determine the post-mortem interval (PMI). PMI-specific features were identified and visualized by overlaying molecular imaging data with morphological tissue structures generated by radiological techniques and with microscopic images gained from confocal microscopy (Infinite Focus, IFM). In this way, a more distinct picture of the processes occurring during the PMI, as well as a more realistic approximation of the PMI, was achieved. It could be demonstrated that the results gained, in combination with multivariate data analysis, can be used to predict the Ca/C ratio and bone volume (BV) over total volume (TV) for PMI estimation. A statistical limitation of this study is the small sample size; future work will be based on more specimens to develop a screening tool for PMI based on the outcome of this multidimensional approach. PMID:25878731
Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki
2017-12-01
The purpose of this study was to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates improved with use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system gave reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising diagnostic accuracy.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time achieved by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. Determining the uncertainty of the efficiency gain arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study was to evaluate the applicability of the F distribution and of standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism by which high statistical weights are acquired in the fixed-collision correlated sampling method is explained and a mitigation strategy is proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
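A shortest bootstrap confidence interval of the kind described can be sketched as follows. The heavy-tailed toy sample stands in for real per-history correlated-sampling output (mimicking the few high-weight photons noted in the abstract); nothing here reproduces the study's Monte Carlo geometry:

```python
import numpy as np

def shortest_interval(samples, level=0.95):
    """Shortest interval containing `level` of the sampled distribution."""
    s = np.sort(samples)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]   # all windows covering k points
    i = widths.argmin()
    return s[i], s[i + k - 1]

rng = np.random.default_rng(1)
# toy stand-in for per-history contributions; the heavy lognormal tail
# mimics a few photons with very large statistical weights
contrib = rng.lognormal(mean=0.0, sigma=1.5, size=2000)
gain_hat = contrib.mean()

# bootstrap the sampling distribution of the estimator
boot = np.array([rng.choice(contrib, contrib.size, replace=True).mean()
                 for _ in range(4000)])
lo, hi = shortest_interval(boot, 0.95)
print(lo <= gain_hat <= hi)
```

For skewed distributions like this one, the shortest interval is asymmetric about the point estimate, which is exactly why a normal-theory (F distribution) interval can underestimate the true uncertainty.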
An estimation of the column efficiency made by analyzing tailing peak profiles.
Miyabe, Kanji; Matsumoto, Yuko; Niwa, Yusuke; Ando, Nobuho; Guiochon, Georges
2009-11-20
It has been shown previously that most columns are not radially homogeneous but exhibit radial distributions of the mobile phase flow velocity and the local efficiency. Both distributions are best approximated by fourth-order polynomials, with the velocity in the column center being maximum for most packed columns and minimum for monolithic columns. These distributions may be an important source of tailing of elution peaks. Numerical calculation of elution peaks shows how peak tailing is related to the characteristics of these two distributions. An approach is proposed that permits estimation of the true efficiency and of the degree of column radial heterogeneity by inverting this calculation, using the experimentally measured tailing profiles of the elution peaks. The method was applied to two previously reported cases of tailing peak profiles. The results obtained prove its validity and demonstrate that this numerical method is effective for deriving the true column efficiency from experimental tailing profiles.
Directory of Open Access Journals (Sweden)
Liu Jianhua
2010-05-01
Full Text Available Abstract Background DNA replication is a fundamental biological process during the S phase of cell division. It is initiated from several hundred origins along the whole chromosome, with different firing efficiencies (or frequencies of usage). Direct measurement of origin firing efficiency by techniques such as DNA combing is time-consuming and cannot measure all origins. Recent genome-wide studies of DNA replication approximated origin firing efficiency by indirectly measuring other quantities related to replication. However, these approximation methods do not reflect properties of origin firing and may lead to inappropriate estimations. Results In this paper, we develop a probabilistic model - the Spanned Firing Time Model (SFTM) - to characterize the DNA replication process. The proposed model reflects current understanding of DNA replication: origins in an individual cell may initiate replication randomly within a time window, but the population average exhibits a temporal program with some origins replicated early and others late. By estimating DNA origin firing time and fork moving velocity from genome-wide time-course S-phase copy number variation data, we could estimate the firing efficiency of all origins. The estimated firing efficiency correlates well with previous studies in fission and budding yeasts. Conclusions The new probabilistic model enables sensitive identification of origins as well as genome-wide estimation of origin firing efficiency. We have successfully estimated the firing efficiencies of all origins in S. cerevisiae, S. pombe and human chromosomes 21 and 22.
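The population-level notion of firing efficiency (an origin counts as fired only if its own stochastic firing time beats the arrival of a fork launched from a neighbouring origin) can be illustrated with a toy Monte Carlo. The time windows, inter-origin distance and fork velocity below are arbitrary assumptions, not SFTM estimates:

```python
import random

def firing_efficiency(t_window=(10, 30), neighbor_window=(0, 20),
                      distance=40.0, fork_v=2.0, trials=100000, seed=0):
    """Monte Carlo sketch: an origin 'fires' only if its random firing time
    comes before a fork from the neighbouring origin replicates it passively."""
    rng = random.Random(seed)
    fired = 0
    for _ in range(trials):
        t_self = rng.uniform(*t_window)                       # own firing time
        t_arrive = rng.uniform(*neighbor_window) + distance / fork_v
        if t_self < t_arrive:
            fired += 1
    return fired / trials

print(round(firing_efficiency(), 3))
```

With these numbers the analytic answer is 0.875; SFTM effectively infers the window parameters and fork velocity from copy-number time courses and then evaluates this probability for every origin.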
Hardie, L C; Armentano, L E; Shaver, R D; VandeHaar, M J; Spurlock, D M; Yao, C; Bertics, S J; Contreras-Govea, F E; Weigel, K A
2015-04-01
Prior to genomic selection on a trait, a reference population needs to be established to link marker genotypes with phenotypes. For costly and difficult-to-measure traits, international collaboration and sharing of data between disciplines may be necessary. Our aim was to characterize the combining of data from nutrition studies carried out under similar climate and management conditions to estimate genetic parameters for feed efficiency. Furthermore, we postulated that data from the experimental cohorts within these studies can be used to estimate the net energy of lactation (NE(L)) densities of diets, which can provide estimates of energy intakes for use in calculating the feed efficiency metric residual feed intake (RFI) and potentially reduce the effect of variation in energy density of diets. Individual feed intakes and corresponding production and body measurements were obtained from 13 Midwestern nutrition experiments. Two measures of RFI were considered, RFI(Mcal) and RFI(kg), which involved the regression of NE(L) intake (Mcal/d) or dry matter intake (DMI; kg/d) on 3 expenditures: milk energy, energy gained or lost in body weight change, and energy for maintenance. In total, 677 records from 600 lactating cows between 50 and 275 d in milk were used. Cows were divided into 46 cohorts based on dietary or nondietary treatments as dictated by the nutrition experiments. The realized NE(L) densities of the diets (Mcal/kg of DMI) were estimated for each cohort by totaling the average daily energy used in the 3 expenditures for cohort members and dividing by the cohort's total average daily DMI. The NE(L) intake for each cow was then calculated by multiplying her DMI by her cohort's realized energy density. Mean energy density was 1.58 Mcal/kg. Heritability estimates for RFI(kg) and RFI(Mcal) in a single-trait animal model did not differ, at 0.04 for both measures. Information about realized energy density could be useful in standardizing intake data from
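The RFI construction described (intake regressed on the three energy expenditures, with the residual taken as the efficiency metric) can be sketched on simulated records; every number below is synthetic and only illustrates the regression step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
milk_e = rng.normal(30, 5, n)        # milk energy output, Mcal/d
bw_change_e = rng.normal(0, 2, n)    # energy in body-weight change, Mcal/d
maint_e = rng.normal(18, 2, n)       # maintenance energy, Mcal/d
true_eff = rng.normal(0, 1.5, n)     # animal-specific efficiency deviation
intake = milk_e + bw_change_e + maint_e + true_eff   # NE(L) intake, Mcal/d

# RFI = residual of regressing intake on the three energy expenditures
X = np.column_stack([np.ones(n), milk_e, bw_change_e, maint_e])
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
rfi = intake - X @ beta

# residuals average to zero and recover the simulated efficiency deviations
print(abs(rfi.mean()) < 1e-8, np.corrcoef(rfi, true_eff)[0, 1] > 0.9)
```

A cow with negative RFI eats less than her expenditures predict, i.e. is more efficient; swapping NE(L) intake for DMI on the left-hand side gives the paper's RFI(Mcal) versus RFI(kg) pair.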
Xu, Huihui; Jiang, Mingyan
2015-07-01
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
The set of commercially available chemical substances that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, the World Meteorological Organization, or the Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimating radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE in order to understand the sources of deviation from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and the values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on the basis set and are largely responsible for differences in computed values of RE in this study. Deviations of
Efficient Estimation of Dynamic Density Functions with Applications in Streaming Data
Qahtan, Abdulhakim
2016-05-11
Recent advances in computing technology allow for collecting vast amounts of data that arrive continuously in the form of streams. Mining data streams is challenged by the speed and volume of the arriving data. Furthermore, the underlying distribution of the data changes over time in unpredicted scenarios. To reduce the computational cost, data streams are often studied through condensed representations, e.g., the Probability Density Function (PDF). This thesis aims at developing an online density estimator that builds a model called KDE-Track for characterizing the dynamic density of data streams. KDE-Track estimates the PDF of the stream at a set of resampling points and uses interpolation to estimate the density at any given point. To reduce the interpolation error and computational complexity, we introduce adaptive resampling, where more/fewer resampling points are used in high/low curved regions of the PDF. The PDF values at the resampling points are updated online to provide an up-to-date model of the data stream. Compared with other existing online density estimators, KDE-Track is often more accurate (as reflected by smaller error values) and more computationally efficient (as reflected by shorter running times). The anytime-available PDF estimated by KDE-Track can be applied to visualizing the dynamic density of data streams, outlier detection and change detection in data streams. In this thesis work, the first application is to visualize the taxi traffic volume in New York City. Utilizing KDE-Track allows for visualizing and monitoring the traffic flow in real time without extra overhead and provides insightful analysis of pick-up demand that can be utilized by service providers to improve service availability. The second application is to detect outliers in data streams from sensor networks based on the estimated PDF. The method detects outliers accurately and outperforms baseline methods designed for detecting and cleaning outliers in sensor data. The
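The core KDE-Track idea, maintaining density values at resampling points online and interpolating between them, can be sketched as follows. A fixed grid and bandwidth are assumed here; the thesis's adaptive resampling and change tracking are omitted:

```python
import numpy as np

class StreamingKDE:
    """Sketch of a KDE-Track-style estimator: keep Gaussian-KDE values at
    fixed resampling points, update them per arriving sample, and answer
    density queries by linear interpolation between grid points."""
    def __init__(self, grid, bandwidth=0.3):
        self.grid = np.asarray(grid, float)   # resampling points
        self.h = bandwidth
        self.density = np.zeros_like(self.grid)
        self.n = 0

    def update(self, x):
        # running-average update of the kernel estimate at each grid point
        k = (np.exp(-0.5 * ((self.grid - x) / self.h) ** 2)
             / (self.h * np.sqrt(2 * np.pi)))
        self.n += 1
        self.density += (k - self.density) / self.n

    def pdf(self, x):
        return float(np.interp(x, self.grid, self.density))

est = StreamingKDE(np.linspace(-4, 4, 81))
rng = np.random.default_rng(0)
for x in rng.normal(0.0, 1.0, 5000):
    est.update(x)
print(round(est.pdf(0.0), 2))   # near the N(0,1) density at 0, slightly smoothed
```

Each update costs O(grid size) regardless of how many samples have been seen, which is what makes the representation suitable for streams; adaptive resampling then concentrates grid points where the PDF curves most.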
Thekkoot, D M; Kemp, R A; Rothschild, M F; Plastow, G S; Dekkers, J C M
2016-11-01
Increased milk production due to high litter size, coupled with low feed intake, results in excessive mobilization of sow body reserves during lactation, which can have detrimental effects on future reproductive performance. One possibility for preventing this is to genetically improve sow lactation performance, along with other traits of interest. The aim of this study was to estimate breed-specific genetic parameters (by parity, between parities, and across parities) for traits associated with lactation and reproduction in Yorkshire and Landrace sows. Performance data were available for 2,107 sows with 1 to 3 parities (3,424 farrowings total). Sow back fat, loin depth and BW at farrowing, sow feed intake (SFI), and body weight loss (BWL) during lactation showed moderate heritabilities (0.21 to 0.37) in both breeds, whereas back fat loss (BFL), loin depth loss (LDL), and litter weight gain (LWG) showed low heritabilities (0.12 to 0.18). Among the efficiency traits, sow lactation efficiency showed extremely low heritability (near zero) in Yorkshire sows but a slightly higher (0.05) estimate in Landrace sows, whereas sow residual feed intake (SRFI) and energy balance traits showed moderate heritabilities in both breeds. Genetic correlations indicated that SFI during lactation had strong negative genetic correlations with body resource mobilization traits (BWL, BFL, and LDL; -0.35 to -0.70), and tissue mobilization traits in turn had strong positive genetic correlations with LWG (+0.24 to +0.54; P < 0.05). However, SFI did not have a significant genetic correlation with LWG. These genetic correlations suggest that SFI during lactation is predominantly used for reducing sow body tissue losses rather than for milk production. Estimates of genetic correlations for the same trait measured in parities 1 and 2 ranged from 0.64 to 0.98, which suggests that first and later parities should be treated as genetically different for some traits. Genetic correlations estimated between
Buttazzoni, L; Mao, I L
1989-03-01
Net efficiencies of converting intake energy into energy for maintenance, milk production, and body weight change in a lactation were estimated for each of 79 Holstein cows by a two-stage multiple regression model. Cows were from 16 paternal half-sib families, each of which had members in at least two of the six herds. Each cow was recorded for milk yield, net energy intake, and three efficiency traits. These were analyzed in a multitrait model containing the same 14 fixed subclasses of herd by season by parity and a random factor of sires for each of the five traits. Restricted maximum likelihood estimates of sire and residual (co)variance components were obtained by an expectation-maximization algorithm with canonical transformations. Between milk yield and net energy intake, net energy efficiencies for milk yield, maintenance, and body weight change, the estimated phenotypic correlations were .36, -.02, .08, and -.06, while the genetic correlations were .92, .56, .02, and -.32, respectively. Both genetic and phenotypic correlations were zero between net energy efficiency of maintenance and that of milk yield, and .17 between net energy efficiency of body weight change and that of milk yield. The estimated genetic correlation between net efficiency for lactation and milk yield is approximately 60% of that between gross efficiency and milk yield. With a heritability of .32 (versus .49 for gross efficiency), net energy efficiency for milk yield may be worth consideration for genetic selection in certain dairy cattle populations.
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO), and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of the solutions of the different algorithms is measured in detail, and traveler behavior is analyzed with a view to a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can serve as an efficient approach to estimating transit O-D matrices.
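The MFO update at the heart of the aggregation model can be sketched as follows. This is a minimal, generic sketch of Mirjalili's moth-flame optimizer applied to an arbitrary cost function, not the authors' O-D aggregation code; all function names, defaults, and the test objective are illustrative assumptions.

```python
import numpy as np

def mfo_minimize(f, low, high, n_moths=30, n_iter=200, b=1.0, seed=0):
    """Minimal moth-flame optimizer (MFO) sketch: moths fly on logarithmic
    spirals around 'flames' (the best solutions found so far), and the number
    of flames shrinks over iterations to shift from exploration to
    exploitation."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    dim = low.size
    moths = rng.uniform(low, high, size=(n_moths, dim))
    flames, flame_fit = None, None
    for it in range(n_iter):
        fitness = np.array([f(m) for m in moths])
        if flames is None:
            order = np.argsort(fitness)
            flames, flame_fit = moths[order].copy(), fitness[order].copy()
        else:  # flames = best n_moths positions seen so far (elitist memory)
            comb = np.vstack([flames, moths])
            comb_fit = np.concatenate([flame_fit, fitness])
            keep = np.argsort(comb_fit)[:n_moths]
            flames, flame_fit = comb[keep], comb_fit[keep]
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
        a = -1.0 - it / n_iter  # spiral parameter shrinks from -1 to -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)  # surplus moths spiral round the last flame
            d = np.abs(flames[j] - moths[i])
            t = (a - 1.0) * rng.random(dim) + 1.0
            moths[i] = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j],
                               low, high)
    return flames[0], flame_fit[0]
```

In the paper's setting, `f` would score a candidate stop aggregation (e.g., the sum of intra-cluster distances); here any cost over a box-bounded vector works.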
Improved barometric and loading efficiency estimates using packers in monitoring wells
Cook, Scott B.; Timms, Wendy A.; Kelly, Bryce F. J.; Barbour, S. Lee
2017-08-01
Measurement of barometric efficiency (BE) from open monitoring wells or loading efficiency (LE) from formation pore pressures provides valuable information about the hydraulic properties and confinement of a formation. Drained compressibility (α) can be calculated from LE (or BE) in confined and semi-confined formations and used to calculate specific storage (Ss). Ss and α are important for predicting the effects of groundwater extraction and therefore for sustainable extraction management. However, in low hydraulic conductivity (K) formations or large-diameter monitoring wells, time lags caused by well storage may be so long that BE cannot be properly assessed in open monitoring wells in confined or unconfined settings. This study demonstrates the use of packers to reduce monitoring-well time lags and enable reliable assessments of LE. In one example from a confined, high-K formation, estimates of BE in the open monitoring well were in good agreement with shut-in LE estimates. In a second example, from a low-K confining clay layer, BE could not be adequately assessed in the open monitoring well due to time lag. Sealing the monitoring well with a packer reduced the time lag sufficiently that a reliable assessment of LE could be made from a 24-day monitoring period. The shut-in response confirmed confined conditions at the well screen and provided confidence in the assessment of hydraulic parameters. A short (time-lag-dependent) period of high-frequency shut-in monitoring can therefore enhance understanding of hydrogeological systems and potentially provide hydraulic parameters to improve conceptual/numerical groundwater models.
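The basic BE calculation the abstract builds on can be sketched as a regression of water-level changes on barometric changes. This is a deliberately simplified illustration, not the authors' workflow: real assessments also detrend the records and correct for Earth tides and the well-storage time lag that the packer approach mitigates.

```python
import numpy as np

def barometric_efficiency(well_head, baro_head):
    """Estimate barometric efficiency (BE) as the negative zero-intercept
    least-squares slope of open-well water-level changes against barometric
    changes, with both series expressed in equivalent head units."""
    dh = np.diff(np.asarray(well_head, float))
    db = np.diff(np.asarray(baro_head, float))
    return -np.sum(dh * db) / np.sum(db * db)
```

For a confined formation, LE ≈ 1 − BE, from which drained compressibility and specific storage follow.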
Hagen, David R; Tidor, Bruce
2015-02-01
A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology (the set of interactions) of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.
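The linearized-evidence idea can be illustrated with a generic Laplace (Gaussian) approximation to each topology's marginal likelihood, followed by normalization across topologies. This is a stand-in sketch, assuming a numerical Hessian rather than the paper's Fisher-information-based construction; function names are illustrative.

```python
import numpy as np

def laplace_log_evidence(neg_log_post, theta_map, hess):
    """Laplace approximation to a topology's log evidence:
    log Z ~= -U(theta*) + (d/2) log 2*pi - 1/2 log det H, where U is the
    negative log of the unnormalized posterior, theta* its minimizer, and
    H the Hessian of U at theta*."""
    d = len(theta_map)
    _, logdet = np.linalg.slogdet(hess)
    return (-neg_log_post(np.asarray(theta_map, float))
            + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet)

def topology_probabilities(log_evidences):
    """Normalize per-topology log evidences (equal topology priors assumed)
    into a posterior probability distribution over topologies."""
    le = np.asarray(log_evidences, float)
    w = np.exp(le - le.max())  # subtract max for numerical stability
    return w / w.sum()
```

For a quadratic U (i.e., a Gaussian posterior) the approximation is exact, which provides a convenient sanity check.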
Madre, J L; Yamamoto, T; Nakagawa, N; Kitamura, R
2004-01-01
Hazard-based duration models have been applied in the transportation research field to represent choices or events along the time dimension. A simulation analysis is carried out in this study to examine the efficiency of nonparametric estimation of the baseline hazard function in comparison with parametric estimation when the distribution is correctly assumed.
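The nonparametric side of that comparison is typically a cumulative-hazard estimator such as Nelson-Aalen, which can be sketched in a few lines. This is the textbook estimator shown for illustration, not the study's code.

```python
import numpy as np

def nelson_aalen(times, observed):
    """Nonparametric cumulative-hazard (Nelson-Aalen) estimate: at each
    distinct event time add d_i / n_i, the number of events divided by the
    number of subjects still at risk. Censored records (observed=False)
    contribute to the risk set but not to the event counts."""
    t = np.asarray(times, float)
    e = np.asarray(observed, bool)
    H, cum = [], 0.0
    for u in np.unique(t[e]):
        n_risk = np.sum(t >= u)       # subjects still at risk at time u
        d = np.sum((t == u) & e)      # events occurring at time u
        cum += d / n_risk
        H.append((u, cum))
    return H
```

A parametric alternative would instead posit, say, a Weibull baseline hazard and estimate its two parameters by maximum likelihood.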
FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems
Sundaramoorthi, Ganesh
2014-06-01
We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
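The smoothing-and-thresholding update described above can be sketched for the multi-label Chan-Vese case. This is an illustrative reconstruction, not the paper's Matlab code: a box filter stands in for the smoothing step, and the region means are taken as given rather than re-estimated in alternation as the full algorithm would.

```python
import numpy as np

def _box_smooth(a, r):
    """Separable 2-D box filter of radius r with edge padding (a stand-in
    for the smoothing operator used in the paper)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    p = np.pad(a, r, mode="edge")
    p = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, p)

def multilabel_chan_vese_step(image, means, r=2):
    """One global multi-label update: build each label's data cost
    (I - mu_k)^2, smooth it, and assign every pixel the label with the
    lowest smoothed cost."""
    costs = np.stack([_box_smooth((image - m) ** 2, r) for m in means])
    return np.argmin(costs, axis=0)
```

Because the assignment is a global argmin over smoothed costs, it is insensitive to fine image details that trap purely local curve-evolution solvers.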
Dmitruk, I.; Shynkarenko, Ye; Dmytruk, A.; Aleksiuk, D.; Kadan, V.; Korenyuk, P.; Zubrilin, N.; Blonskiy, I.
2016-12-01
We report experience of assembling an optical Kerr gate setup at the Femtosecond Laser Center for collective use at the Institute of Physics of the National Academy of Sciences of Ukraine. This offers an inexpensive solution to the problem of time-resolved luminescence spectroscopy. Practical aspects of its design and alignment are discussed and its main characteristics are evaluated. Theoretical analysis and numerical estimates are performed to evaluate the efficiency and the response time of an optical Kerr gate setup for fluorescence spectroscopy with subpicosecond time resolution. The theoretically calculated efficiency is compared with the experimentally measured one of ~12% for Crown 5 glass and ~2% for fused silica. Other characteristics of the Kerr gate are analyzed and ways to improve them are discussed. A method of compensation for the refractive index dispersion in a Kerr gate medium is suggested. Examples of the application of the optical Kerr gate setup for measurements of the time-resolved luminescence of Astra Phloxine and Coumarin 30 dyes and both linear and nonlinear chirp parameters of a supercontinuum are presented.
Directory of Open Access Journals (Sweden)
Bethan E. Phillips
2017-09-01
Full Text Available Introduction: Regular physical activity (PA) can reduce the risk of developing type 2 diabetes, but adherence to time-orientated (150 min week−1 or more) PA guidelines is very poor. A practical and time-efficient PA regime that is equally efficacious at controlling risk factors for cardio-metabolic disease is one solution to this problem. Herein, we evaluate a new time-efficient and genuinely practical high-intensity interval training (HIT) protocol in men and women with pre-existing risk factors for type 2 diabetes. Materials and methods: One hundred eighty-nine sedentary women (n = 101) and men (n = 88) with impaired glucose tolerance and/or a body mass index >27 kg m−2 [mean (range) age: 36 (18–53) years] participated in this multi-center study. Each completed a fully supervised 6-week HIT protocol at work-loads equivalent to ~100 or ~125% V̇O2 max. Change in V̇O2 max was used to monitor protocol efficacy, while Actiheart™ monitors were used to determine PA during four weeklong periods. Mean arterial (blood) pressure (MAP) and fasting insulin resistance [homeostatic model assessment (HOMA-IR)] were the key health biomarker outcomes. Results: The higher intensity bouts (~125% V̇O2 max) used during a 5-by-1 min HIT protocol resulted in a robust increase in V̇O2 max (136 participants, +10.0%, p < 0.001; large effect size). 5-by-1 HIT reduced MAP (~3%; p < 0.001) and HOMA-IR (~16%; p < 0.01). Physiological responses were similar in men and women, and a sizeable proportion of the training-induced changes in V̇O2 max, MAP, and HOMA-IR was retained 3 weeks after cessation of training. The supervised HIT sessions accounted for the entire quantifiable increase in PA, which equated to 400 metabolic equivalent (MET) min week−1. Meta-analysis indicated that 5-by-1 HIT matched the efficacy and variability of a time-consuming 30-week PA program on V̇O2 max, MAP, and HOMA-IR. Conclusion: With a total time-commitment of
Betowski, Don; Bevington, Charles; Allison, Thomas C
2016-01-19
Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Schemes that estimate radiative efficiency (RE) from computational chemistry are useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
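The structure of a Pinnock-type estimate can be sketched as a scaled band sum. The per-wavenumber forcing curve is passed in as a callable placeholder (the real scheme uses the tabulated Pinnock et al. instantaneous forcing function, which is not reproduced here); the two scale factors mirror the paper's finding that scaling both frequencies and intensities improves agreement.

```python
import numpy as np

def radiative_efficiency(freqs_cm1, intensities, forcing_per_abs,
                         freq_scale=1.0, intensity_scale=1.0):
    """Pinnock-type radiative-efficiency estimate from computed IR data:
    scale the harmonic frequencies and band intensities, then sum each
    band's intensity times the radiative forcing per unit absorption at its
    (scaled) wavenumber, as supplied by `forcing_per_abs`."""
    nu = np.asarray(freqs_cm1, float) * freq_scale
    A = np.asarray(intensities, float) * intensity_scale
    return float(np.sum(A * np.array([forcing_per_abs(v) for v in nu])))
```

Any systematic error in the computed intensities scales the result linearly, which is consistent with the abstract's observation that intensities dominate the basis-set dependence of RE.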
Directory of Open Access Journals (Sweden)
David Simoncini
Full Text Available Fragment assembly is a powerful method of protein structure prediction that builds protein models from a pool of candidate fragments taken from known structures. Stochastic sampling is subsequently used to refine the models. The structures are first represented as coarse-grained models and then as all-atom models for computational efficiency. Many models have to be generated independently due to the stochastic nature of the sampling methods used to search for the global minimum in a complex energy landscape. In this paper we present EdaFold(AA), a fragment-based approach that shares information between the generated models and steers the search towards native-like regions. A distribution over fragments is estimated from a pool of low energy all-atom models. This iteratively-refined distribution is used to guide the selection of fragments during the building of models for subsequent rounds of structure prediction. The use of an estimation of distribution algorithm enabled EdaFold(AA) to reach lower energy levels and to generate a higher percentage of near-native models. [Formula: see text] uses an all-atom energy function and produces models with atomic resolution. We observed an improvement in energy-driven blind selection of models on a benchmark of EdaFold(AA) in comparison with the [Formula: see text] AbInitioRelax protocol.
Towards the Estimation of an Efficient Benchmark Portfolio: The Case of Croatian Emerging Market
Directory of Open Access Journals (Sweden)
Dolinar Denis
2017-04-01
Full Text Available The fact that cap-weighted indices provide an inefficient risk-return trade-off is well known today. Various research approaches have evolved suggesting alternatives to cap-weighting in an effort to come up with a more efficient market index benchmark. In this paper we aim to use such an approach and focus on the Croatian capital market. We apply the statistical shrinkage method suggested by Ledoit and Wolf (2004) to estimate the covariance matrix and follow the work of Amenc et al. (2011) to obtain estimates of expected returns that rely on the risk-return trade-off. Empirical findings for the proposed portfolio optimization include out-of-sample and robustness testing. This way we compare the performance of the capital-weighted benchmark to the alternative and ensure that consistency is achieved in different volatility environments. Research findings do not seem to support relevant research results for the developed markets but rather complement earlier research (Zoričić et al., 2014).
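The shrinkage step can be sketched as a convex combination of the sample covariance with a scaled-identity target. This is a minimal illustration of the Ledoit-Wolf idea only: their paper derives a data-driven optimal shrinkage intensity δ, whereas here δ is passed in explicitly to keep the sketch short.

```python
import numpy as np

def shrink_covariance(returns, delta):
    """Shrinkage covariance estimate in the spirit of Ledoit and Wolf (2004):
    Sigma_hat = (1 - delta) * S + delta * mu * I, where S is the sample
    covariance of the return matrix (observations in rows) and
    mu = tr(S) / p is the average sample variance."""
    X = np.asarray(returns, float)
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - delta) * S + delta * mu * np.eye(p)
```

Pulling the extreme eigenvalues of S toward their average in this way yields a better-conditioned input for the minimum-variance optimization that produces the alternative benchmark.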
Manipulating decay time for efficient large-mammal density estimation: gorillas and dung height.
Kuehl, Hjalmar S; Todd, Angelique; Boesch, Christophe; Walsh, Peter D
2007-12-01
Large-mammal surveys often rely on indirect signs such as dung or nests. Sign density is usually translated into animal density using sign production and decay rates. In principle, such auxiliary variable estimates should be made in a spatially unbiased manner. However, traditional decay rate estimation methods entail following many signs from production to disappearance, which, in large study areas, requires extensive travel effort. Consequently, decay rate estimates have tended to be made instead at some convenient but unrepresentative location. In this study we evaluated how much bias might be induced by extrapolating decay rates from unrepresentative locations, how much effort would be required to implement current methods in a spatially unbiased manner, and what alternate approaches might be used to improve precision. To evaluate the extent of bias induced by unrepresentative sampling, we collected data on gorilla dung at several central African sites. Variation in gorilla dung decay rate was enormous, varying by up to an order of magnitude within and between survey zones. We then estimated what the effort-precision relationship would be for a previously suggested "retrospective" decay rate (RDR) method, if it were implemented in a spatially unbiased manner. We also evaluated precision for a marked sign count (MSC) approach that does not use a decay rate. Because they require repeat visits to remote locations, both RDR and MSC require enormous effort levels in order to gain precise density estimates. Finally, we examined an objective criterion for decay (i.e., dung height). This showed great potential for improving RDR efficiency because choosing a high threshold height for decay reduces decay time and, consequently, the number of visits that need to be made to remote areas. The ability to adjust decay time using an objective decay criterion also opens up the potential for a "prospective" decay rate (PDR) approach. Further research is necessary to evaluate
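The standing-crop conversion underlying these surveys is a one-line formula, sketched below for illustration (the names and numbers are hypothetical, not from the study). It makes explicit why bias in the decay-time estimate propagates one-for-one into the density estimate, and why a height threshold that shortens the effective decay time also shortens the monitoring effort needed to estimate it.

```python
def density_from_signs(sign_density, production_rate, decay_time):
    """Standing-crop sign-count conversion: animal density equals sign
    density divided by (signs produced per animal per day * mean days for
    a sign to decay past the chosen criterion)."""
    return sign_density / (production_rate * decay_time)
```
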
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis.
Markiewicz, P J; Thielemans, K; Schott, J M; Atkinson, D; Arridge, S R; Hutton, B F; Ourselin, S
2016-07-07
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of (18)F-florbetapir using the Siemens Biograph mMR scanner.
DEFF Research Database (Denmark)
Henningsen, Arne; Fabricius, Ole; Olsen, Jakob Vesterlund
2014-01-01
Based on a theoretical microeconomic model, we econometrically estimate investment utilization, adjustment costs, and technical efficiency in Danish pig farms based on a large unbalanced panel dataset. As our theoretical model indicates that adjustment costs are caused both by increased inputs...... and by reduced outputs, we estimate hyperbolic distance functions that account for reduced technical efficiency both in terms of increased inputs and reduced outputs. We estimate these hyperbolic distance functions as “efficiency effect frontiers” with the Translog functional form and a dynamic specification...... of investment activities by the maximum likelihood method so that we can estimate the adjustment costs that occur in the year of the investment and the three following years. Our results show that investments are associated with significant adjustment costs, especially in the year in which the investment...
Directory of Open Access Journals (Sweden)
Müller Kai F
2005-10-01
Full Text Available Abstract Background For parsimony analyses, the most common ways to estimate confidence are resampling plans (nonparametric bootstrap, jackknife) and Bremer support (decay indices). The recent literature reveals that parameter settings that are quite commonly employed are not those that are recommended by theoretical considerations and by previous empirical studies. The optimal search strategy to be applied during resampling was previously addressed solely via standard search strategies available in PAUP*. The question of a compromise between search extensiveness and improved support accuracy for Bremer support received even less attention. A set of experiments was conducted on different datasets to find an empirical cut-off point at which increased search extensiveness no longer significantly changes Bremer support and jackknife or bootstrap proportions. Results For the number of replicates needed for accurate estimates of support in resampling plans, a diagram is provided that helps to address the question whether apparently different support values really differ significantly. It is shown that the use of random addition cycles and parsimony ratchet iterations during bootstrapping does not translate into higher support, nor does any extension of the search extensiveness beyond the rather moderate effort of TBR (tree bisection and reconnection) branch swapping plus saving one tree per replicate. Instead, in the case of very large matrices, saving more than one shortest tree per iteration and using a strict consensus tree of these yields decreased support compared to saving only one tree. This can be interpreted as a small risk of overestimating support but should be more than compensated by other factors that counteract an enhanced type I error. With regard to Bremer support, a rule of thumb can be derived stating that not much is gained relative to the surplus computational effort when searches are extended beyond 20 ratchet iterations per
Lemaire, Patrick; Brun, Fleur
2014-07-01
The present study investigates how children's better strategy selection and strategy execution on a given problem are influenced by which strategy was used on the immediately preceding problem and by the duration between their answer to the previous problem and current problem display. These goals are pursued in the context of an arithmetic problem solving task. Third and fifth graders were asked to select the better strategy to find estimates to two-digit addition problems like 36 + 78. On each problem, children could choose rounding-down (i.e., rounding both operands down to the closest smaller decades, like doing 40 + 60 to solve 42 + 67) or rounding-up strategies (i.e., rounding both operands up to the closest larger decades, like doing 50 + 70 to solve 42 + 67). Children were tested under a short RSI condition (i.e., the next problem was displayed 900 ms after participants' answer) or under a long RSI condition (i.e., the next problem was displayed 1,900 ms after participants' answer). Results showed that both strategy selection (e.g., children selected the better strategy more often under long RSI condition and after selecting the poorer strategy on the immediately preceding problem) and strategy execution (e.g., children executed strategy more efficiently under long RSI condition and were slower when switching strategy over two consecutive problems) were influenced by RSI and which strategy was used on the immediately preceding problem. Moreover, data showed age-related changes in effects of RSI and strategy sequence on mean percent better strategy selection and on strategy performance. The present findings have important theoretical and empirical implications for our understanding of general and specific processes involved in strategy selection, strategy execution, and strategic development.
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better for predicting the spatial distribution of an invasive species. For the latter, our approach does as well as the classical models for predicting the severity of a root disease, as the probit link is shown to be appropriate. Though this article is written for binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs such as Poisson and Gamma SGLMMs are provided. © 2015, The International Biometric Society.
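The robustness property of the robit link is easy to see from its inverse-link (mean) function: the success probability is a Student-t CDF of the linear predictor, which recovers the probit as the degrees of freedom grow. A minimal sketch follows; the default df value here is an arbitrary illustration, not the article's estimate.

```python
import numpy as np
from scipy.stats import t as student_t, norm

def robit_mean(eta, df=7):
    """Inverse robit link: P(y=1) = T_df(eta), the Student-t CDF of the
    linear predictor eta. Heavier tails than the normal CDF mean fitted
    probabilities react less sharply to extreme linear predictors, which
    is the robustness against outlying observations exploited above."""
    return student_t.cdf(np.asarray(eta, float), df)
```
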
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
Optimal prediction intervals of wind power generation
DEFF Research Database (Denmark)
Wan, Can; Wu, Zhao; Pinson, Pierre
2014-01-01
Accurate and reliable wind power forecasting is essential to power system operation. Given significant uncertainties involved in wind generation, probabilistic interval forecasting provides a unique solution to estimate and quantify the potential impacts and risks facing system operation with wind...... penetration beforehand. This paper proposes a novel hybrid intelligent algorithm approach to directly formulate optimal prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization. Prediction intervals with associated confidence levels are generated through...... conducted. Compared with the applied benchmarks, experimental results demonstrate the high efficiency and reliability of the developed approach. It is therefore concluded that the proposed method provides a new generalized framework for probabilistic wind power forecasting with high reliability and flexibility...
Directory of Open Access Journals (Sweden)
Kazuki Maruta
2016-07-01
Full Text Available Drastic improvements in transmission rate and system capacity are required towards 5th generation mobile communications (5G). One promising approach, utilizing the millimeter wave band for its rich spectrum resources, suffers area coverage shortfalls due to its large propagation loss. Fortunately, massive multiple-input multiple-output (MIMO) can offset this shortfall as well as offer high-order spatial multiplexing gain. Multiuser MIMO is also effective in further enhancing system capacity by multiplexing spatially de-correlated users. However, the transmission performance of multiuser MIMO is strongly degraded by channel time variation, which causes inter-user interference since null steering must be performed at the transmitter. This paper first addresses the effectiveness of multiuser massive MIMO transmission that exploits the first eigenmode for each user. In Line-of-Sight (LoS) dominant channel environments, the first eigenmode is chiefly formed by the LoS component, which is highly correlated with user movement. Therefore, the first eigenmode provided by a large antenna array can improve the robustness against channel time variation. In addition, we propose a simplified beamforming scheme based on highly efficient channel state information (CSI) estimation that extracts the LoS component. We also show that this approximate beamforming can achieve throughput performance comparable to that of rigorous first-eigenmode transmission. Our proposed multiuser massive MIMO scheme can open the door for practical millimeter wave communication with enhanced system capacity.
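First-eigenmode transmission for a single user can be sketched via the SVD of the channel matrix: the leading right singular vector is the transmit precoder, the leading left singular vector the receive combiner, and the top singular value the eigenmode gain. This generic sketch is not the paper's simplified LoS-based scheme, which approximates the same vector from the extracted LoS component instead of computing a full SVD.

```python
import numpy as np

def first_eigenmode(H):
    """First-eigenmode weights for one user's channel matrix H (n_rx x n_tx):
    returns (transmit precoder, receive combiner, eigenmode gain)."""
    U, s, Vh = np.linalg.svd(H)
    tx = Vh[0].conj()   # leading right singular vector (unit norm)
    rx = U[:, 0]        # leading left singular vector
    return tx, rx, s[0]
```

With a large transmit array, this beam is dominated by the slowly varying LoS direction, which is why it tolerates channel aging better than null-steering precoders.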
An Efficient Method for Estimating the Hydrodynamic Radius of Disordered Protein Conformations.
Nygaard, Mads; Kragelund, Birthe B; Papaleo, Elena; Lindorff-Larsen, Kresten
2017-08-08
Intrinsically disordered proteins play important roles throughout biology, yet our understanding of the relationship between their sequences, structural properties, and functions remains incomplete. The dynamic nature of these proteins, however, makes them difficult to characterize structurally. Many disordered proteins can attain both compact and expanded conformations, and the level of expansion may be regulated and important for function. Experimentally, the level of compaction and shape is often determined either by small-angle x-ray scattering experiments or pulsed-field-gradient NMR diffusion measurements, which provide ensemble-averaged estimates of the radius of gyration and hydrodynamic radius, respectively. Often, these experiments are interpreted using molecular simulations or are used to validate them. We here provide, to our knowledge, a new and efficient method to calculate the hydrodynamic radius of a disordered protein chain from a model of its structural ensemble. In particular, starting from basic concepts in polymer physics, we derive a relationship between the radius of gyration of a structure and its hydrodynamic radius, which in turn can be used, for example, to compare a simulated ensemble of conformations to NMR diffusion measurements. The relationship may also be valuable when using NMR diffusion measurements to restrain molecular simulations. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
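The paper derives its own Rg-to-Rh relationship from polymer physics; as an illustrative stand-in, the classical Kirkwood approximation computes a hydrodynamic radius from the same kind of conformational input, 1/Rh = (1/N²) Σ_{i≠j} 1/r_ij. The random-walk chain and all parameters below are assumptions for the demo, not the authors' model:

```python
import math
import random

def radius_of_gyration(coords):
    n = len(coords)
    cx = sum(p[0] for p in coords) / n
    cy = sum(p[1] for p in coords) / n
    cz = sum(p[2] for p in coords) / n
    return math.sqrt(sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
                         for p in coords) / n)

def kirkwood_rh(coords):
    """Kirkwood approximation: 1/Rh = (1/N^2) * sum over i != j of 1/r_ij."""
    n = len(coords)
    inv_sum = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = coords[i][0] - coords[j][0]
                dy = coords[i][1] - coords[j][1]
                dz = coords[i][2] - coords[j][2]
                inv_sum += 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    return n * n / inv_sum

# Random-walk "disordered chain" with unit bonds, uniform on the sphere.
random.seed(1)
chain = [(0.0, 0.0, 0.0)]
for _ in range(199):
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    x0, y0, z0 = chain[-1]
    chain.append((x0 + s * math.cos(phi), y0 + s * math.sin(phi), z0 + z))

rg, rh = radius_of_gyration(chain), kirkwood_rh(chain)
```

Averaging such per-conformation values over a simulated ensemble is what gets compared against the ensemble-averaged SAXS Rg and NMR-diffusion Rh mentioned in the abstract.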
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Directory of Open Access Journals (Sweden)
Marcelo E. Cassiolato
2002-06-01
Full Text Available Water-soluble plant organic compounds have been proposed as efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts to neutralize soil acidity. Plant samples were dried at 65 ºC for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 mL of deionized water, shake for 8 h at 175 rpm, and filter. Three laboratory methods were evaluated: the sum Σ(Ca + Mg + K) of the plant extracts; the electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution from pH 3 to 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to the soil reaction. Increasing Σ(Ca + Mg + K), electrical conductivity, and the volume of NaOH solution needed to neutralize H+ ions of the plant extracts were correlated with the effect of the plant extract on increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts to neutralize soil acidity because it is easily adapted to routine analysis and uses simple instrumentation and materials.
DEFF Research Database (Denmark)
Asayama, Kei; Thijs, Lutgarde; Li, Yan; Gu, Yu-Mei; Hara, Azusa; Liu, Yan-Ping; Zhang, Zhenyu; Wei, Fang-Fei; Lujambio, Inés; Mena, Luis J; Boggia, José; Hansen, Tine W; Björklund-Bodegård, Kristina; Nomura, Kyoko; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Stolarz-Skrzypek, Katarzyna; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Luzardo, Leonella; Kawecka-Jaszcz, Kalina; Sandoya, Edgardo; Filipovský, Jan; Maestre, Gladys E; Wang, Jiguang; Imai, Yutaka; Franklin, Stanley S; O'Brien, Eoin; Staessen, Jan A
2014-11-01
Outcome-driven recommendations about time intervals during which ambulatory blood pressure should be measured to diagnose white-coat or masked hypertension are lacking. We cross-classified 8237 untreated participants (mean age, 50.7 years; 48.4% women) enrolled in 12 population studies, using ≥140/≥90, ≥130/≥80, ≥135/≥85, and ≥120/≥70 mm Hg as hypertension thresholds for conventional, 24-hour, daytime, and nighttime blood pressure. White-coat hypertension was hypertension on conventional measurement with ambulatory normotension, the opposite condition being masked hypertension. Intervals used for classification of participants were daytime, nighttime, and 24 hours, first considered separately, and next combined as 24 hours plus daytime or plus nighttime, or plus both. Depending on the time intervals chosen, white-coat and masked hypertension frequencies ranged from 6.3% to 12.5% and from 9.7% to 19.6%, respectively. During 91 046 person-years, 729 participants experienced a cardiovascular event. In multivariable analyses with normotension during all intervals of the day as reference, hazard ratios associated with white-coat hypertension progressively weakened considering daytime only (1.38; P=0.033), nighttime only (1.43; P=0.0074), 24 hours only (1.21; P=0.20), 24 hours plus daytime (1.24; P=0.18), 24 hours plus nighttime (1.15; P=0.39), and 24 hours plus daytime and nighttime (1.16; P=0.41). The hazard ratios comparing masked hypertension with normotension were all significant. In conclusion, diagnosis of white-coat hypertension requires setting thresholds simultaneously for 24-hour, daytime, and nighttime blood pressure. Although any time interval suffices to diagnose masked hypertension, as proposed in current guidelines, full 24-hour recordings remain standard in clinical practice. © 2014 American Heart Association, Inc.
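The cross-classification rule can be sketched directly from the thresholds stated in the abstract; the function below is an illustrative reading of it (the function name and the interval-combination convention, "ambulatory hypertension if any chosen interval exceeds its threshold", are my own assumptions):

```python
def classify_bp(conv, ambulatory):
    """Cross-classify office vs. ambulatory blood pressure.

    conv:       (systolic, diastolic) conventional (office) reading.
    ambulatory: dict mapping interval name -> (systolic, diastolic).
    Thresholds (mm Hg) from the study: conventional >=140/>=90,
    24h >=130/>=80, daytime >=135/>=85, nighttime >=120/>=70.
    """
    TH = {"conventional": (140, 90), "24h": (130, 80),
          "daytime": (135, 85), "nighttime": (120, 70)}

    def high(reading, key):
        s, d = reading
        ts, td = TH[key]
        return s >= ts or d >= td

    office_high = high(conv, "conventional")
    amb_high = any(high(v, k) for k, v in ambulatory.items())
    if office_high and not amb_high:
        return "white-coat hypertension"
    if not office_high and amb_high:
        return "masked hypertension"
    return "sustained hypertension" if office_high else "normotension"

# Elevated office reading, normal ambulatory values on all three intervals:
label = classify_bp((148, 92), {"24h": (124, 76), "daytime": (128, 80),
                                "nighttime": (112, 64)})
```

The study's finding is precisely that which intervals you include in `ambulatory` changes the white-coat/masked frequencies and their prognostic value.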
Rosati, A; Dejong, T M
2003-06-01
It has been theorized that photosynthetic radiation use efficiency (PhRUE) over the course of a day is constant for leaves throughout a canopy if leaf nitrogen content and photosynthetic properties are adapted to local light so that canopy photosynthesis over a day is optimized. To test this hypothesis, 'daily' photosynthesis of individual leaves of Solanum melongena plants was calculated from instantaneous rates of photosynthesis integrated over the daylight hours. Instantaneous photosynthesis was estimated from the photosynthetic responses to photosynthetically active radiation (PAR) and from the incident PAR measured on individual leaves during clear and overcast days. Plants were grown with either abundant or scarce N fertilization. Both net and gross daily photosynthesis of leaves were linearly related to daily incident PAR exposure of individual leaves, which implies constant PhRUE over a day throughout the canopy. The slope of these relationships (i.e. PhRUE) increased with N fertilization. When the relationship was calculated for hourly instead of daily periods, the regressions were curvilinear, implying that PhRUE changed with time of the day and incident radiation. Thus, linearity (i.e. constant PhRUE) was achieved only when data were integrated over the entire day. Using average PAR in place of instantaneous incident PAR increased the slope of the relationship between daily photosynthesis and incident PAR of individual leaves, and the regression became curvilinear. The slope of the relationship between daily gross photosynthesis and incident PAR of individual leaves increased for an overcast compared with a clear day, but the slope remained constant for net photosynthesis. This suggests that net PhRUE of all leaves (and thus of the whole canopy) may be constant when integrated over a day, not only when the incident PAR changes with depth in the canopy, but also when it varies on the same leaf owing to changes in daily incident PAR above the canopy. The
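The "daily photosynthesis" computation described, instantaneous rates integrated over the daylight hours, can be sketched as follows. The rectangular-hyperbola light response and all numeric constants are assumptions for illustration, not the study's fitted values:

```python
import math

def daily_photosynthesis(par_fraction, pmax=30.0, k=500.0, hours=14):
    """Integrate instantaneous photosynthesis over one day.

    Light response: P(I) = pmax * I / (I + k)   (rectangular hyperbola).
    Above-canopy PAR follows a half-sine over the photoperiod, peaking at
    2000 umol m-2 s-1; a leaf receives the fraction `par_fraction` of it.
    Returns (daily photosynthesis, daily incident PAR), arbitrary units.
    """
    steps = 200
    p_total = par_total = 0.0
    for s in range(steps):
        t = (s + 0.5) / steps                       # fraction of photoperiod
        I = 2000.0 * math.sin(math.pi * t) * par_fraction
        p_total += pmax * I / (I + k)               # instantaneous rate
        par_total += I
    dt = hours * 3600.0 / steps                     # seconds per step
    return p_total * dt, par_total * dt

# Leaves deeper in the canopy receive a smaller fraction of incident PAR.
results = [daily_photosynthesis(f) for f in (0.1, 0.3, 0.6, 1.0)]
```

Note that with a saturating light response the instantaneous efficiency P/I falls as light rises, which is why the hourly relationship is curvilinear; the study's point is that the daily totals across a real canopy nevertheless fall on a straight line (constant PhRUE). In this toy model the daily ratio still declines with light, since it omits the nitrogen/light acclimation that produces the observed linearity.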
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
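A minimal sketch of the idea: Levenberg-Marquardt iterations driven by the Poisson MLE objective Σ(m − n ln m) instead of least squares, here fitting a single-exponential decay as in lifetime imaging. The Gauss-Newton weighting n/m² is one common choice for the approximate Hessian; the paper's exact formulation may differ in detail:

```python
import math

def model(t, amp, rate):
    return amp * math.exp(-rate * t)

def nll(ts, counts, amp, rate):
    # Poisson negative log-likelihood up to a constant: sum(m - n ln m)
    return sum(model(t, amp, rate) - n * math.log(model(t, amp, rate))
               for t, n in zip(ts, counts))

def lm_poisson(ts, counts, amp, rate, iters=200):
    lam, f = 1e-3, nll(ts, counts, amp, rate)
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for t, n in zip(ts, counts):
            m = model(t, amp, rate)
            ja, jr = m / amp, -t * m        # dm/damp, dm/drate
            res = 1.0 - n / m               # d(NLL)/dm
            g0 += res * ja; g1 += res * jr
            w = n / (m * m)                 # MLE weight (vs. 1/sigma^2 in LSQ)
            h00 += w * ja * ja; h01 += w * ja * jr; h11 += w * jr * jr
        a00, a11 = h00 * (1 + lam), h11 * (1 + lam)   # L-M damped diagonal
        det = a00 * a11 - h01 * h01
        da = (-g0 * a11 + g1 * h01) / det
        dr = (g0 * h01 - g1 * a00) / det
        if amp + da > 0 and rate + dr > 0:
            f_new = nll(ts, counts, amp + da, rate + dr)
            if f_new < f:                   # accept step, relax damping
                amp, rate, f = amp + da, rate + dr, f_new
                lam = max(lam / 10, 1e-12)
                continue
        lam *= 10                           # reject step, increase damping
    return amp, rate

ts = list(range(20))
counts = [model(t, 100.0, 0.3) for t in ts]   # noise-free data for the demo
amp, rate = lm_poisson(ts, counts, 80.0, 0.5)
```

With real Poisson-noisy counts this estimator avoids the well-documented bias that least-squares fitting introduces for low-count histograms.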
Estimating Forward Pricing Function: How Efficient is Indian Stock Index Futures Market?
Prasad Bhattacharaya; Harminder Singh
2006-01-01
This paper uses Indian stock futures data to explore unbiased expectations and efficient market hypothesis. Having experienced voluminous transactions within a short time span after its establishment, the Indian stock futures market provides an unparalleled case for exploring these issues involving expectation and efficiency. Besides analyzing market efficiency between cash and futures prices using cointegration and error correction frameworks, the efficiency hypothesis is also investigated a...
Context-Aware Hierarchy k-Depth Estimation and Energy-Efficient Clustering in Ad-hoc Network
Mun, Chang-Min; Kim, Young-Hwan; Lee, Kang-Whan
Ad-hoc networks need efficient node management because the wireless network has energy constraints. Previously proposed hierarchical routing protocols reduce energy consumption and prolong network lifetime. However, conventional works are deficient regarding the energy-efficient depth of the cluster hierarchy and its associated overhead. In this paper, we propose CACHE (Context-aware Clustering Hierarchy and Energy-Efficient), a novel top-down clustered hierarchy method. The proposed analysis can estimate the optimum k-depth of the hierarchy architecture in clustering protocols.
The efficiency of modified jackknife and ridge type regression estimators: a comparison
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2008-09-01
Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well-known estimation procedures are often suggested in the literature: Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated the standard properties of this new estimator. From a simulation study, we find that the new estimator is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
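The variance/bias trade between ordinary and jackknifed ridge can be seen numerically. The sketch below uses the commonly cited forms β_R = (X'X + kI)⁻¹X'y and β_JR = [I − k²(X'X + kI)⁻²]β_OLS attributed to Singh et al.; the data, the ridge constant k, and the JRR form itself are assumptions to be checked against the original papers:

```python
def mat_inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul2(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def mat_vec2(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Design with strong collinearity between the two predictors.
X = [[1.0, 0.99], [0.99, 1.0], [1.0, 1.01], [1.02, 1.0], [0.98, 0.97]]
beta_true = [2.0, -1.0]
y = [sum(x[j] * beta_true[j] for j in range(2)) for x in X]  # noise-free

XtX = [[sum(x[i] * x[j] for x in X) for j in range(2)] for i in range(2)]
Xty = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(2)]

k = 0.1
A = [[XtX[0][0] + k, XtX[0][1]], [XtX[1][0], XtX[1][1] + k]]
A_inv = mat_inv2(A)
beta_ols = mat_vec2(mat_inv2(XtX), Xty)         # unbiased, high variance
beta_ridge = mat_vec2(A_inv, Xty)               # GRR-style: shrunk, biased
# Jackknifed ridge: beta_JR = [I - k^2 A^{-2}] beta_OLS  (reduced bias)
A_inv_sq = mat_mul2(A_inv, A_inv)
shrink = [[(1.0 if i == j else 0.0) - k * k * A_inv_sq[i][j]
           for j in range(2)] for i in range(2)]
beta_jrr = mat_vec2(shrink, beta_ols)
```

In the eigenbasis of X'X the ridge factor is λ/(λ+k) while the jackknifed factor is 1 − k²/(λ+k)², which always shrinks less, so JRR sits closer to OLS (less bias) at the cost of less variance reduction; the MJR estimator of the paper interpolates between these behaviors.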
Quantitative shape analysis with weighted covariance estimates for increased statistical efficiency.
Ragheb, Hossein; Thacker, Neil A; Bromiley, Paul A; Tautz, Diethard; Schunke, Anja C
2013-04-02
The introduction and statistical formalisation of landmark-based methods for analysing biological shape has made a major impact on comparative morphometric analyses. However, a satisfactory solution for including information from 2D/3D shapes represented by 'semi-landmarks' alongside well-defined landmarks into the analyses is still missing. Also, there has not been an integration of a statistical treatment of measurement error in the current approaches. We propose a procedure based upon the description of landmarks with measurement covariance, which extends statistical linear modelling processes to semi-landmarks for further analysis. Our formulation is based upon a self consistent approach to the construction of likelihood-based parameter estimation and includes corrections for parameter bias, induced by the degrees of freedom within the linear model. The method has been implemented and tested on measurements from 2D fly wing, 2D mouse mandible and 3D mouse skull data. We use these data to explore possible advantages and disadvantages over the use of standard Procrustes/PCA analysis via a combination of Monte-Carlo studies and quantitative statistical tests. In the process we show how appropriate weighting provides not only greater stability but also more efficient use of the available landmark data. The set of new landmarks generated in our procedure ('ghost points') can then be used in any further downstream statistical analysis. Our approach provides a consistent way of including different forms of landmarks into an analysis and reduces instabilities due to poorly defined points. Our results suggest that the method has the potential to be utilised for the analysis of 2D/3D data, and in particular, for the inclusion of information from surfaces represented by multiple landmark points.
Automatic Error Analysis Using Intervals
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
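A minimal interval type shows the idea: each arithmetic operation propagates worst-case bounds, so a formula evaluated on intervals encloses the true range of outputs, often conservatively because of the dependency problem. INTLAB provides an industrial-strength, rounding-aware version of this; the class below is an illustrative sketch only:

```python
class Interval:
    """Closed interval [lo, hi] with arithmetic that propagates bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def width(self):
        return self.hi - self.lo

# A measured value 2.0 +/- 0.1 propagated through f(x) = x * (3 - x):
x = Interval(1.9, 2.1)
three = Interval(3.0, 3.0)
fx = x * (three - x)
```

The enclosure [1.71, 2.31] is wider than the true range [1.89, 2.09] because `x` appears twice in the formula; even so, the bound is obtained automatically, with none of the partial-derivative bookkeeping standard error propagation requires.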
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Callot, Laurent
We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...
Chiu, Jill M Y; Degger, Natalie; Leung, Jonathan Y S; Po, Beverly H K; Zheng, Gene J; Richardson, Bruce J; Lau, T C; Wu, Rudolf S S
2016-11-15
The wide occurrence of endocrine disrupting chemicals (EDCs) and heavy metals in coastal waters has drawn global concern, and thus their removal efficiencies in sewage treatment processes should be estimated. However, low concentrations coupled with high temporal fluctuations of these pollutants present a monitoring challenge. Using semi-permeable membrane devices (SPMDs) and Artificial Mussels (AMs), this study investigates a novel approach to evaluating the removal efficiency of five EDCs and six heavy metals in primary treatment, secondary treatment and chemically enhanced primary treatment (CEPT) processes. In general, the small difference between maximum and minimum values of individual EDCs and heavy metals measured from influents/effluents of the same sewage treatment plant suggests that passive sampling devices can smooth and integrate temporal fluctuations, and therefore have the potential to serve as cost-effective monitoring devices for the estimation of the removal efficiencies of EDCs and heavy metals in sewage treatment works. Copyright © 2016 Elsevier Ltd. All rights reserved.
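The removal efficiency itself is a simple ratio of time-integrated influent and effluent concentrations inferred from the paired passive samplers; a sketch with hypothetical readings (all analyte names and numbers below are invented for illustration):

```python
def removal_efficiency(influent, effluent):
    """Percent removal across a treatment stage, per pollutant.

    Concentrations are time-integrated values from passive samplers
    (e.g., SPMDs for EDCs, artificial mussels for metals), which smooths
    the temporal fluctuations that defeat grab sampling.
    """
    return {p: 100.0 * (influent[p] - effluent[p]) / influent[p]
            for p in influent}

# Hypothetical time-averaged concentrations (ng/L) from paired samplers:
influent = {"estrone": 25.0, "bisphenol A": 120.0, "Cd": 0.8}
effluent = {"estrone": 10.0, "bisphenol A": 30.0, "Cd": 0.6}
eff = removal_efficiency(influent, effluent)
```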
Energy Technology Data Exchange (ETDEWEB)
Cizelj, L.
1994-10-01
In this report, an original probabilistic model is proposed, aimed at assessing the efficiency of a particular maintenance strategy in terms of tube failure probability. The model concentrates on axial through-wall cracks in the residual-stress-dominated tube expansion transition zone. It is based on recent developments in probabilistic fracture mechanics and accounts for scatter in material, geometry, and crack propagation data. Special attention has been paid to modelling the uncertainties connected to the non-destructive examination technique (e.g., measurement errors, non-detection probability). First- and second-order reliability methods (FORM and SORM) have been implemented to calculate the failure probabilities. This is the first time that these methods have been applied to the reliability analysis of components containing stress-corrosion cracks. In order to predict the time development of the tube failure probabilities, an original crack propagation model based on linear elastic fracture mechanics has been developed. It accounts for residual and operating stresses together, as well as for scatter in residual and operational stresses due to random variations in tube geometry and material data. Due to the lack of reliable crack velocity vs. load data, non-destructive examination records of crack propagation have been employed to estimate the velocities at the crack tips.
Cost Efficiency Estimates for a Sample of Crop and Beef Farms
Langemeier, Michael R.; Jones, Rodney D.
2005-01-01
This paper examines the impact of specialization on the cost efficiency of a sample of crop and beef farms in Kansas. The economic total expense ratio was used to measure cost efficiency. The relationship between the economic total expense ratio and specialization was not significant.
To Estimation of Efficient Usage of Organic Fuel in the Cycle of Steam Power Installations
Directory of Open Access Journals (Sweden)
A. Nesenchuk
2013-01-01
Full Text Available Trends in the development of power engineering around the world are outlined. A thermodynamic analysis of the efficient use of different types of fuel was carried out. The results show that, from a thermodynamic point of view, low-calorie fuel is more efficient to use at steam power stations than high-energy fuel.
An Integrated Approach for Estimating the Energy Efficiency of Seventeen Countries
Directory of Open Access Journals (Sweden)
Chia-Nan Wang
2017-10-01
Full Text Available Increased energy efficiency is one of the most effective ways to achieve climate change mitigation. This study aims to evaluate the energy efficiency of seventeen countries. The evaluation is based on an integrated method that combines the super slack-based measure (super SBM) model and the Malmquist productivity index (MPI) to investigate the energy efficiency of seventeen countries during the period 2010-2015. The results show that the United States, Colombia, Japan, China, and Saudi Arabia perform best in energy efficiency, whereas Brazil, Russia, Indonesia, and India perform worst during the sample period. The energy efficiency gains of these countries arose mainly from technological improvement. The study provides suggestions for the seventeen countries' governments to control energy consumption and contribute to environmental protection.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
and feature detection is clearly biased, the estimator is strictly unbiased. The proportionator is compared to the commonly applied sampling technique (systematic uniform random sampling in 2D space or so-called meander sampling) using three biological examples: estimating total number of granule cells in rat...
Robust and Efficient Adaptive Estimation of Binary-Choice Regression Models
Cizek, P.
2007-01-01
The binary-choice regression models such as probit and logit are used to describe the effect of explanatory variables on a binary response variable. Typically estimated by the maximum likelihood method, estimates are very sensitive to deviations from a model, such as heteroscedasticity and data
The efficient and unbiased estimation of nuclear size variability using the 'selector'
DEFF Research Database (Denmark)
McMillan, A M; Sørensen, Flemming Brandt
1992-01-01
The selector was used to make an unbiased estimation of nuclear size variability in one benign naevocellular skin tumour and one cutaneous malignant melanoma. The results showed that the estimates obtained using the selector were comparable to those obtained using the more time consuming Cavalieri...
Westine, Carl D.
2016-01-01
Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…
Validation of an efficient visual method for estimating leaf area index ...
African Journals Online (AJOL)
This study aimed to evaluate the accuracy and applicability of a visual method for estimating LAI in clonal Eucalyptus grandis × E. urophylla plantations and to compare it with hemispherical photography, ceptometer and LAI-2000® estimates. Destructive sampling for direct determination of the actual LAI was performed in ...
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using Blind Source Separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do so, the score function must be estimated from samples of the observed signals (a combination of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on Gaussian-mixture-based kernel density estimation. Experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, indicate that it achieves better performance and less processing time.
An Efficient Operator for the Change Point Estimation in Partial Spline Model.
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-05-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we showed that the minimum operator causes large MSE of change point estimates. In this paper, we propose the summation operator in terms of a smoothing parameter, and our simulation study shows that the summation operator gives smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy.
Updated estimation of energy efficiencies of U.S. petroleum refineries.
Energy Technology Data Exchange (ETDEWEB)
Palou-Rivera, I.; Wang, M. Q. (Energy Systems)
2010-12-08
Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels including gasoline and diesel. Energy use in petroleum refineries is a key component of well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
Carroll, Raymond
2009-04-23
We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.
Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources
Directory of Open Access Journals (Sweden)
Dayan A. Guimarães
2014-10-01
Full Text Available We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results unveil that the so-called norm-based improved MDL (iMDL) algorithm can achieve performances that are better than those achieved by the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimator in some situations.
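The classical Wax-Kailath MDL rule that the proposed algorithm takes as its input can be sketched as follows; the eigenvalues and snapshot count below are synthetic, and the improved norm-based step of the paper is not reproduced here:

```python
import math

def mdl_num_sources(eigvals, n_snapshots):
    """Wax-Kailath MDL criterion: the estimated number of sources is the
    k minimizing  -N(p-k) ln(g_k / a_k) + 0.5 k (2p - k) ln N,
    where g_k and a_k are the geometric and arithmetic means of the
    p - k smallest eigenvalues of the sample covariance matrix.
    """
    p = len(eigvals)
    lams = sorted(eigvals, reverse=True)
    best_k, best_score = 0, float("inf")
    for k in range(p):
        tail = lams[k:]                       # the p - k smallest eigenvalues
        a = sum(tail) / len(tail)             # arithmetic mean
        g = math.exp(sum(math.log(x) for x in tail) / len(tail))  # geometric
        score = (-n_snapshots * (p - k) * math.log(g / a)
                 + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Two strong sources over unit-power noise, six sensors, 1000 snapshots:
k_hat = mdl_num_sources([10.0, 5.0, 1.0, 1.0, 1.0, 1.0], 1000)
```

The first term rewards tails whose eigenvalues are nearly equal (pure noise makes g ≈ a), while the second penalizes model complexity; the iMDL refinement of the paper post-processes exactly these eigenvalues to sharpen the discrimination.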
National Research Council Canada - National Science Library
И. М. Шаповалова
2014-01-01
... the financial mechanism of state regulation of socio-economic development is very important, as the efficiency of the system's functioning is a basis for making administrative decisions directed...
Estimation of the economic efficiency of eliminating train speed restrictions
Directory of Open Access Journals (Sweden)
S.Y. Baydak
2012-08-01
Full Text Available A technique is presented that allows one to obtain, at the level of engineering calculations, preliminary estimates of the economic efficiency gained by eliminating train speed restrictions.
On the Estimation Stability of Efficiency and Economies of Scale in Microfinance Institutions
Bolli Thomas; Vo Thi Anh
2012-01-01
This paper uses a panel data set of microfinance institutions (MFI) across the world to compare parametric and non-parametric identification strategies of cost efficiency and economies of scale. The results suggest that efficiency rankings of MFIs are robust across methodologies but reveal substantial unobserved heterogeneity across countries. We further find substantial economies of scale for a pure financial production process. However, accounting for the multi-dimensional production process...
Li, Yunji; Li, Peng; Chen, Wen
2017-09-01
An energy-efficient data transmission scheme for remote state estimation is proposed and experimentally evaluated in this paper. The new transmission strategy is derived by proving an upper bound on the system performance. Stability of the remote estimator is proved under the condition that some of the observation measurements are lost at random with a given probability. An experimental platform of two coupled water tanks with a wireless sensor node is established to evaluate and verify the proposed transmission scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Bias and Efficiency Tradeoffs in the Selection of Storm Suites Used to Estimate Flood Risk
Directory of Open Access Journals (Sweden)
Jordan R. Fischbach
2016-02-01
Full Text Available Modern joint probability methods for estimating storm surge or flood statistics are based on statistical aggregation of many hydrodynamic simulations that can be computationally expensive. Flood risk assessments that consider changing future conditions due to sea level rise or other drivers often require each storm to be run under a range of uncertain scenarios. Evaluating different flood risk mitigation measures, such as levees and floodwalls, in these future scenarios can further increase the computational cost. This study uses the Coastal Louisiana Risk Assessment model (CLARA) to examine tradeoffs between the accuracy of estimated flood depth exceedances and the number and type of storms used to produce the estimates. Inclusion of lower-intensity, higher-frequency storms significantly reduces bias relative to storm suites with a similar number of storms but only containing high-intensity, lower-frequency storms, even when estimating exceedances at very low-frequency return periods.
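The statistical-aggregation step behind such joint probability methods can be sketched as a rate-weighted exceedance count over the storm suite, with the return period as its reciprocal (variable names and numbers below are illustrative only):

```python
import numpy as np

def exceedance_rate(depths, rates, threshold):
    """Annual rate at which flood depth exceeds `threshold`, given a
    suite of simulated storms each carrying an annual occurrence rate."""
    depths = np.asarray(depths, dtype=float)
    rates = np.asarray(rates, dtype=float)
    return float(rates[depths > threshold].sum())

def return_period(depths, rates, threshold):
    """Return period (years) of the given depth threshold."""
    return 1.0 / exceedance_rate(depths, rates, threshold)
```

Dropping the high-rate, low-depth storms from `depths`/`rates` changes the summed rate, which is exactly the bias the study quantifies.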
Directory of Open Access Journals (Sweden)
Markku Renfors
2005-04-01
Full Text Available Line-of-sight signal delay estimation is a crucial element for any mobile positioning system. Estimating correctly the delay of the first arriving path is a challenging topic in severe propagation environments, such as closely spaced multipaths in a multiuser scenario. Previous studies showed that there are many linear and nonlinear techniques able to resolve closely spaced multipaths when the system is not bandlimited. However, using root raised cosine (RRC) pulse shaping introduces additional errors in the delay estimation process compared to the case with rectangular pulse shaping, due to the inherent bandwidth limitation. In this paper, we introduce a novel technique for asynchronous WCDMA multipath delay estimation based on deconvolution with a suitable pulse shape, followed by a Teager-Kaiser operator. The deconvolution stage is employed to reduce the effect of the bandlimiting pulse shape.
Directory of Open Access Journals (Sweden)
Pin-Chih Wang
2014-09-01
Full Text Available This study is intended to conduct an extended evaluation of sustainability based on the material flow analysis of resource productivity. We first present updated information on the material flow analysis (MFA) database in Taiwan. Essential indicators are selected to quantify resource productivity associated with the economy-wide MFA of Taiwan. The study also applies the IPAT (impact-population-affluence-technology) master equation to measure trends of material use efficiency in Taiwan and to compare them with those of other Asia-Pacific countries. An extended evaluation of efficiency, in comparison with selected economies by applying data envelopment analysis (DEA), is conducted accordingly. The Malmquist Productivity Index (MPI) is thereby adopted to quantify the patterns and the associated changes of efficiency. Observations and summaries can be described as follows. Based on the MFA of the Taiwanese economy, the average growth rates of domestic material input (DMI; 2.83%) and domestic material consumption (DMC; 2.13%) in the past two decades were both less than that of gross domestic product (GDP; 4.95%). The decoupling of environmental pressures from economic growth can be observed. In terms of the decomposition analysis of the IPAT equation and in comparison with 38 other economies, the material use efficiency of Taiwan did not perform as well as its economic growth. The DEA comparisons of resource productivity show that Denmark, Germany, Luxembourg, Malta, the Netherlands, the United Kingdom and Japan performed the best in 2008. Since the MPI consists of technological change (frontier-shift, or innovation) and efficiency change (catch-up), the change in efficiency (catch-up) of Taiwan has not been accomplished as expected in spite of the increase in its technological efficiency.
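The growth rates quoted above are compound annual rates, and relative decoupling is simply the condition that material consumption grows more slowly than GDP. A sketch under that reading, using the abstract's own figures (function names are ours):

```python
def cagr(first, last, years):
    """Compound annual growth rate between two endpoint values."""
    return (last / first) ** (1.0 / years) - 1.0

def relatively_decoupled(gdp_growth, dmc_growth):
    """Relative decoupling: material consumption still grows, but more
    slowly than the economy."""
    return 0 < dmc_growth < gdp_growth
```

With GDP growing at 4.95% per year and DMC at 2.13%, `relatively_decoupled(0.0495, 0.0213)` holds, matching the decoupling observation in the abstract.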
Wang, Sh.-P.; Gong, Z.-M.; Su, X.-Zh.; Liao, J.-Zh.
2017-09-01
Near infrared spectroscopy and a back propagation artificial neural network model, in conjunction with the backward interval partial least squares algorithm, were used to estimate the purchasing price of Enshi yulu young tea shoots. The near-infrared spectral regions most relevant to the tea shoot price model (5700.5-5935.8, 7613.6-7848.9, 8091.8-8327.1, 8331-8566.2, 9287.5-9522.5, and 9526.6-9761.9 cm-1) were selected using the backward interval partial least squares algorithm. The first five principal components, which explained 99.96% of the variability in the selected spectral data, were then used to calibrate the back propagation artificial neural network model of the tea shoot purchasing price. The performance of this model (coefficient of determination for prediction 0.9724; root-mean-square error of prediction 4.727) was superior to those of the back propagation artificial neural network model (coefficient of determination for prediction 0.8653, root-mean-square error of prediction 5.125) and the backward interval partial least squares model (coefficient of determination for prediction 0.5932, root-mean-square error of prediction 25.125). The purchasing price model with the combined backward interval partial least squares-back propagation artificial neural network algorithms can evaluate the price of Enshi yulu tea shoots accurately, quickly and objectively.
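The figure of merit used to rank the three calibration models, the root-mean-square error of prediction, is a one-line computation. A minimal sketch (the function name is ours):

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root-mean-square error of prediction over a validation set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

Lower RMSEP on held-out samples is what makes the combined model (4.727) preferable to the plain network (5.125) and the plain PLS model (25.125).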
Directory of Open Access Journals (Sweden)
Dina Miftahutdinova
2015-02-01
Full Text Available Purpose: to estimate the efficiency of the author's training program, applied during the preparatory period, for members of the Ukrainian women's rowing team in the course of their preparation for the Olympic Games in London. Materials and Methods: 10 highly qualified sportswomen, members of the Ukrainian rowing team, participated in the research. Standard tests and the Concept-2 rowing ergometer were used to estimate general and special physical preparedness. Results: by the end of the preparatory period, significant improvement in the general and special physical fitness of the surveyed athletes was observed, and their deviation from the model performance dropped to 5-7%. Conclusions: the results testify to the high efficiency of the author's training program for the sportswomen of the Ukrainian rowing team, who became Olympic champions in London.
Energy Technology Data Exchange (ETDEWEB)
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.
2010-04-14
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58%-0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of U.S. national energy strategy and policies, assessing the effectiveness and energy-saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) effective EM&V methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Hui, Tin-Yu J; Burt, Austin
2015-05-01
The effective population size N_e is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating N_e have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator of contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying N_e is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of N_e to several million, hence allowing the estimation of larger N_e. Finally, we demonstrate how this algorithm can cope with nonconstant N_e scenarios and be used as a likelihood-ratio test to test for the equality of N_e throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article. Copyright © 2015 by the Genetics Society of America.
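For contrast with the likelihood approach, the classic moment-based temporal estimator (a Waples-style standardized drift variance F_c) fits in a few lines. This is not the paper's method, and the function name and sample values are purely illustrative:

```python
import numpy as np

def ne_temporal_moment(p0, pt, t_gen, s0, st):
    """Moment-based temporal estimate of effective population size from
    allele-frequency arrays p0, pt sampled t_gen generations apart,
    with diploid sample sizes s0 and st at the two time points."""
    p0 = np.asarray(p0, dtype=float)
    pt = np.asarray(pt, dtype=float)
    z = (p0 + pt) / 2.0
    fc = np.mean((p0 - pt) ** 2 / (z - p0 * pt))      # standardized drift variance
    fc_adj = fc - 1.0 / (2 * s0) - 1.0 / (2 * st)     # remove sampling noise
    return t_gen / (2.0 * fc_adj)
```

More drift between the two samples yields a larger F_c and hence a smaller N_e estimate; the likelihood method above targets the same quantity with better accuracy and far larger feasible N_e.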
EFFICIENT BLOCK MATCHING ALGORITHMS FOR MOTION ESTIMATION IN H.264/AVC
Directory of Open Access Journals (Sweden)
P. Muralidhar
2015-02-01
Full Text Available In Scalable Video Coding (SVC), motion estimation and inter-layer prediction play an important role in the elimination of temporal and spatial redundancies between consecutive layers. This paper evaluates the performance of widely accepted block matching algorithms used in various video compression standards, with emphasis on the performance of the algorithms for a didactic scalable video codec. Many different implementations of Fast Motion Estimation Algorithms have been proposed to reduce motion estimation complexity. The block matching algorithms have been analyzed with emphasis on Peak Signal to Noise Ratio (PSNR) and computations using MATLAB. In addition to the above comparisons, a survey has been done on Spiral Search Motion Estimation Algorithms for Video Coding. A New Modified Spiral Search (NMSS) motion estimation algorithm has been proposed with lower computational complexity. The proposed algorithm achieves a 72% reduction in computation with a minimal (<1 dB) reduction in PSNR. A brief introduction to the entire flow of video compression in H.264/SVC is also presented in this paper.
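The baseline all fast block-matching algorithms are measured against is exhaustive full search over a sum-of-absolute-differences (SAD) cost. A minimal sketch (array sizes, names, and the test frames are illustrative):

```python
import numpy as np

def full_search(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block-matching motion estimation: for the current-frame
    block at (bx, by), find the motion vector into the reference frame
    that minimises the SAD over a +/- srange search window."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and x + bsize <= ref.shape[1] and y + bsize <= ref.shape[0]:
                cost = int(np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

Fast schemes such as three-step, diamond, or the spiral searches surveyed above visit only a subset of these candidate positions, trading a small PSNR loss for far fewer SAD evaluations.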
Application of Artificial Neural Networks for Efficient High-Resolution 2D DOA Estimation
Directory of Open Access Journals (Sweden)
M. Agatonović
2012-12-01
Full Text Available A novel method to provide high-resolution Two-Dimensional Direction of Arrival (2D DOA) estimation employing Artificial Neural Networks (ANNs) is presented in this paper. The observed space is divided into azimuth and elevation sectors. Multilayer Perceptron (MLP) neural networks are employed to detect the presence of a source in a sector, while Radial Basis Function (RBF) neural networks are utilized for DOA estimation. It is shown that a number of appropriately trained neural networks can be successfully used for high-resolution DOA estimation of narrowband sources in both azimuth and elevation. The training time of each smaller network is significantly reduced as different training sets are used for the networks in the detection and estimation stages. By avoiding the spectral search, the proposed method is suitable for real-time applications as it provides DOA estimates in a matter of seconds. At the same time, it demonstrates accuracy comparable to that of the super-resolution 2D MUSIC algorithm.
Rhee, Seung-Whee
2017-09-01
In order to separate aluminum from the base-cap of spent fluorescent lamps (SFL), the separation efficiency of a hammer crusher unit is estimated by introducing a binary separation theory. The base-cap of an SFL is composed of glass fragments, binder, ferrous metal, copper and aluminum. The hammer crusher unit to recover aluminum from the base-cap consists of 3 stages: hammer crusher, magnetic separator and vibrating screen. The optimal rotating speed and operating time in the hammer crusher unit are determined at each stage. At the optimal conditions, the aluminum yield and the separation efficiency of the hammer crusher unit are estimated by applying a sequential binary separation theory at each stage. The separation efficiency of the hammer crusher unit is also compared with that of a roll crusher system to show the performance of aluminum recovery from the base-cap of SFL. Since the separation efficiency can be increased to 99% at stage 3, the experimental results show that aluminum can be sufficiently recovered from the base-cap by the hammer crusher unit. Copyright © 2017. Published by Elsevier Ltd.
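A common way to score a binary split like the one above is the Newton separation efficiency: the fraction of the valuable component recovered to the product minus the fraction of the remaining material carried along with it. This is a generic formulation, not necessarily the exact index used in the paper:

```python
def newton_efficiency(val_recovered, val_feed, rest_recovered, rest_feed):
    """Newton separation efficiency for a binary separation stage:
    recovery of the valuable component minus misreport of the rest."""
    return val_recovered / val_feed - rest_recovered / rest_feed
```

A perfect split scores 1.0 and an unselective split (same fraction of everything reporting to the product) scores 0.0; applying the index stage by stage mirrors the sequential evaluation described in the abstract.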
Efficient focusing scheme for transverse velocity estimation using cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
The blood velocity can be estimated by cross-correlation of received RF signals, but only the velocity component along the beam direction is found. A previous paper showed that the complete velocity vector can be estimated if the received signals are focused along lines parallel to the direction of the flow. There, a weakly focused transmit field was used along with a simple delay-sum beamformer. A modified method for performing the focusing, employing a special calculation of the delays, is introduced here, so that a focused emission can be used. The velocity estimation was studied through extensive simulations with Field II. A 64-element, 5 MHz linear array was used. A parabolic velocity profile with a peak velocity of 0.5 m/s was considered for different angles between the flow and the ultrasound beam and for different transmit foci. At 60 degrees the relative standard deviation was 0.58% for a transmit...
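The core of such estimators, locating the inter-emission shift as the peak of a cross-correlation, can be sketched in a few lines; converting the shift to velocity then uses the sample interval and the pulse-repetition interval. Names here are ours, and the sketch works on integer sample lags only:

```python
import numpy as np

def xcorr_shift(s1, s2, max_lag):
    """Integer-sample shift of s2 relative to s1, located via the peak
    of their cross-correlation over +/- max_lag samples."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    lags = list(range(-max_lag, max_lag + 1))
    core = slice(max_lag, len(s1) - max_lag)     # trim edges to limit wrap effects
    cc = [np.dot(s1[core], np.roll(s2, -lag)[core]) for lag in lags]
    return lags[int(np.argmax(cc))]
```

With signals focused along the flow direction, as in the abstract, the estimated shift between consecutive emissions scales directly to velocity along that line.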
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function with spatial aliasing at hand is derived. Thanks to the estimation ambiguity of this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of standard MUSIC.
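For reference, the full-spectrum search that the proposed method compresses looks like the following for a ULA. This is a generic textbook MUSIC sketch with half-wavelength spacing assumed, not the paper's reduced-complexity variant:

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum over candidate angles for a uniform linear
    array with element spacing d (in wavelengths), given the array
    covariance matrix R."""
    m = R.shape[0]
    _, v = np.linalg.eigh(R)                 # eigenvalues ascending
    En = v[:, : m - n_sources]               # noise subspace
    k = np.arange(m)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * k * np.sin(th))        # steering vector
        spec.append(1.0 / (np.linalg.norm(En.conj().T @ a) ** 2 + 1e-12))
    return np.array(spec)
```

Peaks of the pseudospectrum mark the DOAs; the cost the paper attacks is exactly this evaluation over a dense angular grid.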
Punjani, Ali; Brubaker, Marcus A; Fleet, David J
2017-04-01
Discovering the 3D atomic-resolution structure of molecules such as proteins and viruses is one of the foremost research problems in biology and medicine. Electron Cryomicroscopy (cryo-EM) is a promising vision-based technique for structure estimation which attempts to reconstruct 3D atomic structures from a large set of 2D transmission electron microscope images. This paper presents a new Bayesian framework for cryo-EM structure estimation that builds on modern stochastic optimization techniques to allow one to scale to very large datasets. We also introduce a novel Monte-Carlo technique that reduces the cost of evaluating the objective function during optimization by over five orders of magnitude. The net result is an approach capable of estimating 3D molecular structure from large-scale datasets in about a day on a single CPU workstation.
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency-selective fading because of the increased symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of the sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative ICI estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN); instead, the effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
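The building block behind the scheme, total least squares, has a compact SVD solution; the regularized constrained variant in the paper adds constraints and regularization on top of it. A classic unconstrained sketch:

```python
import numpy as np

def tls(A, b):
    """Classic total least squares for A x ~ b: perturbations are
    allowed in both A and b, and the solution comes from the right
    singular vector of [A | b] with the smallest singular value."""
    C = np.column_stack([np.asarray(A, dtype=float), np.asarray(b, dtype=float)])
    _, _, vh = np.linalg.svd(C)
    v = vh[-1]                   # null-direction of the augmented matrix
    return -v[:-1] / v[-1]
```

Unlike ordinary least squares, this treats the data matrix itself as noisy, which matches the abstract's view of ICI/ISI as a perturbation of the channel rather than additive noise.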
Efficient spectral estimation by MUSIC and ESPRIT with application to sparse FFT
Directory of Open Access Journals (Sweden)
Daniel ePotts
2016-02-01
Full Text Available In spectral estimation, one has to determine all parameters of an exponential sum from finitely many (noisy) sampled data of this exponential sum. Frequently used methods for spectral estimation are MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques). For a trigonometric polynomial of large sparsity, we present a new sparse fast Fourier transform by shifted sampling and using MUSIC resp. ESPRIT, where the ESPRIT-based method has lower computational cost. Later this technique is extended to a new reconstruction of a multivariate trigonometric polynomial of large sparsity for given (noisy) values sampled on a reconstructing rank-1 lattice. Numerical experiments illustrate the high performance of these procedures.
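The core ESPRIT step that the sparse FFT above builds on, recovering the frequencies of an exponential sum from uniform samples via shift invariance of the signal subspace, can be sketched as follows (a generic least-squares ESPRIT; variable names are ours):

```python
import numpy as np

def esprit_freqs(x, n_freqs, m=None):
    """Estimate the frequencies (in cycles per sample) of a sum of
    complex exponentials via least-squares ESPRIT."""
    x = np.asarray(x)
    n = len(x)
    m = m or n // 2
    # data matrix whose columns are length-m windows of x
    Y = np.array([x[i:i + m] for i in range(n - m + 1)]).T
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Es = U[:, :n_freqs]                          # signal subspace
    # rotational invariance: the row-shifted subspace differs by Psi
    psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    return np.sort(np.angle(np.linalg.eigvals(psi)) / (2 * np.pi))
```

For noiseless, well-separated frequencies the recovery is exact up to numerical precision, which is what makes ESPRIT attractive as the workhorse inside the sparse transform.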
Satyavada, Harish; Baldi, S.
2018-01-01
The operating principle of condensing boilers is based on exploiting heat from flue gases to pre-heat cold water at the inlet of the boiler: by condensing into liquid form, flue gases recover their latent heat of vaporization, leading to 10–12% increased efficiency with respect to traditional
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina
2012-01-01
by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...
de Graaf, C.S.L.; Kandhai, D.; Sloot, P.M.A.
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the
C.S.L. de Graaf (Kees); B.D. Kandhai; P.M.A. Sloot
2017-01-01
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method
Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.
Simpson, William A.
In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…
Roes, A.L.; Patel, M.K.
2008-01-01
With growing concern on the consequences of climate change and the depletion of fossil fuels, the importance of energy efficiency is globally recognized. In March 2007, the European Council set two key targets to reduce adverse effects of the use of fossil fuels: 1) A reduction of at least 20% in
Dimmick, R. L.; Boyd, A.; Wolochow, H.
1975-01-01
Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.
Estimating crop yield using a satellite-based light use efficiency model
DEFF Research Database (Denmark)
Yuan, Wenping; Chen, Yang; Xia, Jiangzhou
2016-01-01
Satellite-based techniques that provide temporally and spatially continuous information over vegetated surfaces have become increasingly important in monitoring the global agriculture yield. In this study, we examine the performance of a light use efficiency model (EC-LUE) for simulating the gross...
Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations
Choudhury, Bhaskar J.; Quick, B. E.
2003-01-01
Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples the water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because the functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (defined as the ratio of the rate of carbon accumulation to transpiration) and the water use efficiency (defined as the ratio of the rate of carbon accumulation to total evaporation), and evaluating modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four-dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.
Directory of Open Access Journals (Sweden)
Ana-Maria Buga
Full Text Available BACKGROUND: Neurogenesis persists throughout life in the adult mammalian brain. Because neurogenesis can only be assessed in postmortem tissue, its functional significance remains undetermined, and identifying an in vivo correlate of neurogenesis has become an important goal. By studying pentylenetetrazole-induced brain stimulation in a rat model of kindling, we accidentally discovered that periodic stimulation of Sprague-Dawley rats every 25±1 days led to a highly efficient increase in seizure susceptibility. METHODOLOGY/PRINCIPAL FINDINGS: By EEG, RT-PCR, western blotting and immunohistochemistry, we show that repeated convulsive seizures with a periodicity of 25±1 days led to an enrichment of newly generated neurons that were BrdU-positive in the dentate gyrus at day 25±1 post-seizure. At the same time, there was a massive increase in the number of neurons expressing the migratory marker doublecortin at the boundary between the granule cell layer and the polymorphic layer in the dorsal hippocampus. Some of these migrating neurons were also positive for NeuN, a marker for adult neurons. CONCLUSION/SIGNIFICANCE: Our results suggest that the increased susceptibility to seizure at day 25±1 post-treatment is coincident with a critical time required for newborn neurons to differentiate and integrate into the existing hippocampal network, and outline the importance of the dorsal hippocampus for seizure-related neurogenesis. This model can be used as an in vivo correlate of neurogenesis to study basic questions related to neurogenesis and to the neurogenic mechanisms that contribute to the development of epilepsy.
SU-E-I-65: Estimation of Tagging Efficiency in Pseudo-Continuous Arterial Spin Labeling (pCASL) MRI
Energy Technology Data Exchange (ETDEWEB)
Jen, M [Chang Gung University, Taoyuan City, Taiwan (China); Yan, F; Tseng, Y; Chen, C [Taipei Medical University - Shuang Ho Hospital, Ministry of Health and Welfare, New Taipei City, Taiwan (China); Lin, C [GE Healthcare, Taiwan (China); GE Healthcare China, Beijing (China); Liu, H [UT MD Anderson Cancer Center, Houston, TX (United States)
2015-06-15
Purpose: pCASL was recommended as a potent approach for absolute cerebral blood flow (CBF) quantification in clinical practice. However, uncertainties in the tagging efficiency of pCASL remain an issue. This study aimed to estimate tagging efficiency by using a short quantitative pulsed ASL scan (FAIR-QUIPSSII) and to compare the resultant CBF values with those calibrated using 2D Phase Contrast (PC) MRI. Methods: Fourteen normal volunteers participated in this study. All images, including whole brain (WB) pCASL, WB FAIR-QUIPSSII and single-slice 2D PC, were collected on a 3T clinical MRI scanner with an 8-channel head coil. A ΔM map was calculated by averaging the subtraction of tag/control pairs in the pCASL and FAIR-QUIPSSII images and used for CBF calculation. Tagging efficiency was then calculated as the ratio of mean gray matter CBF obtained from pCASL and FAIR-QUIPSSII. For comparison, tagging efficiency was also estimated with 2D PC, a previously established method, by contrasting WB CBF in pCASL and 2D PC. Feasibility of estimation from a short FAIR-QUIPSSII scan was evaluated by the number of averages required to obtain a stable ΔM value. Setting the ΔM calculated from the maximum number of averages (50 pairs) as reference, stable results were defined as within ±10% variation. Results: Tagging efficiencies obtained by 2D PC MRI (0.732±0.092) were significantly lower than those obtained by FAIR-QUIPSSII (0.846±0.097) (P<0.05). Feasibility results revealed that four pairs of images in the FAIR-QUIPSSII scan were sufficient to obtain a robust calibration, with less than 10% difference from using 50 pairs. Conclusion: This study found that a reliable estimation of tagging efficiency could be obtained from a few pairs of FAIR-QUIPSSII images, which suggests that a calibration scan of short duration (within 30 s) is feasible. Considering recent reports concerning the variability of PC MRI-based calibration, this study proposes an effective alternative for CBF quantification with pCASL.
Ytreberg, F Marty; Zuckerman, Daniel M
2004-11-15
A promising method for calculating free energy differences ΔF is to generate nonequilibrium data via "fast-growth" simulations or experiments, and then use Jarzynski's equality. However, a difficulty with using Jarzynski's equality is that ΔF estimates converge very slowly and unreliably due to the nonlinear nature of the calculation, thus requiring large, costly data sets. The purpose of the work presented here is to determine the best estimate for ΔF given a (finite) set of work values previously generated by simulation or experiment. Exploiting statistical properties of Jarzynski's equality, we present two fully automated analyses of nonequilibrium data from a toy model and various simulated molecular systems. Both schemes remove at least several k_BT of bias from ΔF estimates, compared to direct application of Jarzynski's equality, for modest-sized data sets (100 work values) in all tested systems. Results from one of the new methods suggest that good estimates of ΔF can be obtained using 5-40-fold less data than was previously possible. Extending previous work, the new results exploit the systematic behavior of bias due to finite sample size. A key innovation is better use of the more statistically reliable information available from the raw data.
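Direct application of Jarzynski's equality, the baseline the two bias-correction analyses improve on, is a log-mean-exponential over the work values: ΔF = -kT ln⟨exp(-W/kT)⟩. A sketch in reduced units (kT = 1 by default; the function name is ours):

```python
import numpy as np

def jarzynski_df(work, kT=1.0):
    """Direct Jarzynski estimate of the free-energy difference,
    dF = -kT * ln< exp(-W/kT) >, computed with a log-sum-exp shift
    for numerical stability."""
    w = np.asarray(work, dtype=float) / kT
    shift = (-w).max()
    return -kT * (shift + np.log(np.mean(np.exp(-w - shift))))
```

For Gaussian work values W ~ N(mu, sigma^2) the exact answer is mu - sigma^2/(2kT), which makes the finite-sample bias measurable: small data sets systematically overestimate ΔF because the rare low-work tail dominates the exponential average, exactly the behavior the analyses above correct for.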
DEFF Research Database (Denmark)
Fereczkowski, Michal; Jepsen, Morten Løve; Dau, Torsten
2017-01-01
It is well known that pure-tone audiometry does not sufficiently describe individual hearing loss (HL) and that additional measures beyond pure-tone sensitivity might improve the diagnostics of hearing deficits. Specifically, forward masking experiments to estimate basilar-membrane (BM) input...
Relative efficiency of non-parametric error rate estimators in multi ...
African Journals Online (AJOL)
parametric error rate estimators in 2-, 3- and 5-group linear discriminant analysis. The simulation design took into account the number of variables (4, 6, 10, 18) together with the sample size n, so that n/p = 1.5, 2.5 and 5. Three values of the ...
Jiang, George J.; Sluis, Pieter J. van der
1999-01-01
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using
Energy Technology Data Exchange (ETDEWEB)
Barrera, M. C.; Recondo, J. A.; Aperribay, M.; Gervas, C.; Fernandez, E.; Alustiza, J. M.
2003-07-01
To evaluate the efficiency of magnetic resonance (MR) in the diagnosis of knee lesions and how the results are influenced by the time interval between MR and arthroscopy, 248 knees studied by MR were retrospectively analyzed, including those which also underwent arthroscopy. Arthroscopy was considered the gold standard, and the diagnostic capacity of MR was evaluated for both meniscal and cruciate ligament lesions. Sensitivity, specificity and Kappa index were calculated for the set of all knees included in the study (248), for those in which the time between MR and arthroscopy was less than or equal to three months (134), and for those in which the time between both procedures was less than or equal to one month. Sensitivity, specificity and Kappa index of MR had global values of 96.5%, 70% and 71%, respectively. When the interval between MR and arthroscopy was less than or equal to three months, sensitivity, specificity and Kappa index were 95.5%, 75% and 72%, respectively. When it was less than or equal to one month, sensitivity was 100%, specificity was 87.5% and the Kappa index was 91%. MR is an excellent tool for the diagnosis of knee lesions. Higher MR values of sensitivity, specificity and Kappa index are obtained when the time interval between both procedures is kept to a minimum. (Author) 11 refs.
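The three reported metrics all come from a 2x2 table of MR findings against the arthroscopy gold standard. A sketch with hypothetical counts (chosen only for illustration, not the study's data):

```python
# Hypothetical MR-vs-arthroscopy 2x2 counts: tp/fn by disease status,
# fp/tn among lesion-free knees. Not the study's actual table.
tp, fn, fp, tn = 55, 2, 6, 14
n = tp + fn + fp + tn

sens = tp / (tp + fn)  # sensitivity: lesions correctly detected by MR
spec = tn / (tn + fp)  # specificity: lesion-free knees correctly cleared

# Cohen's kappa: observed agreement corrected for chance agreement.
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)
```

With these counts sensitivity is about 0.96, specificity 0.70 and kappa about 0.71, i.e. the same order as the study's global values.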
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding similar performance.
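The expansion underlying this idea is the Neumann series for the precision matrix: if C = A + B with B a small correction to a well-understood A, then to first order (A + B)^-1 ≈ A^-1 - A^-1 B A^-1, so only B needs to be estimated from simulations. A minimal numeric sketch with toy matrices (the paper's estimator of the expansion terms is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = np.diag(np.full(d, 2.0))  # "analytic" part, e.g. shape-noise covariance
B = 0.1 * np.eye(d) + 0.02 * rng.standard_normal((d, d))
B = 0.5 * (B + B.T)           # small symmetric correction, toy stand-in

# First-order expansion of the precision matrix around A:
#   (A + B)^-1 ~= A^-1 - A^-1 @ B @ A^-1
Ainv = np.linalg.inv(A)
prec_approx = Ainv - Ainv @ B @ Ainv
prec_exact = np.linalg.inv(A + B)
err = np.max(np.abs(prec_approx - prec_exact))  # second-order remainder
```

Because the remainder scales quadratically with the size of B, the approximation error here is far below the size of the correction itself.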
A Methodology for the Estimation of the Wind Generator Economic Efficiency
Zaleskis, G.
2017-12-01
Integration of renewable energy sources and the improvement of the technological base may not only reduce the consumption of fossil fuel and environmental load, but also ensure the power supply in regions with difficult fuel delivery or power failures. The main goal of the research is to develop the methodology of evaluation of the wind turbine economic efficiency. The research has demonstrated that the electricity produced from renewable sources may be much more expensive than the electricity purchased from the conventional grid.
Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt
2017-01-01
Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
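The constant property model referred to above has a closed form: with ZT evaluated at the mean operating temperature, the maximum conversion efficiency is the Carnot factor times a ZT-dependent reduction. A sketch with illustrative temperatures (not tied to any of the 18 materials studied):

```python
import math

def eta_max(zt_avg, t_hot, t_cold):
    """Constant-property maximum thermoelectric efficiency:
    eta = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th),
    with ZT taken at the average temperature, as recommended above."""
    m = math.sqrt(1.0 + zt_avg)
    return (1.0 - t_cold / t_hot) * (m - 1.0) / (m + t_cold / t_hot)

eta = eta_max(1.0, 500.0, 300.0)  # ZT = 1 at the mean temperature (toy values)
carnot = 1.0 - 300.0 / 500.0      # thermodynamic upper bound
```

For ZT = 1 between 300 K and 500 K this gives roughly 8% efficiency against a 40% Carnot limit, illustrating why ZT at the average temperature, not the peak ZT, is the relevant screening metric.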
Mannan, Haider; Stevenson, Chris
2010-08-01
For the prediction of risk of cardiovascular end points using survival models, the proportional hazards assumption is often not met. Thus, non-proportional hazards models are more appropriate for developing risk prediction equations in such situations. However, computer programs for evaluating the prediction performance of such models have rarely been addressed. We therefore developed SAS macro programs for evaluating the discriminative ability of a non-proportional hazards Weibull model developed by Anderson (1991) and that of a proportional hazards Weibull model, using the area under the receiver operating characteristic (ROC) curve. Two SAS macro programs for the non-proportional hazards Weibull model, using Proc NLIN and Proc NLP respectively, and model validation using the area under the ROC curve (with its confidence limits) were written in the SAS IML language. A similar SAS macro for the proportional hazards Weibull model was also written. The programs were applied to data on coronary heart disease incidence for a Framingham population cohort. The five risk factors considered were current smoking, age, blood pressure, cholesterol and obesity. The predictive ability of the non-proportional hazards Weibull model was slightly higher than that of its proportional hazards counterpart. An advantage of SAS Proc NLP, in terms of the example provided here, is that it provides significance levels for the parameter estimates, whereas Proc NLIN does not. The programs are very useful for evaluating the predictive performance of non-proportional and proportional hazards Weibull models.
Meloun, Milan; Dluhosová, Zdenka
2008-04-01
A method for the determination of 1-hydroxypyrene in urine and hexachlorobenzene in water, applying the regression triplet in the calibration procedure of chromatographic data, has been applied. The detection limit and quantification limit are currently calculated on the basis of the standard deviation of replicate analyses at a single concentration. However, since the standard deviation depends on concentration, these single-concentration techniques result in limits that are directly dependent on spiking concentration. A more rigorous approach requires careful attention to the three components of the regression triplet (data, model, method), examining (1) the quality of the data for the proposed model, (2) the quality of the model and (3) the least-squares method to be used, so that all least-squares assumptions are fulfilled. For the high-performance liquid chromatography determination of 1-hydroxypyrene in urine and the gas chromatography analysis of hexachlorobenzene in water, this paper describes the effects of deviations from five basic assumptions. The paper considers the correction of these deviations: identifying influential points (namely outliers), recognizing that the calibration task depends on the regression model used, and noting that the least-squares method rests on the assumptions of normality of the errors, homoscedasticity and independence of the errors. Results show that the approach developed provides improved estimates of analytical limits and that the single-concentration approaches currently in wide use are seriously flawed.
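A common regression-based alternative to the single-concentration limits criticized above derives the detection and quantification limits from the residual scatter of the calibration line (the ICH-style 3.3*s/slope and 10*s/slope rule). A sketch on hypothetical calibration data (not the paper's data, and simpler than its full regression-triplet diagnostics):

```python
import numpy as np

# Hypothetical calibration points: concentration (x) vs detector response (y).
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.52, 1.03, 1.98, 4.05, 7.97])

# Ordinary least-squares calibration line.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s_y = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # residual std dev of the fit

lod = 3.3 * s_y / slope   # detection limit from the calibration regression
loq = 10.0 * s_y / slope  # quantification limit
```

Because s_y pools information across the whole calibration range, these limits do not depend on a single spiking concentration, which is the core of the paper's objection to the replicate-at-one-level practice.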
Latypov, A. F.
2008-12-01
Fuel economy at boost trajectory of the aerospace plane was estimated during energy supply to the free stream. Initial and final flight velocities were specified. The model of a gliding flight above cold air in an infinite isobaric thermal wake was used. The fuel consumption rates were compared at optimal trajectory. The calculations were carried out using a combined power plant consisting of ramjet and liquid-propellant engine. An exergy model was built in the first part of the paper to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift was used to estimate the aerodynamic drag of aircraft. The energy for flow heating was obtained at the expense of an equivalent reduction of the exergy of combustion products. The dependencies were obtained for increasing the range coefficient of cruise flight for different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the aircraft flight trajectory and the computational results for the reduction of fuel consumption at the boost trajectory for a given value of the energy supplied in front of the aircraft.
The fallacy of placing confidence in confidence intervals
Morey, Richard D.; Hoekstra, Rink; Rouder, Jeffrey N.; Lee, Michael D.; Wagenmakers, Eric-Jan
2016-01-01
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true
Directory of Open Access Journals (Sweden)
Sergey Kharitonov
2015-06-01
Full Text Available Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for the transport subsystem of airport infrastructure. The work also assesses the possibilities of algorithmic computational mechanisms to improve the tools for public administration of transport subsystems.
DEFF Research Database (Denmark)
Løhndorf, Petar Durdevic; Pedersen, Simon; Yang, Zhenyu
2016-01-01
to reach the desired oil production capacity, consequently the discharged amount of oil increases.This leads to oceanic pollution, which has been linked to various negative effects in the marine life. The current legislation requires a maximum oil discharge of 30 parts per million (PPM). The oil in water...... a novel control technology which is based on online and dynamic OiW measurements. This article evaluates some currently available on- line measuring technologies to measure OiW, and the possibility to use these techniques for hydrocyclone efficiency evaluation, model development and as a feedback...
Directory of Open Access Journals (Sweden)
Borsukiewicz-Gozdur Aleksandra
2007-01-01
Full Text Available In this work, results are presented of investigations regarding the effectiveness of operation of a power plant fed by geothermal water with flow rates of 100, 150, and 200 m3/h and temperatures of 70, 80, and 90 °C, i.e. geothermal water with the parameters available in some towns of the West Pomeranian region, as well as in Stargard Szczecinski (86.4 °C), Poland. The calculation results concern a geothermal power plant system with the possibility of utilizing heat for technological purposes. Possibilities are analyzed of applying different working fluids with respect to the most efficient utilization of geothermal energy.
Directory of Open Access Journals (Sweden)
Pioz Maryline
2011-04-01
Full Text Available Understanding the spatial dynamics of an infectious disease is critical when attempting to predict where and how fast the disease will spread. We illustrate an approach using a trend-surface analysis (TSA) model combined with a spatial error simultaneous autoregressive model (SARerr model) to estimate the speed of diffusion of bluetongue (BT), an infectious disease of ruminants caused by bluetongue virus (BTV) and transmitted by Culicoides. In a first step to gain further insight into the spatial transmission characteristics of BTV serotype 8, we used 2007-2008 clinical case reports in France and TSA modelling to identify the major directions and speed of disease diffusion. We accounted for spatial autocorrelation by combining TSA with a SARerr model, which led to a trend SARerr model. Overall, BT spread from north-eastern to south-western France. The average trend SARerr-estimated velocity across the country was 5.6 km/day. However, velocities differed between areas and time periods, varying between 2.1 and 9.3 km/day. For more than 83% of the contaminated municipalities, the trend SARerr-estimated velocity was less than 7 km/day. Our study was a first step in describing the diffusion process for BT in France. To our knowledge, it is the first to show that BT spread in France was primarily local and consistent with the active flight of Culicoides and local movements of farm animals. Models such as trend SARerr models are powerful tools to provide information on the direction and speed of disease diffusion when the only data available are the dates and locations of cases.
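The core of a trend-surface analysis of disease spread is to fit a smooth surface T(x, y) to the onset dates of cases; the local diffusion speed is then the inverse of the gradient magnitude of that surface. A sketch on synthetic data (a degree-1 surface and a made-up 5 km/day wave, much simpler than the trend SARerr model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic outbreak: onset day increases linearly along x at 5 km/day,
# i.e. T(x, y) = x / 5 with x, y in km. Not the BTV case data.
x = rng.uniform(0, 300, 200)
y = rng.uniform(0, 300, 200)
t = x / 5.0 + rng.normal(0, 0.5, 200)  # onset day with reporting noise

# Trend-surface analysis: least-squares fit of T(x, y), here a plane.
X = np.column_stack([np.ones_like(x), x, y])
beta, *_ = np.linalg.lstsq(X, t, rcond=None)

# The gradient of the onset surface has units day/km; speed is its inverse.
grad = np.hypot(beta[1], beta[2])
speed = 1.0 / grad  # km/day
```

On this synthetic wave the recovered speed is close to the planted 5 km/day; with real data a higher-degree surface and the SARerr correction for spatial autocorrelation are needed, as the abstract describes.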
Efficient time of arrival estimation in the presence of multipath propagation.
Villemin, Guilhem; Fossati, Caroline; Bourennane, Salah
2013-10-01
Most acoustical experiments face multipath propagation issues. The times of arrival of different ray paths at a sensor can be very close. To estimate them, high-resolution algorithms have been developed. The main drawback of these methods is their need for a full-rank spectral matrix of the signals. The frequential smoothing technique overcomes this issue by dividing the received signal spectrum into several overlapping sub-bands. This division yields a transfer matrix that may suffer from rank deficiency. In this paper, a new criterion to optimally choose the sub-band frequencies is proposed. Encouraging results were obtained on real-world data.
Shalkov, Anton; Mamaeva, Mariya
2017-11-01
The article considers questions of the application of nondestructive methods for controlling the reducers of belt conveyors as a means of transport. Particular attention is paid to such types of technical-condition diagnostics as thermal control and analysis of the state of lubricants. The urgency of the types of nondestructive testing presented in the article is determined by the need to increase the energy efficiency of the transport systems of coal and mining enterprises, in particular the reducers of belt conveyors. Periodic in-depth spectral-emission diagnostics of the operating oil, together with monitoring of its temperature regime, allow the actual technical condition of a belt-conveyor gearbox to be tracked and premature failures to be prevented. In turn, thermal imaging diagnostics reveals defects at the earliest stage of their formation and development, which allows planning the volumes and terms of equipment repair. The presented diagnostics of technical condition will allow monitoring the condition of the equipment over time and avoiding its premature failure, thereby increasing the energy efficiency of both the transport system and the enterprise as a whole, and also avoiding unreasonable increases in operating and maintenance costs.
Battu, Raminderjit Singh; Singh, Baljeet; Kooner, Rubaljot; Singh, Balwinder
2008-04-09
An analytical method was standardized for the estimation of residues of flubendiamide and its metabolite desiodo flubendiamide in various substrates comprising cabbage, tomato, pigeonpea grain, pigeonpea straw, pigeonpea shell, chilli, and soil. The samples were extracted with acetonitrile, diluted with brine solution, partitioned into chloroform, dried over anhydrous sodium sulfate, and treated with 500 mg of activated charcoal powder. The final clear extracts were concentrated under vacuum and reconstituted in HPLC-grade acetonitrile, and residues were estimated using HPLC equipped with a UV detector at 230 nm and a C18 column. Acetonitrile/water (60:40 v/v) at 1 mL/min was used as the mobile phase. Flubendiamide and desiodo flubendiamide gave distinct peaks at retention times of 11.07 and 7.99 min, respectively. Consistent recoveries ranging from 85 to 99% for both compounds were observed when samples were spiked at 0.10 and 0.20 mg/kg levels. The limit of quantification of the method was worked out to be 0.01 mg/kg.
Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra
Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.
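The "Virtual Sensor" idea is a supervised regression: learn a mapping from the channels a cheap instrument measures everywhere to the spectra an expensive instrument measures only on a subset, then predict the missing spectra. A sketch using a plain linear least-squares map on synthetic data (a simple stand-in for the MLPs and SVMs the paper actually demonstrates, and without their uncertainty estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic scene: 1000 pixels with 5 broad low-resolution bands measured
# everywhere; 20-channel spectra available only on a 200-pixel training set.
n_train, n_all = 200, 1000
low_res = rng.uniform(0, 1, (n_all, 5))
M = rng.uniform(-1, 1, (5, 20))  # hidden band-to-spectrum mixing, toy only
spectra = low_res @ M + 0.01 * rng.standard_normal((n_all, 20))

# "Virtual sensor": fit the mapping on the training subset...
W, *_ = np.linalg.lstsq(low_res[:n_train], spectra[:n_train], rcond=None)
# ...then predict full spectra where only the low-res bands were measured.
pred = low_res[n_train:] @ W
rmse = np.sqrt(np.mean((pred - spectra[n_train:]) ** 2))
```

For nonlinearly correlated instruments the linear map would be replaced by an MLP or kernel SVM as in the paper, but the train-on-overlap / predict-everywhere structure is the same.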
Cutrignelli, Annalisa; Trapani, Adriana; Lopedota, Angela; Franco, Massimo; Mandracchia, Delia; Denora, Nunzio; Laquintana, Valentino; Trapani, Giuseppe
2011-12-01
The main aim of the present study was to estimate the carrier characteristics affecting the dissolution efficiency of griseofulvin (Gris)-containing blends (BLs) using partial least squares (PLS) regression analysis. These systems were prepared at three different drug/carrier weight ratios (1/5, 1/10 and 1/20) by the solvent evaporation method, a well-established method for preparing solid dispersions (SDs). The carriers used were structurally different, including polymers, a polyol, acids, bases and sugars. The BLs were characterised in the solid state by spectroscopic (Fourier transform infrared spectroscopy), thermoanalytical (differential scanning calorimetry) and X-ray diffraction studies, and their dissolution behaviour was quantified in terms of dissolution efficiencies (log DE/DE(Gris)). The correlation between the selected descriptors, including parameters for size, lipophilicity, cohesive energy density and hydrogen bonding capacity, and log DE/DE(Gris) (where DE and DE(Gris) are the dissolution efficiencies of the BLs and the pure drug, respectively) was established by PLS regression analysis. Two models characterised by satisfactory coefficients of determination were thus derived. The generated equations point out that the aqueous solubility, density, lipophilic/hydrophilic character, dispersive/polar forces and hydrogen bonding acceptor/donor ability of the carrier are important features for dissolution efficiency enhancement. Finally, it can be concluded that the correlations developed may be used to predict, at a semiquantitative level, the dissolution behaviour of BLs of other essentially neutral drugs possessing only hydrogen bonding acceptor groups.
Directory of Open Access Journals (Sweden)
Latyshev N.V.
2012-03-01
Full Text Available The purpose of this work was to experimentally verify the efficiency of a method for developing the special endurance of athletes using control-trainer devices. The experiment involved 24 athletes aged 16-17 years. Significant differences were found between the groups of athletes on indices in tests of special physical preparation (heat round hands and passage-way in feet), in a test of special endurance (on all test indices, except for the number of exercises executed in the first period), and during work on the control-trainer device (work on a trainer for 60 seconds and work on a trainer for 3×120 seconds).
Directory of Open Access Journals (Sweden)
Amir Hossein Fallahpour
2017-02-01
Full Text Available There are numerous theoretical approaches to estimating the power conversion efficiency (PCE) of organic solar cells (OSCs), ranging from the empirical approach to calculations based on general considerations of thermodynamics. Depending on the level of abstraction and the model assumptions, the accuracy of the PCE estimation and the complexity of the calculation can change dramatically. In particular, PCE estimation with a drift-diffusion approach (widely investigated in the literature) strongly depends on the assumptions made for the physical models and the optoelectrical properties of the semiconducting materials. This has led to large deviations, as well as complications in the analysis of simulated results, when aiming to understand the factors limiting the performance of OSCs. In this work, we intend to highlight the complex relation between mobility, exciton dynamics, nanoscale dimensions and loss mechanisms in one framework. Our systematic analysis provides key information on the sensitivity of the drift-diffusion approach, to estimate how physical parameters and physical processes bound the PCE of the device under the influence of structure, contact and material layer properties. The obtained results ultimately lead to recommendations for putting effort into certain properties to get the most out of avoidable losses, present the impact and importance of modifying material properties, and, in particular, recommend to what degree the design of new materials could improve OSC performance.
Nouvellon, Yann; Seen, Danny L.; Rambal, S.; Begue, Agnes; Moran, M. Susan; Kerr, Yann H.; Qi, Jiaguo
1998-12-01
A reliable estimation of the primary production of terrestrial ecosystems is often a prerequisite for land management, while also being important in ecological and climatological studies. At a regional scale, grassland primary production estimates are increasingly being made using satellite data. In a currently used approach, regional Gross, Net and Above-ground Net Primary Productivity (GPP, NPP and ANPP) are derived from the parametric model of Monteith and are calculated as the product of the fraction of incident photosynthetically active radiation absorbed by the canopy (fAPAR) and the gross, net and above-ground net production (radiation-use) efficiencies (εg, εn, εan); fAPAR is derived from indices calculated from satellite-measured reflectances in the red and near infrared. The accuracy and realism of the primary production values estimated by this approach therefore largely depend on an accurate estimation of εg, εn and εan. However, data are scarce for the production efficiencies of semi-arid grasslands, and their time and spatial variations are poorly documented, leading to often large errors in the estimates. In this paper a modeling approach taking into account relevant ecosystem processes, based on extensive field data, is used to estimate sub-seasonal and inter-annual variations of εg, εn and εan at a shortgrass site in Arizona, and to quantitatively explain these variations by those of plant water stress, temperature, leaf aging, and processes such as respiration and changes in allocation pattern. For example, over the 3 study years, the mean εg, εn, and εan were found to be 1.92, 0.74 and 0.29 g DM (MJ APAR)-1, respectively. εg and εn exhibited very important inter-annual and seasonal variations, mainly due to different water stress conditions during the growing season. Inter-annual variations of εan were much
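The Monteith model named above is a simple product: production equals the radiation-use efficiency times the absorbed PAR. A one-line numeric sketch using the mean εg reported for the site, with fAPAR and incident PAR chosen for illustration only:

```python
# Monteith-type production estimate: GPP = eps_g * fAPAR * PAR.
eps_g = 1.92  # mean gross radiation-use efficiency, g DM per MJ APAR (from above)
fapar = 0.35  # fraction of incident PAR absorbed by the canopy (illustrative)
par = 9.0     # incident PAR, MJ m^-2 day^-1 (illustrative)

gpp = eps_g * fapar * par  # g DM m^-2 day^-1
```

Any error in the assumed efficiency propagates linearly into the production estimate, which is why the seasonal variability of εg and εn documented in the paper matters for regional applications.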
Lipinski, Doug; Mohseni, Kamran
2010-03-01
A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
Aleksandrov, V. I.; Vasilyeva, M. A.; Pomeranets, I. B.
2017-10-01
The paper presents analytical calculations of the specific pressure loss in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry. The calculations are based on the results of experimental studies of the dependence of specific pressure loss on the hydraulic roughness of the internal surface of pipelines lined with a polyurethane coating. The experiments proved that the hydraulic roughness of the polyurethane coating is a factor of four smaller than that of steel pipelines, resulting in a decrease of the hydraulic resistance coefficients entering the formula for specific pressure loss - the Darcy-Weisbach formula. Relative and equivalent roughness coefficients are calculated for pipelines with and without the polyurethane coating. Comparative calculations show that applying a polyurethane coating to hydrotransport pipelines is conducive to a decrease in specific energy consumption in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry by a factor of 1.5. The experiments were performed on a laboratory hydraulic test rig to estimate the character and rate of change of physical roughness in pipe samples with polyurethane coating. The experiments showed that during 484 hours of operation, the roughness changed inappreciably in all pipe samples. As a result of processing the experimental data by methods of mathematical statistics, an empirical formula was obtained for calculating the operating roughness of the polyurethane coating surface, depending on the duration of pipeline operation with iron ore processing tailings slurry.
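The Darcy-Weisbach formula referenced above gives the specific pressure loss as i = (λ/D)·ρv²/2, so a smoother wall lowers the loss only through the resistance coefficient λ. A sketch with illustrative numbers (the λ values, density and velocity are assumptions, not the paper's data):

```python
# Darcy-Weisbach specific pressure loss i = lambda/D * rho*v^2/2, in Pa/m.
rho = 1300.0  # tailings slurry density, kg/m^3 (illustrative)
v = 3.0       # mean flow velocity, m/s (illustrative)
D = 0.3       # pipe internal diameter, m (illustrative)

def pressure_gradient(lam):
    """Specific pressure loss for a given hydraulic resistance coefficient."""
    return lam / D * rho * v**2 / 2.0

i_steel = pressure_gradient(0.030)  # bare steel wall
i_lined = pressure_gradient(0.020)  # smoother polyurethane-lined wall
ratio = i_steel / i_lined           # energy saving factor
```

With λ reduced from 0.030 to 0.020 the pressure gradient, and hence the specific energy consumption, drops by exactly the factor of 1.5 reported in the paper.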
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied to a linear contaminant transport model in the same porous medium. Because of different sources of uncertainty, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or one of its nonlinear variants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. The low-rank filters are demonstrated to significantly reduce the computational effort of the KF, to almost 3%. © 2012 American Society of Civil Engineers.
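The measurement-update step that all of these filters share blends the model forecast with an observation, weighted by the Kalman gain. A minimal full-covariance sketch on a toy three-dimensional state (the SEKF/SFKF variants replace the full covariance P with a low-rank factorization, which is not shown here):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard linear Kalman filter measurement update."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ (z - H @ x)     # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected error covariance
    return x_new, P_new

# Toy contaminant state: forecast of zero with unit uncertainty, and one
# well observing the first component with noise variance 0.25.
x, P = np.zeros(3), np.eye(3)
H, R = np.array([[1.0, 0.0, 0.0]]), np.array([[0.25]])
x1, P1 = kf_update(x, P, np.array([1.0]), H, R)
```

The update pulls the observed component 80% of the way toward the measurement and shrinks its variance from 1 to 0.2, leaving the unobserved components unchanged; the cost of storing and updating P is what the low-rank variants attack.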
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Yohei; Momma, Akihiko; Kato, Ken; Negishi, Akira; Takano, Kiyonami; Nozaki, Ken; Kato, Tohru [Fuel Cell System Group, Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), AIST Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan)
2009-03-15
Uncertainty of electrical efficiency measurement was investigated for a 10 kW-class SOFC system fueled with town gas. The uncertainty of the heating value measured by gas chromatography on a molar basis was estimated as ±0.12% at the 95% level of confidence. Micro gas chromatography with or without CH4 quantification may further reduce measurement uncertainty. Calibration and uncertainty estimation methods are proposed for flow-rate measurement of town gas with thermal mass-flow meters or controllers. With adequate calibration of the flowmeters, the flow rate of town gas or natural gas at 35 standard liters per minute can be measured within a relative uncertainty of ±1.0% at the 95% level of confidence. Uncertainty of power measurement can be as low as ±0.14% when a precise wattmeter is used and calibrated properly. It is clarified that the electrical efficiency of non-pressurized 10 kW-class SOFC systems can be measured within ±1.0% relative uncertainty at the 95% level of confidence with the developed techniques when the systems are operated relatively stably. (author)
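The overall relative uncertainty quoted above is consistent with a root-sum-of-squares (GUM-style) combination of the independent component uncertainties of power, heating value, and flow rate; a minimal sketch, using only the figures reported in the abstract:

```python
import math

def combined_relative_uncertainty(*rel_uncertainties: float) -> float:
    """Root-sum-of-squares combination of independent relative standard
    uncertainties (propagation for a quantity that is a product/quotient,
    such as efficiency = power / (flow rate * heating value))."""
    return math.sqrt(sum(u**2 for u in rel_uncertainties))

# Figures from the abstract: power +/-0.14%, heating value +/-0.12%,
# fuel flow rate +/-1.0%, each at 95% confidence.
u_total = combined_relative_uncertainty(0.14, 0.12, 1.0)
print(round(u_total, 2))  # ~1.02%, dominated by the flow-rate term
```

Note how the flow-rate term dominates: the quadrature sum barely exceeds its largest component, which is why the abstract's overall figure matches the flow-rate uncertainty.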
Directory of Open Access Journals (Sweden)
Popkov V.M.
2013-03-01
Full Text Available Research objective: to study the role of prognostic factors in estimating the risk of recurrent prostate cancer after treatment with high-intensity focused ultrasound (HIFU). Materials and methods: the study included 102 patients with biopsy-proven localized prostate cancer treated in the Clinic of Urology of the Saratov Clinical Hospital n.a. S. R. Mirotvortsev; 102 sessions of initial operative treatment of prostate cancer by HIFU were performed. The overall group (n=102) was randomly subdivided into two samples: patients without recurrent tumor and patients with recurrent tumor, as revealed by morphological examination of biopsy material from residual prostate tissue after HIFU. A computer program was used to study predictors of outcome in patients with prostate cancer. Results: the risk of recurrent prostate cancer grew with increasing PSA level and PSA density. An index of positive biopsy cores <0.2 was associated with recurrence in 17% of cases, whereas an index of 0.5 or higher was associated with recurrence in 59% of cases. The number of relapses tended to grow clearly with increasing Gleason score in the presence of perineural invasion, and recurrent prostate cancer predominated in patients with lymphovascular invasion. In conclusion, the main predictors of recurrent prostate cancer development include PSA level, PSA density, Gleason score, lymphovascular invasion and perineural invasion.
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration in which the inherent multi-objective nature of the problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during parameter estimation. However, its use poses key challenges: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one of the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a three-stage framework for parameter estimation. Stage 1 uses an efficient surrogate multi-objective algorithm, GOMORS, to identify numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS lies in Stages 2 and 3, where an interactive visual and metric-based analytics framework serves as a decision support tool for choosing a single calibration from the numerous alternatives identified in Stage 1. Stage 2 provides a goodness-of-fit, metric-based interactive framework for identifying a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from those obtained in Stage 1. Stage 3 incorporates the use of an interactive visual
Miyabe, Kanji; Guiochon, Georges
2011-01-01
It is probably impossible to prepare high-performance liquid chromatography (HPLC) columns that have a completely homogeneous packing structure. Many reports in the literature show that the radial distributions of the mobile phase flow velocity and the local column efficiency are not flat, even in columns considered as good. A degree of radial heterogeneity seems to be a common property of all HPLC columns and an important source of peak tailing, which prevents the derivation of accurate information on chromatographic behavior from a straightforward analysis of elution peak profiles. This work reports on a numerical method developed to derive from recorded peak profiles the column efficiency at the column center, the degree of column radial heterogeneity, and the polynomial function that best represents the radial distributions of the flow velocity and the column efficiency. This numerical method was applied to two concrete examples of tailing peak profiles previously described. It was demonstrated that this numerical method is effective to estimate important parameters characterizing the radial heterogeneity of chromatographic columns.
Directory of Open Access Journals (Sweden)
Jaewook Lee
2015-06-01
Full Text Available This paper presents an efficient method for estimating capacity-fade uncertainty in lithium-ion batteries (LIBs) in order to integrate it into the battery-management system (BMS) of electric vehicles, which requires simple and inexpensive computation for successful application. The study uses the pseudo-two-dimensional (P2D) electrochemical model, which simulates the battery state by solving a system of coupled nonlinear partial differential equations (PDEs). The model parameters responsible for electrode degradation are identified and estimated from battery data obtained over charge cycles. A Bayesian approach, with parameters estimated as probability distributions, is employed to account for uncertainties arising in the model and in the battery data. The Markov chain Monte Carlo (MCMC) technique is used to draw samples from the distributions. The complex computations required to solve a PDE system for each sample are avoided by employing a polynomial-based metamodel. As a result, the computational cost is reduced from 5.5 h to a few seconds, enabling integration of the method into the vehicle BMS. Using this approach, a conservative bound on capacity fade can be determined for the vehicle in service, representing a safety margin that reflects the uncertainty.
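The metamodel idea can be illustrated with a toy problem: fit a polynomial surrogate to a few runs of an expensive model, then run Metropolis MCMC against the surrogate only, so the expensive solver is never called inside the sampling loop. Everything here (the stand-in model, the noise level, the flat prior) is a hypothetical sketch, not the paper's P2D setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(theta):
    """Stand-in for the P2D solve: maps a degradation parameter to
    remaining capacity (hypothetical closed form, illustration only)."""
    return 1.0 - 0.3 * theta + 0.05 * theta**2

# Stage 1: fit a cheap polynomial metamodel from a few expensive runs.
train_t = np.linspace(0.0, 2.0, 10)
coeffs = np.polyfit(train_t, expensive_model(train_t), deg=2)
surrogate = np.poly1d(coeffs)

# Stage 2: Metropolis sampling of theta given a noisy capacity observation,
# evaluating only the surrogate inside the loop (flat prior assumed).
obs, sigma = 0.80, 0.02
def log_post(theta):
    return -0.5 * ((surrogate(theta) - obs) / sigma) ** 2

theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print(np.mean(samples[1000:]))  # posterior mean of the degradation parameter
```

The speedup comes entirely from replacing the solver call with a polynomial evaluation; the Bayesian machinery is unchanged.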
Directory of Open Access Journals (Sweden)
Manuel G Scotto
2003-12-01
Full Text Available This essay clarifies some statistical concepts frequently used in public health research that are commonly misinterpreted, among them point estimation, confidence intervals, and hypothesis tests. By drawing a parallel between these three concepts, from both the classical and the Bayesian perspectives, their most important differences in interpretation become clearer.
El-Serehy, Hamed A; Bahgat, Magdy M; Al-Rasheid, Khaled; Al-Misned, Fahad; Mortuza, Golam; Shafik, Hesham
2014-07-01
Interest has increased over the last several years in using different methods for treating sewage. Rapid population growth in developing countries (Egypt, for example, with a population of more than 87 million) has created significant sewage disposal problems. There is therefore a growing need for sewage treatment solutions with low energy requirements that use indigenous materials and skills. Gravel Bed Hydroponics (GBH), a constructed wetland system, has proved effective for sewage treatment in several Egyptian villages. The system provided an excellent environment for a wide range of ciliate species (23 species), and these organisms were potentially very useful as biological indicators of various saprobic conditions. Moreover, the ciliates provided an excellent means of estimating the efficiency of the system for sewage purification. Results affirmed the ability of this system to produce high-quality effluent with sufficient microbial reduction to yield irrigation-quality water.
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)
2017-11-02
This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM&V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attributing savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and its work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares current industry practices for determining net energy savings but does not prescribe methods.
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...... and the heave acceleration. Resonance excitation, parametric roll and forced roll are all included in the model, albeit with some simplifications. The result is the mean out-crossing rate of the roll angle together with the corresponding most probable wave scenarios (critical wave episodes), leading to user...
DEFF Research Database (Denmark)
Monica, Dario Della; Goranko, Valentin; Montanari, Angelo
2011-01-01
We discuss a family of modal logics for reasoning about relational structures of intervals over (usually) linear orders, with modal operators associated with the various binary relations between such intervals, known as Allen’s interval relations. The formulae of these logics are evaluated at int...
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
Full Text Available System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of this study is to design the optimal input signal that is then employed in the identification experiment, and to examine the relationship between the friendliness index of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. Here, the objective function was formulated through maximisation of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. Since under such experimental conditions only D-suboptimality can be claimed, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to obtain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
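A minimal numerical illustration of D-efficiency, assuming a two-parameter static linear model rather than the paper's dynamic plant: the D-efficiency of a design with information matrix M is (det M / det M_opt)^(1/p), so a "friendly" low-amplitude input pays for its gentleness with reduced information content. The designs below are hypothetical.

```python
import numpy as np

def fisher_information(inputs: np.ndarray) -> np.ndarray:
    """FIM for the linear model y = a*u + b + noise: sum of x x^T, x = [u, 1]."""
    X = np.column_stack([inputs, np.ones_like(inputs)])
    return X.T @ X

def d_efficiency(M: np.ndarray, M_opt: np.ndarray) -> float:
    """D-efficiency of a design relative to the D-optimal one; p = dim(theta)."""
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_opt)) ** (1.0 / p)

# A "plant-friendly" low-amplitude input vs. an aggressive full-range one
# (designs on u in [-1, 1]; the bang-bang design is D-optimal for this
# two-parameter model).
u_friendly = np.array([-0.4, -0.2, 0.2, 0.4])
u_optimal = np.array([-1.0, -1.0, 1.0, 1.0])
eff = d_efficiency(fisher_information(u_friendly), fisher_information(u_optimal))
print(round(eff, 3))  # < 1: the friendly design carries less information
```

Imposing a lower bound on `eff`, as the abstract describes, amounts to bounding how much estimation accuracy one is willing to trade for plant friendliness.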
Brienen, R J W; Gloor, E; Clerici, S; Newton, R; Arppe, L; Boom, A; Bottrell, S; Callaghan, M; Heaton, T; Helama, S; Helle, G; Leng, M J; Mielikäinen, K; Oinonen, M; Timonen, M
2017-08-18
Various studies report substantial increases in intrinsic water-use efficiency (Wi), estimated using carbon isotopes in tree rings, suggesting trees are gaining increasingly more carbon per unit water lost due to increases in atmospheric CO2. Usually, however, reconstructions do not correct for the effect of intrinsic developmental changes in Wi as trees grow larger. Here we show, by comparing Wi across varying tree sizes at one CO2 level, that ignoring such developmental effects can severely affect inferences of trees' Wi. Wi doubled or even tripled over a tree's lifespan in three broadleaf species due to changes in tree height and light availability alone, and there are also weak trends for pine trees. Developmental trends in broadleaf species are as large as the trends previously assigned to CO2 and climate. Credible future tree-ring isotope studies require explicit accounting for species-specific developmental effects before CO2 and climate effects are inferred. Intrinsic water-use efficiency (Wi) reconstructions using tree rings often disregard developmental changes in Wi as trees age. Here, the authors compare Wi across varying tree sizes at a fixed CO2 level and show that ignoring developmental changes affects conclusions on trees' Wi responses to CO2 or climate.
Li, Shutian; He, Ping; Jin, Jiyun
2013-03-30
Understanding nitrogen (N) use efficiency and the N input/output balance in the agricultural system is crucial for best management of N fertilisers in China. In the last 60 years, N fertiliser consumption correlated positively with grain production. During that period the partial factor productivity of N (PFPN) declined greatly, from more than 1000 kg grain kg⁻¹ N in the 1950s to nearly 30 kg grain kg⁻¹ N in 2008; this change in PFPN could be largely explained by the increase in N rate. The average agronomic efficiency of fertiliser N (AEN) for rice, wheat and maize during 2000-2010 was 12.6, 8.3 and 11.5 kg kg⁻¹ respectively, similar to that in the early 1980s but lower than that in the early 1960s. Estimation based on statistical data showed that a total of 49.16 × 10⁶ t of N was input into Chinese agriculture, of which chemical N, organic fertiliser N, biologically fixed N and other sources accounted for 58.2, 24.3, 10.5 and 7.0% respectively. Nitrogen was in surplus in all regions, the total N surplus being 10.6 × 10⁶ t (60.6 kg ha⁻¹). The great challenge is to balance the use of current N fertilisers between regions and crops to improve N use efficiency while maintaining or increasing crop production under the high-intensity agricultural system of China. © 2012 Society of Chemical Industry.
El Gharamti, Mohamad
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% with respect to the standard EnKF scheme. © 2014 Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Rui Zhang
2017-05-01
Full Text Available Estimates of regional net primary productivity (NPP) are useful in modeling regional and global carbon cycles, especially in karst areas. This work developed a new method to study NPP characteristics and changes in Chongqing, a typical karst area. To estimate NPP accurately, a model called GLOPEM-CEVSA, which integrates an ecosystem process model (CEVSA) with a light use efficiency model (GLOPEM), was applied. The fraction of photosynthetically active radiation (fPAR) was derived from remote sensing data inversion based on moderate resolution imaging spectroradiometer atmospheric and land products. Validation analyses showed that the PAR and NPP values simulated by the model matched the observed data well. The values of other relevant NPP models, as well as the MOD17A3 NPP products (NPPMOD17), were compared. In terms of spatial distribution, NPP decreased from northeast to southwest in the Chongqing region. The annual average NPP in the study area was approximately 534 gC/m²·a (SD = 175.53) from 2001 to 2011, with obvious seasonal variation: NPP from April to October accounted for 80.1% of the annual NPP, and NPP from June to August accounted for 43.2%. NPP changed with the fraction of absorbed PAR, was significantly correlated with precipitation and temperature at monthly temporal scales, and showed stronger sensitivity to interannual variation in temperature.
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data; therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. The method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column; for example, it does not need to estimate tray interfacial area, which may be its most important advantage over the available methods. The method can be used for efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is to be distinguished from that of a column being designed.
Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar
2014-01-01
Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery can produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage inflicted by drag forces, or moisture content introduce noise effects that reduce precision. Available techniques for dealing with noise caused by moisture on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selecting an interval of tolerance of similarity for efficient measurements requires extended computational routines with tied statistical inferences, making the concomitant tasks complicated and time consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed at dealing with any sort of noise modifying eelgrass leaf images. Moreover, this selection criterion relies on a single statistic: the maximum value of the Concordance Correlation Coefficient for reproducibility of observed leaf areas through proxies obtained from digital images. Available data reveal that the present method delivers simplified, consistent estimations of the areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md Nurul Haque
2015-01-01
Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method, to overcome problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and the proposed method) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases in
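One common form of such a weight function, shown here as an illustrative sketch under the assumption of a Gaussian-type kernel (the expression values, mean, and scale below are hypothetical, not from the paper's simulations), maps typical observations near 1 and outliers near 0:

```python
import numpy as np

def beta_weight(x, mu, sigma, beta=0.2):
    """Gaussian-type beta-weight: values in (0, 1], close to 1 for typical
    observations and near 0 for outliers. One common form used with the
    minimum beta-divergence method; shown as an illustrative sketch."""
    return np.exp(-beta * (x - mu) ** 2 / (2.0 * sigma ** 2))

# Hypothetical expression values; the last one is an obvious outlier.
expressions = np.array([5.1, 4.9, 5.0, 5.2, 12.0])
w = beta_weight(expressions, mu=5.0, sigma=0.15, beta=0.2)
print(w.round(3))  # outlier receives a much smaller weight than typical values
```

Thresholding these weights against a cut-off derived from their distribution is what turns the weight function into the outlier detector described above.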
Directory of Open Access Journals (Sweden)
Bazhenov Viktor Ivanovich
2015-09-01
Full Text Available The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economical methods for comparing technical solutions in the construction field. The article describes a practical example of Life Cycle Cost (LCC) evaluation based on Present Value (PV) determination. This enables an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). For the economic analysis, the air-blower station of a WWTP was selected as a significant energy consumer. The technical variants compared are three blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single-stage with dual vane control. The LCC estimation shows the last variant as the most attractive, i.e. cost-effective for investment, with savings of 17.2% (against variant 1) and 21.0% (against variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) together with the related annual expenditure (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which explains the adopted energy consumptions, in kW∙h: 2158 (variant 1), 1743-2201 (variant 2), 1058-1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. An example of evaluations connected with energy-efficient solutions is given, but this reference also covers cases with resource savings, such as all types of fuel. In conclusion, the LCC indicator is recommended for use jointly with the method of determining discounted cash flows, which satisfies the investor's need for an interest source in technical and economic comparisons.
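The PV/LCC arithmetic can be sketched as follows. The cost figures are hypothetical placeholders, not the article's data; they serve only to show how discounting annual energy expenditure over 25 years at a 5% real rate can favor a variant with higher capital cost but lower energy use.

```python
def present_value(annual_cost: float, real_discount_rate: float, years: int) -> float:
    """Discounted sum of a constant annual expenditure over the project life."""
    return sum(annual_cost / (1.0 + real_discount_rate) ** t
               for t in range(1, years + 1))

def life_cycle_cost(capital: float, annual: float,
                    rate: float = 0.05, years: int = 25) -> float:
    """LCC = initial (capital + installation) costs plus the present value
    of annual energy/operation/maintenance expenditure."""
    return capital + present_value(annual, rate, years)

# Hypothetical figures comparing an uncontrolled multistage blower with a
# single-stage dual-vane-controlled one (currency units arbitrary):
lcc_multistage = life_cycle_cost(capital=200_000.0, annual=95_000.0)
lcc_single_stage = life_cycle_cost(capital=260_000.0, annual=70_000.0)
print(lcc_single_stage < lcc_multistage)  # True: lower energy use wins over 25 years
```

At 5% over 25 years, each unit of annual cost contributes about 14 units of present value, which is why recurring energy expenditure dominates the comparison.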
Directory of Open Access Journals (Sweden)
Vlasta Bari
2014-09-01
Full Text Available Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on noise and/or the action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on computing the fastest intrinsic mode function via empirical mode decomposition (EMD) and subtracting it from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs), subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via the KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of cardiovascular control that would otherwise have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
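Sample entropy, used above as the complexity estimator, can be computed in a few lines. The sketch below follows the standard SampEn(m, r) definition with the tolerance r given as a fraction of the series' standard deviation (a common convention); the test signals are synthetic, not Holter data.

```python
import numpy as np

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional
    probability that sequences matching for m points also match for m+1.
    r is interpreted as a fraction of the series' standard deviation."""
    x = np.asarray(series, dtype=float)
    tol = r * x.std()

    def count_matches(dim):
        # N - m templates for both dimensions, as in the standard definition.
        templates = np.array([x[i:i + dim] for i in range(len(x) - m)])
        n = 0
        for i in range(len(templates)):
            dist = np.abs(templates - templates[i]).max(axis=1)  # Chebyshev
            n += int(np.sum(dist <= tol)) - 1                    # exclude self-match
        return n

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # highly regular signal
noisy = rng.standard_normal(500)                   # irregular signal
print(sample_entropy(regular) < sample_entropy(noisy))  # True: noise is more complex
```

This ordering (regular signal below noise) is exactly why removing the fastest EMD mode before computing SampEn, as the study does, makes the index more sensitive to the slower regulatory dynamics.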
Syrejshchikova, T. I.; Gryzunov, Yu. A.; Smolina, N. V.; Komar, A. A.; Uzbekov, M. G.; Misionzhnik, E. J.; Maksimova, N. M.
2010-05-01
The efficiency of therapy for psychiatric diseases is estimated using fluorescence measurements of conformational changes of human serum albumin in the course of medical treatment. The fluorescence decay curves of the CAPIDAN probe (N-carboxyphenylimide of dimethylaminonaphthalic acid) in blood serum are measured. The probe binds specifically to the albumin drug-binding sites and exhibits fluorescence as a reporter ligand. A variation in the conformation of the albumin molecule substantially affects the CAPIDAN fluorescence decay curve on the subnanosecond time scale. A subnanosecond pulsed laser or a PicoQuant LED excitation source and a fast photon detector with a time resolution of about 50 ps are used for the kinetic measurements. The blood sera of ten patients suffering from depression and treated at the Institute of Psychiatry were clinically assessed beforehand. Blood for analysis was taken from each patient prior to treatment and in the third week of treatment. For the ten patients, analysis of the fluorescence decay curves of the probe in blood serum using three-exponential fitting shows that the difference between the amplitudes of the decay function corresponding to the long-lived (9 ns) fluorescence of the probe before and after the therapeutic procedure differs reliably from zero at a significance level of 1% (p < 0.01).
Interval Forecast for Smooth Transition Autoregressive Model ...
African Journals Online (AJOL)
In this paper, we propose a simple method for constructing interval forecasts for the smooth transition autoregressive (STAR) model. This interval forecast is based on bootstrapping the residual errors of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) functions. This new ...
Multivariate interval-censored survival data
DEFF Research Database (Denmark)
Hougaard, Philip
2014-01-01
, non-parametric models are intrinsically more complicated. It is difficult to derive the intervals with positive mass, and estimated interval probabilities may not be unique. A semi-parametric model makes a compromise, with a parametric model, like a frailty model, for the dependence and a non...
An Optimization-Based Approach to Calculate Confidence Interval on Mean Value with Interval Data
Directory of Open Access Journals (Sweden)
Kais Zaman
2014-01-01
Full Text Available In this paper, we propose a methodology for constructing confidence intervals on mean values with interval data, for use with input variables in uncertainty analysis and design optimization problems. Constructing a confidence interval with interval data is known to be a combinatorial optimization problem: finding confidence bounds on the mean with interval data has generally been considered NP-hard, because it involves a search among combinations of multiple values of the variables, including the interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the confidence interval on mean values with interval data. Through numerical experimentation, we show that the proposed confidence bound algorithms scale polynomially with increasing numbers of intervals. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to current practice in design optimization with interval data, which typically implements constraints on interval variables by computing bounds on mean values from the sampled data, the proposed construction of confidence intervals enables a more complete implementation of design optimization under interval uncertainty.
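The combinatorial formulation the paper starts from can be illustrated by brute force for tiny data sets: enumerate all 2^n endpoint choices and track the extreme normal-approximation CI bounds. This sketches the problem, not the paper's polynomial-time algorithm:

```python
import itertools
import math
import statistics

def mean_ci_bounds(intervals, z=1.96):
    """Brute-force the 2^n endpoint combinations of interval data and return
    the extreme bounds of a normal-approximation CI on the mean.
    Illustrates the combinatorial formulation; feasible only for small n."""
    n = len(intervals)
    lo, hi = math.inf, -math.inf
    for point in itertools.product(*intervals):
        m = statistics.fmean(point)
        half = z * statistics.stdev(point) / math.sqrt(n)
        lo = min(lo, m - half)
        hi = max(hi, m + half)
    return lo, hi
```

Each of the 2^n vertex choices yields its own CI; the envelope of those intervals is the conservative bound that efficient algorithms aim to compute without enumeration.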
Updating representations of temporal intervals.
Danckert, James; Anderson, Britt
2015-12-01
Effectively engaging with the world depends on accurate representations of the regularities that make up that world, what we call mental models. The success of any mental model depends on the ability to adapt to changes, that is, to 'update' the model. In prior work, we have shown that damage to the right hemisphere of the brain impairs the ability to update mental models across a range of tasks. Given the disparate nature of the tasks we have employed in this prior work (i.e., statistical learning, language acquisition, position priming, perceptual ambiguity, strategic game play), we propose that a cognitive module important for updating mental representations should be generic, in the sense that it is invoked across multiple cognitive and perceptual domains. To date, the majority of our tasks have been visual in nature. Given the ubiquity and importance of temporal information in sensory experience, we examined the ability to build and update mental models of time. We had healthy individuals complete a temporal prediction task in which intervals were initially drawn from one temporal range before an unannounced switch to a different range of intervals. Separate groups had the second range of intervals switch to one that contained either longer or shorter intervals than the first range. Both groups showed significant positive correlations between perceptual and prediction accuracy. While each group updated mental models of temporal intervals, those exposed to shorter intervals did so more efficiently. Our results support the notion of a generic capacity to update regularities in the environment, in this instance based on temporal information. The task developed here is well suited to investigations in neurological patients and in neuroimaging settings.
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)
2017-10-05
This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management, as it can be used for better water allocation or better system operation and can suggest where future water infrastructure would be most beneficial. Opportunity costs can be estimated using the 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization makes it difficult to accurately represent the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost is to be computed, then running a simulation and comparing the difference in system benefits. The added system benefit per unit of water added to the system then approximates the opportunity cost. This approximation can then be used to design efficient pricing policies that give users incentives to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time. Both backward inductions only require linear
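The perturbation method described above ("adding a small quantity of water ... then comparing the difference in system benefits") reduces to a finite difference. A toy sketch, with a made-up concave benefit function standing in for a river-basin simulator:

```python
# Toy finite-difference estimate of the marginal opportunity cost of water.
# `simulate` is a made-up stand-in for a river-basin simulation model; only
# the perturb-and-rerun idea is the point, not the benefit function itself.

def simulate(extra_water=0.0):
    w = 10.0 + extra_water            # water available at some node/time step
    return 100.0 * w - 2.0 * w ** 2   # system benefit (illustrative units)

def opportunity_cost(delta=0.01):
    """Benefit gained per extra unit of water added to the system."""
    return (simulate(extra_water=delta) - simulate()) / delta

mc = opportunity_cost()  # approximates dB/dw = 100 - 4*w = 60 at w = 10
```

One such run is needed per node and per time step, which is exactly the computational burden the proposed double backward induction avoids.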
Energy Technology Data Exchange (ETDEWEB)
Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bojda, Nicholas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
Confidence Intervals from One Observation
Rodriguez, Carlos C
2008-01-01
Robert Machol's surprising result, that from a single observation it is possible to have finite-length confidence intervals for the parameters of location-scale models, is reproduced and extended. Two previously unpublished modifications are included. First, Herbert Robbins's nonparametric confidence interval is obtained. Second, I introduce a technique for obtaining confidence intervals of finite length for the scale parameter in the logarithmic metric. Keywords: Theory/Foundations, Estimation, Prior Distributions, Non-parametrics & Semi-parametrics, Geometry of Inference, Confidence Intervals, Location-Scale Models
Cheng, Zhen; Jiang, Jingkun; Chen, Changhong; Gao, Jian; Wang, Shuxiao; Watson, John G; Wang, Hongli; Deng, Jianguo; Wang, Buying; Zhou, Min; Chow, Judith C; Pitchford, Marc L; Hao, Jiming
2015-01-20
Aerosol mass scattering efficiency (MSE), used for the scattering coefficient apportionment of aerosol species, has mostly been studied under conditions of low aerosol mass loading in developed countries. Severe pollution episodes with high particle concentrations have occurred frequently in urban eastern China in recent years. Based on two months of synchronous measurements of aerosol physical, chemical, and optical properties in the megacity of Shanghai during autumn 2012, we studied MSE characteristics at high aerosol mass loading. Their relationships with mass concentrations and size distributions were examined. We found that MSE values from the original US IMPROVE algorithm could not represent the actual aerosol characteristics in eastern China: they underestimate the measured ambient scattering coefficient by 36%. MSE values in Shanghai were estimated to be 3.5 ± 0.55 m²/g for ammonium sulfate, 4.3 ± 0.63 m²/g for ammonium nitrate, and 4.5 ± 0.73 m²/g for organic matter. MSEs for the three components increased rapidly with increasing mass concentration at low aerosol mass loading, then stabilized beyond a threshold mass concentration of 12–24 μg/m³. During severe pollution episodes, particle growth from an initial peak diameter of 200–300 nm to a peak diameter of 500–600 nm accounts for the rapid increase in MSEs at high aerosol mass loading; that is, the particle diameter becomes closer to the wavelength of visible light. This study provides insight into aerosol scattering properties at high aerosol concentrations and implies the necessity of localizing MSE values for extinction apportionment, especially in polluted regions.
Fan, L Q; Bailey, D R; Shannon, N H
1995-02-01
Postweaning gain performance and individual feed intake of 271 Hereford and 263 Angus bulls were recorded during three 168-d test periods from 1984 to 1986. Each breed was composed of two lines, and within each breed bulls were fed either a high-energy (HD) or a medium-energy (MD) diet. Energy intake was partitioned into energy for maintenance and growth based on predicted individual animal requirements. Estimates of heritability were obtained using restricted maximum likelihood with an individual animal model including fixed effects of year and diet, covariates of initial weight and backfat change by breed, and line effects for the overall data. Bulls fed the HD diet grew faster and had higher metabolizable energy intake per day (MEI), residual feed consumption (RFC), and gross and net feed efficiency (FE and NFE). Estimates of heritability for Hereford and Angus bulls, respectively, were .46 and .16 for 200-d weaning weight (WWT), .16 and .43 for average daily gain (ADG), .19 and .31 for intake per day (MEI), .43 and .45 for yearling weight (YWT), .07 and .23 for RFC, .08 and .35 for FE, and .14 and .28 for NFE. Genetic and phenotypic correlations between MEI and ADG, MEI and YWT, ADG and YWT, ADG and FE, YWT and FE, and FE and NFE were moderately to highly positive for both breeds. Negative genetic and phenotypic correlations between NFE and ADG reflect partial correlations of FE with ADG after accounting for the energy requirement for maintenance. Residual feed consumption was negatively associated with YWT, FE, and NFE, indicating possible genetic improvement.
INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL
Directory of Open Access Journals (Sweden)
T. A. Kharkovskaia
2014-05-01
Full Text Available A method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters is as follows: given an uncertainty constraint on the state values of the system, limits on its initial conditions, and a set of admissible values for the vector of unknown parameters and inputs, the interval estimates of the system state variables must contain the actual state over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are shown. They are: boundedness of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure cooperativity of the interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of such interval estimation systems include robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology, environmental systems and processes, mechatronics, and robotics.
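A minimal one-dimensional illustration of the idea, not the paper's reactor model: for a scalar system x' = -a·x + u with a known only to lie in [a_lo, a_hi], monotonicity (cooperativity is automatic in 1-D) lets lower and upper trajectories bracket the true state:

```python
import numpy as np

def interval_envelope(x0_lo, x0_hi, a_lo, a_hi, u, dt=0.01, steps=1000):
    """Guaranteed state bounds for x' = -a*x + u with unknown a in
    [a_lo, a_hi]. Assumes u >= 0 and x0_lo >= 0, so states stay nonnegative
    and the scalar dynamics are monotone (cooperative in 1-D)."""
    lo, hi = x0_lo, x0_hi
    los, his = [lo], [hi]
    for _ in range(steps):
        lo += dt * (-a_hi * lo + u)  # fastest decay bounds from below
        hi += dt * (-a_lo * hi + u)  # slowest decay bounds from above
        los.append(lo)
        his.append(hi)
    return np.array(los), np.array(his)
```

Any trajectory generated with a in [a_lo, a_hi] and x(0) in [x0_lo, x0_hi] stays inside the envelope, which is the interval-observer property the abstract describes.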
Schickling, A.; Pinto, F.; Schween, J.; Damm, A.; Crewell, S.; Rascher, U.
2012-12-01
Remote sensing offers a unique possibility for spatio-temporal investigation of carbon uptake by plant photosynthesis, commonly referred to as gross primary production. Remote sensing approaches used to quantify gross primary production are based on the light-use efficiency model of Monteith, which relates gross primary production to the absorbed photosynthetically active radiation and the efficiency with which plants utilize this radiation for photosynthesis, i.e., the light-use efficiency. Assuming that the absorbed photosynthetically active radiation can be reliably derived from optical measurements, estimating the highly variable light-use efficiency remains challenging. In recent years, however, several studies have indicated sun-induced chlorophyll fluorescence as a good proxy for light-use efficiency. Here we present a novel experimental setup to quantify spatio-temporal patterns of light-use efficiency based on monitoring canopy sun-induced chlorophyll fluorescence. A fully automated long-term monitoring system was developed to record diurnal courses of sun-induced fluorescence of different agricultural crops and grassland. Time series from the automated system were used to evaluate temporal variations of sun-induced fluorescence and gross primary production in different ecosystems. In the near future, the spatial distribution of sun-induced chlorophyll fluorescence at regional scale will be evaluated using a novel hyperspectral imaging spectrometer (HyPlant) operated from an airborne platform. We will present preliminary results from this novel spectrometer obtained during the 2012 vegetation period.
Coverage Probability of Wald Interval for Binomial Parameters
Chen, Xinjia
2008-01-01
In this paper, we develop an exact method for computing the minimum coverage probability of the Wald interval for the estimation of binomial parameters. A similar approach can be used for other types of confidence intervals.
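The exact coverage computation is straightforward to sketch: for a fixed p, sum the binomial probabilities of the outcomes k whose Wald interval covers p. A grid scan over p then approximates the minimum coverage (the paper computes the exact minimum):

```python
import math

def wald_coverage(n, p, z=1.96):
    """Exact coverage probability of the Wald interval at a given p."""
    cover = 0.0
    for k in range(n + 1):
        phat = k / n
        half = z * math.sqrt(phat * (1 - phat) / n)
        if phat - half <= p <= phat + half:
            cover += math.comb(n, k) * p ** k * (1 - p) ** (n - k)
    return cover

# grid scan over p; the minimum is far below the nominal 95% level
min_cov = min(wald_coverage(30, p / 1000) for p in range(1, 1000))
```

The dips near p = 0 and p = 1, where the interval collapses to a point when k = 0 or k = n, are what an exact minimum-coverage computation exposes.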
DEFF Research Database (Denmark)
Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente
2003-01-01
Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected b...
African Journals Online (AJOL)
Buchi
optimal interval after which they can conceive, especially if the previous birth was by caesarean ... after termination of pregnancy (as early as 14 days) than do women who deliver at term. Majority of these .... Combined Oral Contraceptive Pills (COCPs) are commenced at 6 months post partum in a breast-feeding mother because the ...
Indian Academy of Sciences (India)
We identify a subclass of timed automata called product interval automata and develop its theory. These automata consist of a network of timed agents with the key restriction being that there is just one clock for each agent and the way the clocks are read and reset is determined by the distribution of shared actions across ...
Indian Academy of Sciences (India)
in both logical and language theoretic terms. We also show that product interval automata are expressive enough to model the timed behaviour of asynchronous digital circuits. Keywords. Timed automata; distributed systems; logic. 1. Introduction. Timed automata as formulated by Alur & Dill (1994) have become a canonical ...
Directory of Open Access Journals (Sweden)
Daud Jones Kachamba
2017-06-01
Full Text Available Applications of unmanned aircraft systems (UASs) to assist in forest inventories have provided promising results in biomass estimation for different forest types. Recent studies demonstrating the use of different types of remotely sensed data to assist in biomass estimation have shown that the accuracy and precision of estimates are influenced by the size of the field sample plots used to obtain reference values for biomass. The objective of this case study was to assess the influence of sample plot size on the efficiency of UAS-assisted biomass estimates in the dry tropical miombo woodlands of Malawi. The results of a design-based field sample inventory assisted by three-dimensional point clouds obtained from aerial imagery acquired with a UAS showed that the root mean square errors as well as the standard error estimates of mean biomass decreased as sample plot sizes increased. Furthermore, relative efficiency values over different sample plot sizes were above 1.0 in a design-based and model-assisted inferential framework, indicating that UAS-assisted inventories were more efficient than purely field-based inventories. The results on relative costs for UAS-assisted and purely field-based sample plot inventories revealed a trade-off between inventory costs and required precision. For example, in our study, if a standard error of less than approximately 3 Mg ha⁻¹ was targeted, then a UAS-assisted forest inventory should be applied to ensure more cost-effective and precise estimates. Future studies should therefore focus on finding optimum plot sizes for particular applications, for example in projects under the Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation, sustainable management of forests and enhancement of carbon stocks (REDD+) mechanism, at different geographical scales.
Energy Technology Data Exchange (ETDEWEB)
Stuehrenberg, Lowell; Johnson, Orlay W.
1990-03-01
During 1988, the National Marine Fisheries Service (NMFS) began a 2-year study to address possible sources of error in determining collection efficiency at McNary Dam. We addressed four objectives: determine whether fish from the Columbia and Snake Rivers mix as they migrate to McNary Dam, determine whether Columbia and Snake River stocks are collected at the same rates, assess whether the time of day fish are released influences their recovery rate, and determine whether guided fish used in collection efficiency estimates tend to bias results. 7 refs., 12 figs., 4 tabs.
Geel, C.; Versluis, W.; Snel, J.F.H.
1997-01-01
The relation between photosynthetic oxygen evolution and Photosystem II electron transport was investigated for the marine algae Phaeodactylum tricornutum, Dunaliella tertiolecta, Tetraselmis sp., Isochrysis sp. and Rhodomonas sp. The rate of Photosystem II electron transport was estimated
Confidence Intervals for Cronbach's Coefficient Alpha Values
Koning, Alex; Franses, Philip Hans
2003-01-01
Coefficient Alpha, which is widely used in empirical research, estimates the reliability of a test consisting of parallel items. In practice it is difficult to compare values of alpha across studies as it depends on the number of items used. In this paper we provide a simple solution, which amounts to computing the confidence intervals of an alpha, as these intervals automatically account for differences across the numbers of items. We also give appropriate statistics to test for ...
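A sketch of the idea: compute alpha and a Feldt-type F-distribution confidence interval, one classical interval under the parallel-items model. Whether this matches the authors' exact construction is an assumption, so treat it as illustrative:

```python
import numpy as np
from scipy.stats import f as f_dist

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_subjects, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def alpha_ci(scores, level=0.95):
    """Feldt-type CI: (1 - alpha)/(1 - alpha_hat) ~ F(n-1, (n-1)(k-1))
    under the parallel-items model (a sketch, assumption noted above)."""
    n, k = np.asarray(scores).shape
    a = cronbach_alpha(scores)
    df1, df2 = n - 1, (n - 1) * (k - 1)
    g = 1 - level
    lower = 1 - (1 - a) * f_dist.ppf(1 - g / 2, df1, df2)
    upper = 1 - (1 - a) * f_dist.ppf(g / 2, df1, df2)
    return a, (lower, upper)
```

Because the interval is driven by (1 - alpha_hat) and the degrees of freedom, tests with different numbers of items yield comparable intervals, which is the point the abstract makes.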
Applications of interval computations
Kreinovich, Vladik
1996-01-01
Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...
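The fundamental operations mentioned above, arithmetic on sets of possible results, can be sketched in a few lines. Directed rounding, which a real verified-computation library must add to make the bounds rigorous in floating point, is omitted:

```python
# Elementary interval arithmetic: each operation returns an interval that
# contains every possible result of applying the operation to members of the
# operand intervals -- the core idea behind automatic result verification.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
y = Interval(-1.0, 3.0)
z = x * y + x   # every true value of x*y + x lies in z
```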
Directory of Open Access Journals (Sweden)
Seog-Chan Oh
2014-09-01
Full Text Available The car manufacturing industry, one of the largest energy-consuming industries, has been making a considerable effort to improve its energy intensity by implementing energy efficiency programs, in many cases supported by government research or financial programs. While many car manufacturers claim that they have made substantial progress in energy efficiency improvement over the past years through their energy efficiency programs, the objective measurement of energy efficiency improvement has not been studied, due to the lack of suitable quantitative methods. This paper proposes stochastic and deterministic frontier benchmarking models, such as the stochastic frontier analysis (SFA) model and the data envelopment analysis (DEA) model, to measure the effectiveness of energy saving initiatives in terms of the technical improvement of energy efficiency for the automotive industry, particularly vehicle assembly plants. Illustrative examples of the application of the proposed models are presented and demonstrate the overall benchmarking process to determine best-practice frontier lines and to measure technical improvement based on the magnitude of frontier line shifts over time. Log-likelihood ratio and Spearman rank-order correlation coefficient tests are conducted to determine the significance of the SFA model and its consistency with the DEA model. ENERGY STAR® EPI (Energy Performance Index) values are also calculated.
Directory of Open Access Journals (Sweden)
Edinam Dope Setsoafia
2017-01-01
Full Text Available This study evaluated the profit efficiency of artisanal fishing in the Pru District of Ghana by explicitly computing profit efficiency levels, identifying the sources of profit inefficiency, and examining the constraints on artisanal fisheries. Cross-sectional data were obtained from 120 small-scale fishing households using a semi-structured questionnaire. The stochastic profit frontier model was used to compute profit efficiency levels and identify the determinants of profit inefficiency, while the Garrett ranking technique was used to rank the constraints. The average profit efficiency level was 81.66%, which implies that about 82% of the prospective maximum profit was gained due to production efficiency; only 18% of the potential profit was lost to the fishers' inefficiency. Also, the age of the household head and household size increase the inefficiency level, while experience in artisanal fishing tends to decrease it. From the Garrett ranking, access to credit facilities to fully operate the small-scale fishing business was ranked as the most pressing issue, followed by unstable prices, while perishability was ranked last among the constraints. The study therefore recommends that group formation be encouraged to enable easy access to loans and contract sales to boost profitability.
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2016-04-01
An ESPRIT-like (estimation of signal parameters via rotational invariance techniques) algorithm is proposed to estimate the direction of arrival and direction of departure for bistatic multiple-input multiple-output (MIMO) radar. The properties of a noncircular signal and Euler's formula are first exploited to establish real-valued bistatic MIMO radar array data, composed of sine and cosine components. Then the receiving/transmitting selection matrices are constructed to obtain the receiving/transmitting rotational invariance factors. Since the rotational invariance factor is a cosine function, symmetrical mirror-angle ambiguity may occur. Finally, a maximum likelihood function is used to avoid the estimation ambiguities. Compared with the existing ESPRIT, the proposed algorithm saves about 75% of the computational load owing to its real-valued formulation. Simulation results confirm the effectiveness of the ESPRIT-like algorithm.
Bootstrap confidence intervals for the process capability index under half-logistic distribution
Wararit Panichkitkosolkul
2012-01-01
This study concerns the construction of bootstrap confidence intervals for the process capability index in the case of the half-logistic distribution. The bootstrap confidence intervals applied consist of the standard bootstrap confidence interval, the percentile bootstrap confidence interval and the bias-corrected percentile bootstrap confidence interval. Using Monte Carlo simulations, the estimated coverage probabilities and average widths of the bootstrap confidence intervals are compared, with results showing ...
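The percentile variant among the intervals listed above can be sketched as follows, shown for the simple index Cp = (USL - LSL)/(6s); the half-logistic setting and the bias-corrected variant are omitted:

```python
import numpy as np

def cp_index(x, lsl, usl):
    """Process capability index Cp = (USL - LSL) / (6 * s)."""
    return (usl - lsl) / (6.0 * np.std(x, ddof=1))

def percentile_bootstrap_ci(x, lsl, usl, b=2000, level=0.95, seed=1):
    """Percentile bootstrap CI for Cp: resample the data with replacement,
    recompute the index, and take the empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.array([cp_index(rng.choice(x, size=n, replace=True), lsl, usl)
                      for _ in range(b)])
    g = (1.0 - level) / 2.0
    return np.quantile(stats, g), np.quantile(stats, 1.0 - g)
```

The standard and bias-corrected variants differ only in how the bootstrap distribution is turned into interval endpoints, which is exactly what the Monte Carlo comparison in the study evaluates.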
DEFF Research Database (Denmark)
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given r...
Arsenault, Clement
1998-01-01
Discusses the adoption of the pinyin Romanization standard over Wade-Giles and considers the impact on retrieval in online library catalogs. Describes an investigation that tested three factors that could influence retrieval efficiency: the number of usable syllables, the average number of letters per syllable, and users' familiarity with the…
D-Optimal and D-Efficient Equivalent-Estimation Second-Order Split-Plot Designs
H. Macharia (Harrison); P.P. Goos (Peter)
2010-01-01
Industrial experiments often involve factors that are hard to change or costly to manipulate, which makes it undesirable to use a complete randomization. In such cases, the split-plot design structure is a cost-efficient alternative that reduces the number of independent settings of the
Rigorous Verification for the Solution of Nonlinear Interval System ...
African Journals Online (AJOL)
We survey a general method for solving nonlinear interval systems of equations. In particular, we pay special attention to the computational aspects of linear interval systems, since the bulk of the computation is done during the stage of computing an outer estimation of the enclosing linear interval systems. The height of our ...
Ruette, Sylvie
2017-01-01
The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...
Total-Factor Energy Efficiency in BRI Countries: An Estimation Based on Three-Stage DEA Model
Directory of Open Access Journals (Sweden)
Changhong Zhao
2018-01-01
Full Text Available The Belt and Road Initiative (BRI) is showing great influence and leadership in international energy cooperation. Based on the three-stage DEA model, total-factor energy efficiency (TFEE) in 35 BRI countries in 2015 is measured in this article. The three-stage DEA model can eliminate environmental-variable and random errors, which makes its results better than those of the traditional DEA model. When environmental-variable and random errors were eliminated, the mean value of TFEE declined, demonstrating that the TFEE of the whole sample group was overestimated because of external environmental impacts and random errors. The TFEE indicators of high-income countries such as South Korea, Singapore, Israel and Turkey are 1, placing them on the efficiency frontier. The TFEE indicators of Russia, Saudi Arabia, Poland and China are over 0.8, while those of Uzbekistan, Ukraine, South Africa and Bulgaria are at a low level. The potential for energy saving and emissions reduction is great in countries with low TFEE indicators. Because of the gap in energy efficiency, it is necessary to distinguish between countries in energy technology options, development planning and regulation among BRI countries.
Kozai, Toyoki
2013-01-01
Extensive research has recently been conducted on plant factory with artificial light, which is one type of closed plant production system (CPPS) consisting of a thermally insulated and airtight structure, a multi-tier system with lighting devices, air conditioners and fans, a CO2 supply unit, a nutrient solution supply unit, and an environment control unit. One of the research outcomes is the concept of resource use efficiency (RUE) of CPPS. This paper reviews the characteristics of the CPPS compared with those of the greenhouse, mainly from the viewpoint of RUE, which is defined as the ratio of the amount of the resource fixed or held in plants to the amount of the resource supplied to the CPPS. It is shown that the use efficiencies of water, CO2 and light energy are considerably higher in the CPPS than those in the greenhouse. On the other hand, there is much more room for improving the light and electric energy use efficiencies of CPPS. Challenging issues for CPPS and RUE are also discussed.
Legg, P A; Rosin, P L; Marshall, D; Morgan, J E
2013-01-01
Mutual information (MI) is a popular similarity measure for performing image registration between different modalities. MI makes a statistical comparison between two images by computing the entropy from the probability distribution of the data. Therefore, to obtain an accurate registration it is important to have an accurate estimation of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, with the aim of improving the estimation by means of optimal histogram bin size selection. This provokes the common question of how many bins should actually be used when constructing a histogram, to which there is no definitive answer. The question has received little attention in the MI literature, and yet the issue is critical to the effectiveness of the algorithm. The purpose of this paper is to highlight this fundamental element of the MI algorithm. We present a comprehensive study that introduces methods from the statistics literature and incorporates them into image registration. We demonstrate this work for the registration of multi-modal retinal images: colour fundus photographs and scanning laser ophthalmoscope images. The registration of these modalities offers significant enhancement to early glaucoma detection; however, traditional registration techniques fail to perform sufficiently well. We find that adaptive probability density estimation heavily impacts registration accuracy and runtime, improving over traditional binning techniques. Copyright © 2013 Elsevier Ltd. All rights reserved.
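The bin-count sensitivity discussed above is easy to demonstrate: a histogram-based MI estimate is biased upward, and for independent data the estimate inflates as the number of bins grows. A minimal sketch on synthetic data (not the retinal images used in the paper):

```python
import numpy as np

def mutual_information(x, y, bins):
    """Histogram-based MI estimate (in nats) between two 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)   # strongly dependent on x
z = rng.normal(size=5000)             # independent of x

mi_xy = mutual_information(x, y, bins=16)   # large: real dependence
mi_xz = mutual_information(x, z, bins=16)   # near zero, inflated only by bias
# the estimate for independent data grows with the bin count (upward bias)
bias_demo = mutual_information(x, z, bins=64) > mutual_information(x, z, bins=4)
```

The last line is the crux of the paper's question: with too many bins, spurious MI appears even between unrelated images, which is why adaptive bin selection matters.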
Interval Mapping of Multiple Quantitative Trait Loci
Jansen, Ritsert C.
1993-01-01
The interval mapping method is widely used for the mapping of quantitative trait loci (QTLs) in segregating generations derived from crosses between inbred lines. The efficiency of detecting and the accuracy of mapping multiple QTLs by using genetic markers are much increased by employing multiple
A Gompertz model for birth interval analysis.
Ross, J A; Madhavan, Shantha
1981-11-01
During recent years birth intervals have been analysed on a life table basis. This method retains both closed and open intervals, and so reflects behaviour that deliberately avoids the next birth entirely. When life tables are prepared separately for each birth order, markedly different patterns of movement toward the next birth can appear from one parity to the next. This is illustrated for Korean survey data, with historical trends given across marriage cohorts. A Gompertz model is found to closely fit the family of curves showing the cumulative proportion giving birth within each interval. Its three parameters have direct intuitive interpretations: one equals the parity progression ratio, and the other two control the pace of childbearing before and after the point of peak activity within the interval. The model is useful for interpolation and projection, and provides an efficient summary of the otherwise cumbersome detail given in a life table. Testing against additional data sets is suggested.
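The three-parameter form described above can be sketched under the common Gompertz parameterization G(t) = a·exp(−b·exp(−c·t)), where a is the asymptote (the parity progression ratio) and b, c govern pace; the data below are synthetic, not the Korean survey values:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """a: asymptote (parity progression ratio); b, c: pace parameters."""
    return a * np.exp(-b * np.exp(-c * t))

# synthetic cumulative proportions having the next birth by month t
# (illustrative numbers only, not the Korean survey data)
t = np.arange(6.0, 61.0, 6.0)
rng = np.random.default_rng(1)
obs = gompertz(t, 0.85, 8.0, 0.12) + rng.normal(scale=0.005, size=t.size)

params, _ = curve_fit(gompertz, t, obs, p0=(0.9, 5.0, 0.1))
a_hat, b_hat, c_hat = params   # a_hat recovers the progression ratio
```

This shows why the model summarizes a life table efficiently: twelve tabulated proportions collapse to three interpretable numbers.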
Direct Interval Forecasting of Wind Power
DEFF Research Database (Denmark)
Wan, Can; Xu, Zhao; Pinson, Pierre
2013-01-01
This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness, without prior knowledge of forecasting errors. The proposed approach has proved to be highly efficient and reliable in preliminary case studies using real-world wind farm data, indicating a high potential for practical application.
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second-law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS, developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst estimated the power consumption of partially closed and completely closed regenerable life support systems as 3.5 kW/individual and 10-12 kW/individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general, the first-law efficiency of a system is usually greater than 50 percent, while from the literature, the second-law efficiency is usually about 10 percent. The second-law efficiency of a system indicates the percentage of energy degraded as irreversibilities within the process, and its estimation offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the
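The lost-work idea above can be illustrated with the standard Gouy-Stodola relation, W_lost = T0 · S_gen, from which a second-law efficiency follows as minimum (reversible) work over actual work. The numbers below are illustrative only, not taken from the ASPEN ECLSS model:

```python
# Illustrative numbers only (not from the ECLSS model in the paper).
T0 = 298.15        # K, dead-state (surroundings) temperature
W_actual = 1000.0  # J, actual work supplied to the unit
S_gen = 3.0        # J/K, entropy generated within the unit

W_lost = T0 * S_gen          # J, Gouy-Stodola lost work
W_min = W_actual - W_lost    # J, reversible (minimum) work requirement
eta_II = W_min / W_actual    # second-law efficiency, here ~10 percent
```

With these values eta_II comes out near 0.105, consistent with the "about 10 percent" second-law efficiency the abstract cites for typical systems.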
Energy Technology Data Exchange (ETDEWEB)
Bengel, F.M.; Nekolla, S.; Schwaiger, M. [Technische Univ. Muenchen (Germany). Nuklearmedizinische Klinik und Poliklinik; Permanetter, B. [Abteilung Innere Medizin, Kreiskrankenhaus Wasserburg/Inn (Germany); Ungerer, M. [Technische Univ. Muenchen (Germany). 1. Medizinische Klinik und Poliklinik
2000-03-01
We studied ten patients with idiopathic dilated cardiomyopathy (DCM) and 11 healthy normals by dynamic PET with ¹¹C-acetate and either tomographic radionuclide ventriculography or cine magnetic resonance imaging. A "stroke work index" (SWI) was calculated by: SWI = systolic blood pressure x stroke volume/body surface area. To estimate myocardial efficiency, a "work-metabolic index" (WMI) was then obtained as follows: WMI = SWI x heart rate/k(mono), where k(mono) is the washout constant for ¹¹C-acetate derived from mono-exponential fitting. In DCM patients, left ventricular ejection fraction was 19% ± 10% and end-diastolic volume was 92 ± 28 ml/m² (vs 64% ± 7% and 55 ± 8 ml/m² in normals, P<0.001). Myocardial oxidative metabolism, reflected by k(mono), was significantly lower compared with that in normals (0.040 ± 0.011/min vs 0.060 ± 0.015/min; P<0.003). The SWI (1674 ± 761 vs 4736 ± 895 mmHg x ml/m²; P<0.001) and the WMI as an estimate of efficiency (2.98 ± 1.30 vs 6.20 ± 2.25 x 10⁶ mmHg x ml/m²; P<0.001) were lower in DCM patients, too. Overall, the WMI correlated positively with ejection parameters (r=0.73, P<0.001 for ejection fraction; r=0.93, P<0.001 for stroke volume), and inversely with systemic vascular resistance (r=-0.77; P<0.001). There was a weak positive correlation between WMI and end-diastolic volume in normals (r=0.45; P=0.17), while in DCM patients, a non-significant negative correlation coefficient (r=-0.21; P=0.57) was obtained. In conclusion, non-invasive estimates of oxygen consumption and efficiency in the failing heart were reduced compared with those in normals. Estimates of efficiency increased with increasing contractile performance, and decreased with increasing ventricular afterload. In contrast to normals, the failing heart was not able to respond with an increase in efficiency to increasing ventricular volume.
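The two indices defined above follow directly from their formulas. The input values below are hypothetical, chosen only to land near the reported normal-group means (units as in the abstract, mmHg · ml / m²):

```python
def stroke_work_index(sbp_mmHg, stroke_volume_ml, bsa_m2):
    """SWI = systolic blood pressure x stroke volume / body surface area."""
    return sbp_mmHg * stroke_volume_ml / bsa_m2

def work_metabolic_index(swi, heart_rate_bpm, k_mono_per_min):
    """WMI = SWI x heart rate / k(mono); k(mono) is the 11C-acetate
    washout constant reflecting oxidative metabolism."""
    return swi * heart_rate_bpm / k_mono_per_min

# hypothetical inputs chosen to land near the reported normal-group means
swi = stroke_work_index(120.0, 75.0, 1.9)       # ~4.7e3 mmHg·ml/m²
wmi = work_metabolic_index(swi, 78.0, 0.060)    # ~6.2e6 mmHg·ml/m²
```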
Interval methods: An introduction
DEFF Research Database (Denmark)
Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj
2006-01-01
An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different
Directory of Open Access Journals (Sweden)
Yury G. Odegov
2016-01-01
Full Text Available Under conditions of increasing competition, the problem of improving a company's efficiency becomes significantly more pressing; that efficiency depends directly on the labour efficiency of every employee and on the organization's business model. The aim of this research is therefore to analyze existing indicators for evaluating the performance both of the employee and of the organization's business model. The theoretical basis of the study consists of principles of economic theory and the works of domestic and foreign experts in the field of job evaluation. The information base of the research consists of economic and legal literature dealing with the problems of this study, data published in periodicals, materials of Russian scientific conferences and seminars, and Internet resources. The study applies scientific methods of data collection and analysis and methods for assessing their credibility: quantitative and comparative methods, logical analysis and synthesis. The modern business concern with accumulating shareholder wealth, giving the company stability, growth and efficiency, inevitably leads to the need to create and develop technologies aimed at improving employee productivity. The paper presents a comparative analysis of different approaches to assessing labour effectiveness. Work performance is the ratio of four essential parameters that determine the efficiency of a person's activity: the quantity and quality of the result of work (a service, material product or technology) in relation to the time and cost spent on its production. Employees should be deployed in such a way that they can achieve the planned results in the workplace. The authors note that, to develop technologies for measuring productivity, it is very important to use procedures and indicators that are
Akhmetova, I. G.; Chichirova, N. D.
2017-11-01
When conducting an energy survey of a heat supply enterprise operating several boilers located not far from each other, it is advisable to assess the efficiency of heat supply from each individual boiler and the possibility of reducing energy consumption across the whole enterprise by switching consumers to a more efficient source and closing inefficient boilers. It is necessary to consider the temporal dynamics of prospective load connections and changing market conditions. To solve this problem, the radius of effective heat supply from each thermal energy source can be calculated. The disadvantage of existing methods is their high complexity and the need to collect large amounts of source data and perform a significant amount of computation. When conducting an energy survey of a heat supply enterprise operating a large number of thermal energy sources, a rapid assessment of the effective heating radius is required. Taking into account the specifics and objectives of an energy survey, a method for calculating the effective heat supply radius for use during an energy audit should be based on data openly available to the heat supply organization and should minimize effort, while its results should match those obtained by other methods. To determine the efficiency radius of the Kazan heat supply system, the shares of cost for generation and transmission of thermal energy and the capital investment needed to connect new consumers were determined. The results were compared with the values obtained with previously known methods. The suggested express method makes it possible to determine the effective radius of centralized heat supply from heat sources during energy audits with minimum effort and the required accuracy.
Interval Female Sterilization.
Stuart, Gretchen S; Ramesh, Shanthi S
2018-01-01
Female sterilization is relied on by nearly one in three women aged 35-44 years in the United States. Sterilization procedures are among the most common procedures that obstetrician-gynecologists perform. The most frequent sterilization procedures include postpartum tubal ligation, laparoscopic tubal disruption or salpingectomy, and hysteroscopic tubal occlusion. The informed consent process for sterilization is crucial and requires shared decision-making between the patient and the health care provider. Counseling should include the specific risks and benefits of the specific surgical approaches. Additionally, women should be counseled on the alternatives to sterilization, including intrauterine contraceptives and subdermal contraceptive implants. Complications, including unplanned pregnancy after successful female sterilization, are rare. The objectives of this Clinical Expert Series are to describe the epidemiology of female sterilization, access to postpartum sterilization, advances in interval sterilization techniques, and clinical considerations in caring for women requesting sterilization.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. John Wiley & Sons, Ltd.
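The ANOVA-based idea above can be sketched with a toy patient-level model: the variance of per-run mean outputs decomposes into a between-run component (parameter uncertainty, the quantity PSA cares about) plus the within-run patient-level variance divided by patients per run. This is an illustration of that decomposition, not the authors' formulae:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_patients(theta, n_patients, rng):
    """Toy patient-level model: each patient's net benefit ~ Normal(theta, 1)."""
    return rng.normal(theta, 1.0, size=n_patients)

n_outer, n_inner = 200, 100          # PSA runs x patients per run
run_means, run_vars = [], []
for _ in range(n_outer):
    theta = rng.normal(5.0, 0.5)     # one PSA draw of the uncertain input
    y = simulate_patients(theta, n_inner, rng)
    run_means.append(y.mean())
    run_vars.append(y.var(ddof=1))

run_means = np.asarray(run_means)
within = float(np.mean(run_vars))    # patient-level (first-order) noise
# Var(run mean) = between-run variance + within-run variance / n_inner,
# so the input-uncertainty component can be backed out:
between = run_means.var(ddof=1) - within / n_inner   # true value here: 0.25
```

Subtracting the within-run term is what lets a modest number of patients per run suffice, which is the source of the computational saving the abstract describes.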
Directory of Open Access Journals (Sweden)
Musakhanov A.K.
2012-12-01
Full Text Available This study considers how efficiently young judoists master the technique of fighting for a grip. Two directions are compared: strictly regulated exercise methods and game-based methods. Twenty-eight judoists aged 8-10 years took part in the two-week experiment. In one group, the youths played a game of snatching away ribbons (clothes-pins and bandages) fastened to the opponent's kimono. In the second group, training consisted of drilling basic grips and practice bouts with grip-taking tasks. The training program thus contained both game-based methods and strictly regulated exercise. Comparison of the training programs revealed how specifically each affects the development of different indicators of grip-fighting technique. The combined use of strictly regulated exercise and game-based methods is recommended when training the technique of fighting for a grip.
Directory of Open Access Journals (Sweden)
Riesgo Ana
2012-11-01
Full Text Available Abstract Introduction Traditionally, genomic or transcriptomic data have been restricted to a few model or emerging model organisms, and to a handful of species of medical and/or environmental importance. Next-generation sequencing techniques have the capability of yielding massive amounts of gene sequence data for virtually any species at a modest cost. Here we provide a comparative analysis of de novo assembled transcriptomic data for ten non-model species of previously understudied animal taxa. Results cDNA libraries of ten species belonging to five animal phyla (2 Annelida [including Sipuncula], 2 Arthropoda, 2 Mollusca, 2 Nemertea, and 2 Porifera) were sequenced in different batches with an Illumina Genome Analyzer II (read length 100 or 150 bp), rendering between ca. 25 and 52 million reads per species. Read thinning, trimming, and de novo assembly were performed under different parameters to optimize output. Between 67,423 and 207,559 contigs were obtained across the ten species, post-optimization. Of those, 9,069 to 25,681 contigs retrieved blast hits against the NCBI non-redundant database, and approximately 50% of these were assigned with Gene Ontology terms, covering all major categories, and with similar percentages in all species. Local blasts against our datasets, using selected genes from major signaling pathways and housekeeping genes, revealed high efficiency in gene recovery compared to available genomes of closely related species. Intriguingly, our transcriptomic datasets detected multiple paralogues in all phyla and in nearly all gene pathways, including housekeeping genes that are traditionally used in phylogenetic applications for their purported single-copy nature. Conclusions We generated the first study of comparative transcriptomics across multiple animal phyla (comparing two species per phylum in most cases), established the first Illumina-based transcriptomic datasets for sponge, nemertean, and sipunculan species, and
Riesgo, Ana; Andrade, Sónia C S; Sharma, Prashant P; Novo, Marta; Pérez-Porro, Alicia R; Vahtera, Varpu; González, Vanessa L; Kawauchi, Gisele Y; Giribet, Gonzalo
2012-11-29
Traditionally, genomic or transcriptomic data have been restricted to a few model or emerging model organisms, and to a handful of species of medical and/or environmental importance. Next-generation sequencing techniques have the capability of yielding massive amounts of gene sequence data for virtually any species at a modest cost. Here we provide a comparative analysis of de novo assembled transcriptomic data for ten non-model species of previously understudied animal taxa. cDNA libraries of ten species belonging to five animal phyla (2 Annelida [including Sipuncula], 2 Arthropoda, 2 Mollusca, 2 Nemertea, and 2 Porifera) were sequenced in different batches with an Illumina Genome Analyzer II (read length 100 or 150 bp), rendering between ca. 25 and 52 million reads per species. Read thinning, trimming, and de novo assembly were performed under different parameters to optimize output. Between 67,423 and 207,559 contigs were obtained across the ten species, post-optimization. Of those, 9,069 to 25,681 contigs retrieved blast hits against the NCBI non-redundant database, and approximately 50% of these were assigned with Gene Ontology terms, covering all major categories, and with similar percentages in all species. Local blasts against our datasets, using selected genes from major signaling pathways and housekeeping genes, revealed high efficiency in gene recovery compared to available genomes of closely related species. Intriguingly, our transcriptomic datasets detected multiple paralogues in all phyla and in nearly all gene pathways, including housekeeping genes that are traditionally used in phylogenetic applications for their purported single-copy nature. We generated the first study of comparative transcriptomics across multiple animal phyla (comparing two species per phylum in most cases), established the first Illumina-based transcriptomic datasets for sponge, nemertean, and sipunculan species, and generated a tractable catalogue of annotated genes (or gene
Chen, Y-C; Clegg, R M
2011-10-01
A spectrograph with continuous wavelength resolution has been integrated into a frequency-domain fluorescence lifetime-resolved imaging microscope (FLIM). The spectral information assists in the separation of multiple lifetime components, and helps resolve signal cross-talking that can interfere with an accurate analysis of multiple lifetime processes. This extends the number of different dyes that can be measured simultaneously in a FLIM measurement. Spectrally resolved FLIM (spectral-FLIM) also provides a means to measure more accurately the lifetime of a dim fluorescence component (as low as 2% of the total intensity) in the presence of another fluorescence component with a much higher intensity. A more reliable separation of the donor and acceptor fluorescence signals is possible for Förster resonance energy transfer (FRET) measurements; this allows more accurate determinations of both donor and acceptor lifetimes. By combining the polar plot analysis with spectral-FLIM data, the spectral dispersion of the acceptor signal can be used to derive the donor lifetime - and thereby the FRET efficiency - without iterative fitting. The lifetime relation between the donor and acceptor, in conjunction with spectral dispersion, is also used to separate the FRET pair signals from the donor alone signal. This method can be applied further to quantify the signals from separate FRET pairs, and provide information on the dynamics of the FRET pair between different states. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
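Once the donor lifetime is recovered as described above, FRET efficiency follows from the textbook lifetime relation E = 1 − τ_DA/τ_D (this is the standard formula, not the paper's polar-plot procedure; the lifetimes below are hypothetical):

```python
def fret_efficiency(tau_donor_ns, tau_donor_acceptor_ns):
    """E = 1 - tau_DA / tau_D: donor lifetime with vs. without the acceptor."""
    return 1.0 - tau_donor_acceptor_ns / tau_donor_ns

# hypothetical lifetimes: 2.5 ns donor alone, 1.5 ns in the FRET pair
E = fret_efficiency(2.5, 1.5)   # -> 0.4
```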
Gurauskiene, Inga; Stasiskiene, Zaneta
2011-07-01
Electrical and electronic equipment (EEE) has penetrated everyday life. The EEE industry is characterized by rapid technological change, which in turn prompts consumers to replace EEE in order to keep in step with innovations. These factors reduce an EEE life span and determine the exponential growth of the amount of obsolete EEE as well as EEE waste (e-waste). E-waste management systems implemented in countries of the European Union (EU) are not able to cope with the e-waste problem properly, especially in the new EU member countries. The analysis of particular e-waste management systems is essential in evaluating the complexity of these systems, describing and quantifying the flows of goods throughout the system, and all the actors involved in it. The aim of this paper is to present research on regional agent-based material flow analysis in e-waste management systems, as a measure to reveal the potential points for improvement. Material flow analysis has been performed as a flow of goods (EEE). The study has shown that agent-based EEE flow analysis incorporating a holistic and life cycle thinking approach in national e-waste management systems gives a broader view of the system than the common administrative one. It helps to evaluate the real efficiency of e-waste management systems and to identify relevant impact factors determining the current operation of the system.
Falandysz, Jerzy
2014-12-01
Mushroom Cortinarius caperatus is one of the several edible wild-grown species that are widely collected by fanciers. For specimens collected from 20 spatially and distantly distributed sites in Poland the median values of Hg contents of caps ranged from 0.81 to 2.4 mg kg⁻¹ dry matter and in stipes they were 2.5-fold lower. C. caperatus efficiently accumulates Hg and the median values of the bioconcentration factor for caps range from 120 to 18 and for stipes from 47 to 7.3. This mushroom even when collected at background (uncontaminated) forested areas could be a source of elevated intake of Hg. The irregular consumption of the caps or whole fruiting bodies is not considered to pose a risk. Frequent eating of C. caperatus during the fruiting season by fanciers should be avoided because of possible health risk from Hg. Available data on Hg contents of C. caperatus from several places in Europe are also summarized. Copyright © 2014 Elsevier Inc. All rights reserved.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated water temperature (T; P < 0.01) and depth (D; P = 0.03) had significant effects on capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and on deeper water to maximize capture probability; these water temperature conditions commonly occur during November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any efforts to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
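Assuming Y in the fitted model above is the linear predictor on the logit scale (the standard reading of a logistic regression), the predicted capture probability at a given temperature and depth can be computed as:

```python
import math

def capture_probability(temp_c, depth_m):
    """Fitted logit from the abstract: logit(p) = -1.75 - 0.06*T + 0.10*D."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-y))

p_cold = capture_probability(10.0, 15.0)   # below the 12 C guideline
p_warm = capture_probability(25.0, 15.0)   # summer temperatures
```

At equal depth, the colder condition yields the higher capture probability, which is the basis of the November-March sampling recommendation.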
Contrasting Diversity Values: Statistical Inferences Based on Overlapping Confidence Intervals
MacGregor-Fors, Ian; Payton, Mark E.
2013-01-01
Ecologists often contrast diversity (species richness and abundances) using tests for comparing means or indices. However, many popular software applications do not support performing standard inferential statistics for estimates of species richness and/or density. In this study we simulated the behavior of asymmetric log-normal confidence intervals and determined an interval level that mimics statistical tests with P(α) = 0.05 when confidence intervals from two distributions do not overlap. Our results show that 84% confidence intervals robustly mimic 0.05 statistical tests for asymmetric confidence intervals, as has been demonstrated for symmetric ones in the past. Finally, we provide detailed user-guides for calculating 84% confidence intervals in two of the most robust and highly-used freeware related to diversity measurements for wildlife (i.e., EstimateS, Distance). PMID:23437239
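The 84% rule described above can be checked by simulation: decisions based on non-overlap of two symmetric 84% CIs closely track a two-sample t-test at α = 0.05. A sketch under normal sampling with synthetic data (not the EstimateS/Distance outputs the paper discusses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
z84 = stats.norm.ppf(1 - (1 - 0.84) / 2)   # ~1.405 half-width multiplier

def ci84(x):
    """Symmetric 84% confidence interval for the mean of sample x."""
    se = x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - z84 * se, x.mean() + z84 * se

n_sim, n = 2000, 50
agree = 0
for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.3, 1.0, n)
    lo_a, hi_a = ci84(a)
    lo_b, hi_b = ci84(b)
    no_overlap = hi_a < lo_b or hi_b < lo_a
    reject = stats.ttest_ind(a, b).pvalue < 0.05
    agree += int(no_overlap == reject)
agree_rate = agree / n_sim   # non-overlap decisions track the 0.05 test
```

With equal standard errors the two decision thresholds nearly coincide, which is why the agreement rate is close to 1; the paper's contribution is showing the same holds for asymmetric log-normal intervals.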
Jiao, S; Maltecca, C; Gray, K A; Cassady, J P
2014-06-01
The efficiency of producing salable products in the pork industry is largely determined by costs associated with feed and by the amount and quality of lean meat produced. The objectives of this paper were 1) to explore heritability and genetic correlations for growth, feed efficiency, and real-time ultrasound traits using both pedigree and marker information and 2) to assess accuracy of genomic prediction for those traits using Bayes A prediction models in a Duroc terminal sire population. Body weight at birth (BW at birth) and weaning (BW at weaning) and real-time ultrasound traits, including back fat thickness (BF), muscle depth (MD), and intramuscular fat content (IMF), were collected on the basis of farm protocol. Individual feed intake and serial BW records of 1,563 boars obtained from feed intake recording equipment (FIRE; Osborne Industries Inc., Osborne, KS) were edited to obtain growth, feed intake, and feed efficiency traits, including ADG, ADFI, feed conversion ratio (FCR), and residual feed intake (RFI). Correspondingly, 1,047 boars were genotyped using the Illumina PorcineSNP60 BeadChip. The remaining 516 boars, as an independent sample, were genotyped with a low-density GGP-Porcine BeadChip and imputed to 60K. Magnitudes of heritability from pedigree analysis were moderate for growth, feed intake, and ultrasound traits (ranging from 0.44 ± 0.11 for ADG to 0.58 ± 0.09 for BF); heritability estimates were 0.32 ± 0.09 for FCR but only 0.10 ± 0.05 for RFI. Comparatively, heritability estimates using marker information by Bayes A models were about half of those from pedigree analysis, suggesting "missing heritability." Moderate positive genetic correlations between growth and feed intake (0.32 ± 0.05) and back fat (0.22 ± 0.04), as well as negative genetic correlations between growth and feed efficiency traits (-0.21 ± 0.08, -0.05 ± 0.07), indicate selection solely on growth traits may lead to an undesirable increase in feed intake, back fat, and
Directory of Open Access Journals (Sweden)
Tatyana V. Svishchuk
2017-06-01
manifested in the growth of the savings rate and the number of competitive procedures. Using economic-mathematical methods, the authors proved the hypothesis that the savings rate from procurement procedures increased when the number of participants in auctions and tenders increased. Based on the results of the analysis, proposals for improving the contract procurement system are formulated. Practical significance: the proposed recommendations can be used by state customers and public authorities in procurement procedures and in changes to laws and regulations in the field of public procurement, with the aim of improving the efficiency of the contract system.
Directory of Open Access Journals (Sweden)
José Boaventura Magalhães Rodrigues
2017-06-01
Full Text Available Abstract Although Overall Equipment Effectiveness (OEE) has been proven a useful tool to measure the efficiency of a single piece of equipment in a food processing plant, it is possible to expand its concept to assess the performance of a whole production line assembled in series. This applies to the special case that all pieces of equipment are programmed to run at a throughput similar to that of the system's constraint. Such a procedure has the advantage of allowing simpler data collection to support an operations improvement strategy. This article presents an approach towards continuous improvement adapted for food processing industries that have limited budget and human resources to install and run complex automated data collection and computing systems. It proposes the use of data collected from the packing line to mimic the whole unit's efficiency and suggests a heuristic method based on the geometric properties of OEE to define what parameters shall be targeted to plot an improvement plan. In addition, it is shown how OEE correlates with earnings, allowing for the calculation of the impact of continuous process improvement on business results. The analysis of data collected in a commercial food processing unit made possible: (i) the identification of the major causes of efficiency loss by assessing the performance of packing equipment; (ii) the definition of an improvement strategy to elevate OEE from 53.9% to 74.1%; and (iii) the estimate that by implementing such a strategy an increase of 88% in net income is attained.
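OEE is conventionally the product of availability, performance, and quality rates. The abstract does not report the individual factors, so the levels below are hypothetical values chosen only to approximate the reported totals:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# hypothetical factor levels chosen to approximate the abstract's totals
baseline = oee(0.75, 0.80, 0.898)   # ~0.539
improved = oee(0.85, 0.92, 0.948)   # ~0.741
gain = improved / baseline - 1.0    # relative OEE improvement
```

Because each factor multiplies the others, a moderate gain in all three compounds into the large jump from 53.9% to 74.1% that the study targets.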
Directory of Open Access Journals (Sweden)
Tianxiang Cui
2017-12-01
Full Text Available Accurately quantifying gross primary production (GPP) is of vital importance to understanding the global carbon cycle. Light-use efficiency (LUE) models and process-based models have been widely used to estimate GPP at different spatial and temporal scales. However, large uncertainties remain in quantifying GPP, especially for croplands. Recently, remote measurements of solar-induced chlorophyll fluorescence (SIF) have provided a new perspective to assess actual levels of plant photosynthesis. In the presented study, we evaluated the performance of three approaches, including the LUE-based multi-source data synergized quantitative (MuSyQ) GPP algorithm, the process-based boreal ecosystem productivity simulator (BEPS) model, and the SIF-based statistical model, in estimating the diurnal courses of GPP at a maize site in Zhangye, China. A field campaign was conducted to acquire synchronous far-red SIF (SIF760) observations and flux tower-based GPP measurements. Our results showed that both SIF760 and GPP were linearly correlated with APAR, and the SIF760-GPP relationship was adequately characterized using a linear function. The evaluation of the modeled GPP against the GPP measured from the tower demonstrated that all three approaches provided reasonable estimates, with R2 values of 0.702, 0.867, and 0.667 and RMSE values of 0.247, 0.153, and 0.236 mg m−2 s−1 for the MuSyQ-GPP, BEPS, and SIF models, respectively. This study indicated that the BEPS model simulated GPP best due to its efficiency in describing the underlying physiological processes of sunlit and shaded leaves. The MuSyQ-GPP model was limited by its simplification of some critical ecological processes and its weakness in characterizing the contribution of shaded leaves. The SIF760-based model demonstrated relatively limited accuracy but showed potential for modeling GPP without dependency on climate inputs in short-term studies.
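The SIF-based statistical model above rests on a linear SIF760-GPP relationship evaluated by R2 and RMSE. A sketch of how such a fit and its diagnostics are computed; the data are synthetic, with an assumed slope, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
sif = rng.uniform(0.5, 2.5, 200)             # synthetic SIF760 retrievals
gpp = 0.4 * sif + rng.normal(0, 0.05, 200)   # assumed linear relation + noise

# Ordinary least-squares fit of GPP on SIF760
slope, intercept = np.polyfit(sif, gpp, 1)
pred = slope * sif + intercept

rmse = np.sqrt(np.mean((gpp - pred) ** 2))
r2 = 1.0 - np.sum((gpp - pred) ** 2) / np.sum((gpp - gpp.mean()) ** 2)
```

The same two diagnostics reported in the abstract (R2, RMSE) fall out of the residuals of this regression.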
Prediction and tolerance intervals for dynamic treatment regimes.
Lizotte, Daniel J; Tahmasebi, Arezoo
2017-08-01
We develop and evaluate tolerance interval methods for dynamic treatment regimes (DTRs) that can provide more detailed prognostic information to patients who will follow an estimated optimal regime. Although the problem of constructing confidence intervals for DTRs has been extensively studied, prediction and tolerance intervals have received little attention. We begin by reviewing in detail different interval estimation and prediction methods and then adapt them to the DTR setting. We illustrate some of the challenges associated with tolerance interval estimation stemming from the fact that we do not typically have data that were generated from the estimated optimal regime. We give an extensive empirical evaluation of the methods, discuss several practical aspects of method choice, and present an example application using data from a clinical trial. Finally, we discuss future directions within this important emerging area of DTR research.
Confidence intervals in Flow Forecasting by using artificial neural networks
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated, and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval approach, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the values of crucial parameters, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures, and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input
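The re-sampling step described above (ascending sorting of the prediction errors, then trimming the tails symmetrically in probability) can be sketched as follows; the Gaussian errors are only a stand-in for real flow-prediction residuals:

```python
import numpy as np

def resampling_interval(errors, confidence=0.90):
    """Empirical, symmetric-in-probability interval of prediction errors:
    sort the errors, keep the middle `confidence` mass, reject equal tails."""
    e = np.sort(np.asarray(errors, dtype=float))
    tail = (1.0 - confidence) / 2.0
    return np.quantile(e, tail), np.quantile(e, 1.0 - tail)

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 1.0, 5000)   # stand-in for (observed - predicted) flows
lo, hi = resampling_interval(errors, confidence=0.90)
```

Adding `(lo, hi)` to each point forecast turns the ANN output into an interval forecast at the chosen confidence level.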
Energy Technology Data Exchange (ETDEWEB)
Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com [School of Quantitative Sciences, UUM College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)
2015-12-11
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable ones (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an actual estimate of a firm's efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding its distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model is used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
Directory of Open Access Journals (Sweden)
John W. Jones
2015-09-01
Full Text Available The U.S. Geological Survey is developing new Landsat science products. One, named Dynamic Surface Water Extent (DSWE), is focused on the representation of ground surface inundation as detected in cloud-/shadow-/snow-free pixels for scenes collected over the U.S. and its territories. Characterization of DSWE uncertainty to facilitate its appropriate use in science and resource management is a primary objective. A unique evaluation dataset developed from data made publicly available through the Everglades Depth Estimation Network (EDEN) was used to evaluate one candidate DSWE algorithm that is relatively simple, requires no scene-based calibration data, and is intended to detect inundation in the presence of marshland vegetation. A conceptual model of expected algorithm performance in vegetated wetland environments was postulated, tested and revised. Agreement scores were calculated at the level of scenes and vegetation communities, vegetation index classes, water depths, and individual EDEN gage sites for a variety of temporal aggregations. Landsat Archive cloud cover attribution errors were documented. Cloud cover had some effect on model performance. Error rates increased with vegetation cover. Relatively low error rates for locations of little/no vegetation were unexpectedly dominated by omission errors due to variable substrates and mixed pixel effects. Examined discrepancies between satellite and in situ modeled inundation demonstrated the utility of such comparisons for EDEN database improvement. Importantly, there seems no trend or bias in candidate algorithm performance as a function of time or general hydrologic conditions, an important finding for long-term monitoring. The developed database and knowledge gained from this analysis will be used for improved evaluation of candidate DSWE algorithms as well as other measurements made on Everglades surface inundation, surface water heights and vegetation using radar, lidar and hyperspectral instruments
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Second, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have promulgated fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks, and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance with respect to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize, and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block, and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Interval Management Display Design Study
Baxley, Brian T.; Beyer, Timothy M.; Cooke, Stuart D.; Grant, Karlus A.
2014-01-01
In 2012, the Federal Aviation Administration (FAA) estimated that U.S. commercial air carriers moved 736.7 million passengers over 822.3 billion revenue-passenger miles. The FAA also forecasts, in that same report, an average annual increase in passenger traffic of 2.2 percent per year for the next 20 years, which amounts to roughly one-and-a-half times today's aircraft operations and passengers by the year 2033. If airspace capacity and throughput remain unchanged, then flight delays will increase, particularly at those airports already operating near or at capacity. Therefore, it is critical to create new and improved technologies, communications, and procedures to be used by air traffic controllers and pilots. The National Aeronautics and Space Administration (NASA), the FAA, and the aviation industry are working together to improve the efficiency of the National Airspace System and to reduce the cost of operating in it in several ways, one of which is through the creation of the Next Generation Air Transportation System (NextGen). NextGen is intended to provide airspace users with more precise information about traffic, routing, and weather, as well as improve the control mechanisms within the air traffic system. NASA's Air Traffic Management Technology Demonstration-1 (ATD-1) Project is designed to contribute to the goals of NextGen, and accomplishes this by integrating three NASA technologies to enable fuel-efficient arrival operations into high-density airports. The three NASA technologies and procedures combined in the ATD-1 concept are advanced arrival scheduling, controller decision support tools, and aircraft avionics that enable multiple time-deconflicted and fuel-efficient arrival streams in high-density terminal airspace.
Bootstrap confidence intervals for the process capability index under half-logistic distribution
Directory of Open Access Journals (Sweden)
Wararit Panichkitkosolkul
2012-07-01
Full Text Available This study concerns the construction of bootstrap confidence intervals for the process capability index in the case of the half-logistic distribution. The bootstrap confidence intervals applied consist of the standard bootstrap confidence interval, the percentile bootstrap confidence interval, and the bias-corrected percentile bootstrap confidence interval. Using Monte Carlo simulations, the estimated coverage probabilities and average widths of the bootstrap confidence intervals are compared, with results showing that the estimated coverage probabilities of the standard bootstrap confidence interval are closer to the nominal confidence level than those of the other bootstrap confidence intervals in all situations.
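Two of the interval types compared above, the standard and the percentile bootstrap, can be sketched for a generic statistic; the data and statistic here are illustrative stand-ins, not the half-logistic capability index of the study:

```python
import numpy as np
from statistics import NormalDist

def bootstrap_cis(x, stat, level=0.95, B=2000, seed=0):
    """Standard and percentile bootstrap confidence intervals for stat(x)."""
    rng = np.random.default_rng(seed)
    theta = stat(x)
    # Resample with replacement and recompute the statistic B times
    boot = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(B)])
    alpha = 1.0 - level
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    se = boot.std(ddof=1)
    standard = (theta - z * se, theta + z * se)          # normal-theory form
    percentile = (np.quantile(boot, alpha / 2.0),        # empirical quantiles
                  np.quantile(boot, 1.0 - alpha / 2.0))
    return standard, percentile

x = np.random.default_rng(42).exponential(scale=2.0, size=100)
std_ci, pct_ci = bootstrap_cis(x, np.mean)
```

The bias-corrected percentile variant adjusts the quantile levels by the bootstrap distribution's estimated median bias before reading off the endpoints.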
Interpregnancy interval and risk of autistic disorder.
Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla
2013-11-01
A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined the interpregnancy interval as the time from the birth of the first-born child to the conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with short interpregnancy intervals, a higher proportion of second-born children had autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with the previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
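The adjusted odds ratios above come with Wald-type confidence intervals, formed by exponentiating the logistic-regression coefficient plus or minus a normal quantile times its standard error. A sketch; the coefficient and standard error below are illustrative values chosen only to be roughly consistent with the reported OR of 2.18, not the study's actual estimates:

```python
from math import exp, log
from statistics import NormalDist

def odds_ratio_ci(beta, se, level=0.95):
    """Odds ratio and Wald CI from a logistic-regression log-odds coefficient."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return exp(beta), (exp(beta - z * se), exp(beta + z * se))

# Hypothetical inputs: log-odds coefficient and its standard error
or_hat, (lo, hi) = odds_ratio_ci(beta=log(2.18), se=0.21)
```

Because the interval is symmetric on the log-odds scale, it is asymmetric around the OR itself, which matches the shape of the intervals reported in the abstract.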
Veroustraete, F.; Verstraeten, W. W.
2004-12-01
Carbon emission and fixation fluxes are key variables to guide climate change stakeholders in the use of remediation techniques as well as in the follow-up of the Kyoto protocol. A common approach to estimating forest carbon fluxes is based on the forest harvest inventory approach. However, harvest and logging inventories have their limitations in time and space. Moreover, carbon inventories are limited to the estimation of net primary productivity (NPP). Additionally, when applying inventory-based methods, no information is available on the magnitude of water limitation. Finally, natural forest ecosystems are rarely included in inventory-based methods. To develop a Kyoto Protocol policy support tool, a good perspective towards a generalised and methodologically consistent application is offered by expert systems based on satellite remote sensing. They estimate vegetation carbon fixation using a minimum of meteorological inputs and overcome the limitations mentioned for inventory-based methods. The core module of a typical expert system is a production efficiency model. In our case we used the C-Fix model. C-Fix estimates carbon mass fluxes, e.g., gross primary productivity (GPP), NPP, and net ecosystem productivity (NEP), for various spatial scales and regions of interest (ROIs). Besides meteorological inputs, the C-Fix model is fed with data obtained by vegetation RTF (Radiative Transfer Model) inversion. The inversion is based on the use of look-up tables (LUTs). The LUT allows the extraction of per-pixel biome type (e.g. forests) frequencies and the value of a biophysical variable and its uncertainty at the pixel level. The extraction by RTF inversion also allows a fuzzy land cover classification based on six major biomes. At the same time, fAPAR is extracted and its uncertainty quantified. Based on the biome classification, radiation use efficiencies are stratified according to biome type for use in C-Fix. Water limitation is incorporated both at the GPP level
Interval Entropy and Informative Distance
Directory of Open Access Journals (Sweden)
Fakhroddin Misagh
2012-03-01
Full Text Available The Shannon interval entropy function, a useful dynamic measure of uncertainty for doubly truncated random variables, has been proposed in the reliability literature. In this paper, we show that the interval entropy can uniquely determine the distribution function. Furthermore, we propose a measure of discrepancy between two lifetime distributions over an interval of time, based on Kullback-Leibler discrimination information. We study various properties of this measure, including its connection with residual and past measures of discrepancy and interval entropy, and we obtain its upper and lower bounds.
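The interval entropy in question is the Shannon entropy of a random variable doubly truncated to (t1, t2), with density g = f / (F(t2) - F(t1)). A numerical sketch for an exponential lifetime truncated to an illustrative interval:

```python
import numpy as np

def interval_entropy(pdf, cdf, t1, t2, n=100_000):
    """Shannon entropy of X doubly truncated to (t1, t2):
    H = -integral over (t1, t2) of g*log(g), with g = pdf / (cdf(t2) - cdf(t1))."""
    x = np.linspace(t1, t2, n)
    g = pdf(x) / (cdf(t2) - cdf(t1))          # truncated density
    y = -g * np.log(g)
    # Trapezoid rule over the grid
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))

# Exponential(rate 1) lifetime truncated to (0.5, 2.0) -- an illustrative choice
h = interval_entropy(lambda x: np.exp(-x), lambda t: 1.0 - np.exp(-t), 0.5, 2.0)
```

For this example the entropy can also be obtained in closed form as E[X | t1 < X < t2] + log(F(t2) - F(t1)), which gives about 0.317 and agrees with the numerical value.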
Estimation of efficiency project management
Directory of Open Access Journals (Sweden)
Novotorov Vladimir Yurevich
2011-03-01
Full Text Available In modern conditions, the effectiveness of enterprises depends to an ever greater degree on management methods and forms of doing business. Organizations should choose the management strategy that is most effective for them, taking into account the existing legislation, the concrete conditions of their activity, their financial, economic, and investment potential, and their development strategy. Introducing a common system for planning and implementing the organization's strategy will make it possible to ensure steady development and long-term social and economic growth of companies.
Directory of Open Access Journals (Sweden)
Ashok Sahai
2016-02-01
Full Text Available This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (C.V.) is rather large though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm exploiting the sample C.V. for efficient estimation of the normal mean. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study determining the extent of gain in their relative efficiencies with respect to the usual unbiased estimator (the sample mean, 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the usual unbiased estimator by means of an illustrative empirical simulation study. MATLAB 7.7.0.471 (R2008b) is used in programming this illustrative simulated empirical numerical study. DOI: 10.15181/csat.v4i1.1091
Sample Size for the "Z" Test and Its Confidence Interval
Liu, Xiaofeng Steven
2012-01-01
The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
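The relation the abstract refers to can be made concrete: at the sample size that achieves a given power for a two-sided z test, the CI length follows directly. A sketch with σ = 1; the standardized effect size is an assumed example value:

```python
from statistics import NormalDist

def ci_length_from_power(power, effect, alpha=0.05):
    """Length of the (1 - alpha) CI for a mean (sigma = 1), evaluated at the
    sample size that attains the requested power for a two-sided z test."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1.0 - alpha / 2.0)
    z_b = nd.inv_cdf(power)
    n = ((z_a + z_b) / effect) ** 2      # classical z-test sample-size formula
    return 2.0 * z_a / n ** 0.5          # CI length = 2 * z * sigma / sqrt(n)

# 80% power to detect a standardized effect of 0.5 -> CI length about 0.70
length = ci_length_from_power(power=0.80, effect=0.5)
```

Substituting n out of the two formulas shows the length is 2·z_a·effect/(z_a + z_b), i.e., a function of the power alone once the effect size and alpha are fixed, which is the paper's point.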
Lactation yield: Interval level comparison of milk records for genetic ...
African Journals Online (AJOL)
Milk recording intervals were studied by analysing 1220 lactation records of Friesian x Arsi crossbred cows kept in the south-eastern highlands of Ethiopia. Milk Recording Interval (MRI) comparisons were made at lengths of 15, 30, and 45 days. Accuracy was measured in terms of the percentage difference between actual and estimated ...
Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff
2012-01-01
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
McClaskey, Carolyn M
2017-11-01
Our ability to discriminate between pitch intervals of different sizes is not only an important aspect of speech and music perception, but also a useful means of evaluating higher-level pitch perception. The current study examined how pitch-interval discrimination was affected by the size of the intervals being compared, and by musical training. Using an adaptive procedure, pitch-interval discrimination thresholds were measured for sequentially presented pure-tone intervals with standard intervals of 1 semitone (minor second), 6 semitones (the tri-tone), and 7 semitones (perfect fifth). Listeners were classified into three groups based on musical experience: non-musicians had less than 3 years of informal musical experience, amateur musicians had at least 10 years of experience but no formal music theory training, and expert musicians had at least 12 years of experience with 1 year of formal ear training, and were either currently pursuing or had earned a Bachelor's degree as either a music major or music minor. Consistent with previous studies, discrimination thresholds obtained from expert musicians were significantly lower than those from other listeners. Thresholds also significantly varied with the magnitude of the reference interval and were higher for conditions with a 6- or 7-semitone standard than a 1-semitone standard. These data show that interval-discrimination thresholds are strongly affected by the size of the standard interval. Copyright © 2017 Elsevier B.V. All rights reserved.
Assessment of efficient sampling designs for urban stormwater monitoring.
Leecaster, Molly K; Schiff, Kenneth; Tiefenthaler, Liesl L
2002-03-01
Monitoring programs for urban runoff have not been assessed for effectiveness or efficiency in estimating mass emissions. In order to determine appropriate designs for stormwater, total suspended solids (TSS) and flow information from the Santa Ana River were collected nearly every 15 min for every storm of the 1998 water year. All samples were used to calculate the "true load", and then three within-storm sampling designs (flow-interval, time-interval, and simple random) and five among-storm sampling designs (stratified by size, stratified by season, simple random, simple random of medium and large storms, and the first m storms of the season) were simulated. Using these designs, we evaluated three estimators for storm mass emissions (mean, volume-weighted, and ratio) and three estimators for annual mass emissions (median, ratio, and regular). Designs and estimators were evaluated with respect to accuracy and precision. The optimal strategy was used to determine the appropriate number of storms to sample annually, based upon the confidence interval width for estimates of annual mass emissions and concentration. The amount of detectable trend in mass emissions and concentration was determined for sample sizes of 3 and 7. Single storms were most efficiently characterized (small bias and standard error) by taking 12 samples following a flow-interval schedule and using a volume-weighted estimator of mass emissions. The ratio estimator, when coupled with a simple random sample of medium and large storms within a season, most accurately estimated concentration and mass emissions, and had low bias over all of the designs. Sampling seven storms is the most efficient method for attaining a small confidence interval width for annual concentration. Sampling three storms per year allows a 20% trend to be detected in mass emissions or concentration over five years. These results are decreased by 10% by sampling seven storms per year.
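The volume-weighted estimator favored for single storms weights each concentration sample by the water volume it represents. A sketch with evenly spaced samples, mirroring the 15-min sampling described above; the concentration and flow values are hypothetical:

```python
import numpy as np

def volume_weighted_load(conc, flow, dt):
    """Storm mass emission (mg) and volume-weighted mean concentration (mg/L)
    from concentration (mg/L) and flow (L/s) samples spaced dt seconds apart."""
    volumes = np.asarray(flow) * dt                # L discharged per interval
    load = float(np.sum(np.asarray(conc) * volumes))  # mg over the storm
    vw_conc = load / float(volumes.sum())          # mg/L, weighted by volume
    return load, vw_conc

conc = [120.0, 300.0, 180.0, 90.0]   # hypothetical TSS samples (mg/L)
flow = [50.0, 200.0, 120.0, 40.0]    # hypothetical flows (L/s)
load, vwc = volume_weighted_load(conc, flow, dt=900)  # 15-min spacing
```

Note that the volume-weighted mean sits well above the arithmetic mean of the samples here, since the highest concentration coincided with the highest flow.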
Rosa, Filipa; Sales, Kevin C; Cunha, Bernardo R; Couto, Andreia; Lopes, Marta B; Calado, Cecília R C
2015-10-01
Reporter genes are routinely used in every laboratory for molecular and cellular biology for studying heterologous gene expression and general cellular biological mechanisms, such as transfection processes. Although well characterized and broadly implemented, reporter genes present serious limitations, either by involving time-consuming procedures or by presenting possible side effects on the expression of the heterologous gene or even on the general cellular metabolism. Fourier transform mid-infrared (FT-MIR) spectroscopy was evaluated to simultaneously analyze, in a rapid (minutes) and high-throughput mode (using 96-well microplates), the transfection efficiency and the effect of the transfection process on the host cell biochemical composition and metabolism. Semi-adherent HEK and adherent AGS cell lines, transfected with the plasmid pVAX-GFP using Lipofectamine, were used as model systems. Good partial least squares (PLS) models were built to estimate the transfection efficiency, either considering each cell line independently (R(2) ≥ 0.92; RMSECV ≤ 2%) or simultaneously considering both cell lines (R(2) = 0.90; RMSECV = 2%). Additionally, the effect of the transfection process on the HEK cell biochemical and metabolic features could be evaluated directly from the FT-IR spectra. Due to the high sensitivity of the technique, it was also possible to discriminate the effect of the transfection process from that of the transfection reagent on HEK cells, e.g., by the analysis of spectral biomarkers and biochemical and metabolic features. The present results are far beyond what any reporter gene assay or other specific probe can offer for these purposes.
Mathematical Properties on the Hyperbolicity of Interval Graphs
Directory of Open Access Journals (Sweden)
Juan C. Hernández-Gómez
2017-11-01
Full Text Available Gromov hyperbolicity is an interesting geometric property, and so it is natural to study it in the context of geometric graphs. In particular, we are interested in interval and indifference graphs, which are important classes of intersection and Euclidean graphs, respectively. Interval graphs (under a very weak hypothesis) and indifference graphs are hyperbolic. In this paper, we give a sharp bound for their hyperbolicity constants. The main result in this paper is the study of the hyperbolicity constant of every interval graph with edges of length 1. Moreover, we obtain sharp estimates for the hyperbolicity constant of the complement of any interval graph with edges of length 1.
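For small graphs, a hyperbolicity constant can be computed by brute force via the four-point condition; this is a variant of the geodesic-triangle definition studied in the paper, equivalent to it up to a bounded factor. A sketch over a 4-cycle and a path (a tree, hence hyperbolicity 0):

```python
from itertools import combinations

def hyperbolicity(adj):
    """Four-point-condition Gromov hyperbolicity of a finite connected graph,
    given as an adjacency list {vertex: [neighbours]} with unit edge lengths."""
    # All-pairs shortest-path distances by BFS from every vertex.
    d = {}
    for s in adj:
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        d[s] = dist
    best = 0.0
    for x, y, z, w in combinations(adj, 4):
        # The three pairings of the quadruple; delta is half the gap
        # between the largest and second-largest sums.
        sums = sorted((d[x][y] + d[z][w], d[x][z] + d[y][w],
                       d[x][w] + d[y][z]), reverse=True)
        best = max(best, (sums[0] - sums[1]) / 2.0)
    return best

c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle: delta = 1
p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # path (a tree): delta = 0
```

Trees always give 0 under the four-point condition, which is why cycles and chordless structures drive the sharp bounds discussed in the abstract.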
Using Confidence Intervals for Assessing Reliability of Real Tests.
Oosterwijk, Pieter R; van der Ark, L Andries; Sijtsma, Klaas
2017-10-01
Test authors report sample reliability values but rarely consider the sampling error and related confidence intervals. This study investigated the truth of this conjecture for 116 tests with 1,024 reliability estimates (105 pertaining to test batteries and 919 to tests measuring a single attribute) obtained from an online database. Based on 90% confidence intervals, approximately 20% of the initial quality assessments had to be downgraded. For 95% confidence intervals, the percentage was approximately 23%. The results demonstrated that reported reliability values cannot be trusted without considering their estimation precision.
Zhang, Q.; Middleton, E.; Margolis, H.; Drolet, G.; Barr, A.; Black, T.
2008-12-01
We used daily MODIS imagery obtained over 2001-2005 to analyze the seasonal and interannual photosynthetic light use efficiency (LUE) of the Southern Old Aspen (SOA) flux tower site located near the southern limit of the boreal forest in Saskatchewan, Canada. This forest stand extends for at least 3 km in all directions from the flux tower. The MODIS daily reflectance products have a resolution of 500 m at nadir and > 500 m off-nadir. To obtain the spectral characteristics of a standardized land area to compare with tower measurements, we scaled up the nominal 500 m MODIS products to an area of 2.5 km × 2.5 km (5 × 5 MODIS 500 m grid cells). We then used the 5 × 5 scaled-up MODIS products in a coupled canopy-leaf radiative transfer model, PROSAIL-2, to estimate the fraction of photosynthetically active radiation (PAR) absorbed by the photosynthetically active part of the canopy dominated by chlorophyll (FAPARchl) versus that absorbed by the whole canopy (FAPARcanopy). From the tower measurements, we determined 90-minute averages for APAR and LUE for the physiologically active foliage (APARchl, LUEchl) and for the entire canopy (APARcanopy, LUEcanopy). The flux tower measurements of GEP were strongly related to the MODIS-derived estimates of APARchl (r2 = 0.78) but weakly related to APARcanopy (r2 = 0.33). Gross LUE (the slope of GEP:APAR) between 2001 and 2005 for LUEchl was 0.0241 μmol C μmol−1 PPFD, whereas LUEcanopy was 36% lower. Inter-annual variability in growing season (DOY 152-259) LUEchl (μmol C μmol−1 PPFD) ranged from 0.0225 in 2003 to 0.0310 in 2004. The five-year time series of growing season LUEchl corresponded well with both the seasonal phase and amplitude of LUE from the tower measurements. We conclude that LUEchl derived from MODIS observations could provide a useful input to land surface models for improved estimates of ecosystem carbon dynamics.
Exact confidence intervals for channelized Hotelling observer performance
Wunderlich, Adam; Noo, Frederic; Heilbrun, Marta
2013-03-01
Task-based assessments of image quality constitute a rigorous, principled approach to the evaluation of imaging system performance. To conduct such assessments, it has been recognized that mathematical model observers are very useful, particularly for purposes of imaging system development and optimization. One type of model observer that has been widely applied in the medical imaging community is the channelized Hotelling observer (CHO). In the present work, we address the need for reliable confidence interval estimators of CHO performance. Specifically, we observe that a procedure proposed by Reiser for interval estimation of the Mahalanobis distance can be applied to obtain confidence intervals for CHO performance. In addition, we find that these intervals are well-defined with theoretically-exact coverage probabilities, which is a new result not proved by Reiser. The confidence intervals are tested with Monte Carlo simulation and demonstrated with an example comparing x-ray CT reconstruction strategies.
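Reiser's construction inverts the cumulative distribution of the noncentral F statistic arising from Hotelling's T² to bound the noncentrality parameter, which maps to the Mahalanobis distance (and hence CHO detectability) by a known scaling. The following is a hedged sketch of that inversion with illustrative names; it does not claim to reproduce the paper's exact procedure.

```python
# Sketch: invert the noncentral F cdf in the noncentrality parameter.
# f_obs is the observed F statistic with dfn = p channels and
# dfd = N - p - 1; the Mahalanobis distance follows from the
# noncentrality by the usual N1, N2 sample-size scaling.
from scipy import stats, optimize

def noncentrality_ci(f_obs, dfn, dfd, conf=0.95):
    a = (1 - conf) / 2
    cdf0 = stats.f.cdf(f_obs, dfn, dfd)  # cdf at noncentrality zero

    def g(nc, target):
        # ncf.cdf decreases in nc, so each root is unique.
        return stats.ncf.cdf(f_obs, dfn, dfd, nc) - target

    # Upper limit: nc with cdf = a (zero if no nc achieves it).
    hi = optimize.brentq(g, 1e-8, 1e4, args=(a,)) if cdf0 > a else 0.0
    # Lower limit: nc with cdf = 1 - a (zero if f_obs is too small).
    lo = optimize.brentq(g, 1e-8, 1e4, args=(1 - a,)) if cdf0 > 1 - a else 0.0
    return lo, hi
```

Setting the lower limit to zero when no root exists is what keeps the interval well defined for small observed statistics.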
Cabrera-Bosquet, Llorenç; Fournier, Christian; Brichet, Nicolas; Welcker, Claude; Suard, Benoît; Tardieu, François
2016-10-01
Light interception and radiation-use efficiency (RUE) are essential components of plant performance. Their genetic dissections require novel high-throughput phenotyping methods. We have developed a suite of methods to evaluate the spatial distribution of incident light, as experienced by hundreds of plants in a glasshouse, by simulating sunbeam trajectories through glasshouse structures every day of the year; the amount of light intercepted by maize (Zea mays) plants via a functional-structural model using three-dimensional (3D) reconstructions of each plant placed in a virtual scene reproducing the canopy in the glasshouse; and RUE, as the ratio of plant biomass to intercepted light. The spatial variation of direct and diffuse incident light in the glasshouse (up to 24%) was correctly predicted at the single-plant scale. Light interception largely varied between maize lines that differed in leaf angles (nearly stable between experiments) and area (highly variable between experiments). Estimated RUEs varied between maize lines, but were similar in two experiments with contrasting incident light. They closely correlated with measured gas exchanges. The methods proposed here identified reproducible traits that might be used in further field studies, thereby opening up the way for large-scale genetic analyses of the components of plant performance. © 2016 INRA New Phytologist © 2016 New Phytologist Trust.
IBM system/360 assembly language interval arithmetic software
Phillips, E. J.
1972-01-01
Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM System/360.
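As a toy illustration of the operations described (in Python rather than System/360 assembly, and without the outward rounding a rigorous error-analysis implementation requires):

```python
class Interval:
    """Closed interval [lo, hi]. A toy model of interval arithmetic:
    no directed rounding, so not a rigorous error-bounding tool."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtract the opposite endpoints to stay conservative.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product interval spans the extreme endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def hull(self, other):
        """Interval union taken as the convex hull."""
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
```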
Risk prediction of cardiovascular death based on the QTc interval
DEFF Research Database (Denmark)
Nielsen, Jonas B; Graff, Claus; Rasmussen, Peter V
2014-01-01
…1 years, 6647 persons died from cardiovascular causes. Long-term risks of CVD were estimated for subgroups defined by age, gender, cardiovascular disease, and QTc interval categories. In general, we observed an increased risk of CVD for both very short and long QTc intervals. Prolongation of the QTc interval resulted in the worst prognosis for men whereas in women, a very short QTc interval was equivalent in risk to a borderline prolonged QTc interval. The effect of the QTc interval on the absolute risk of CVD was most pronounced in the elderly and in those with cardiovascular disease, whereas the effect was negligible for middle-aged women without cardiovascular disease. The most important improvement in prediction accuracy was noted for women aged 70-90 years. In this subgroup, a total of 9.5% were reclassified (7.2% more accurately vs. 2.3% more inaccurately) within clinically relevant 5-year…
Haematological reference intervals in a multiethnic population.
Ambayya, Angeli; Su, Anselm Ting; Osman, Nadila Haryani; Nik-Samsudin, Nik Rosnita; Khalid, Khadijah; Chang, Kian Meng; Sathar, Jameela; Rajasuriar, Jay Suriar; Yegappan, Subramanian
2014-01-01
Similar to other populations, full blood count (FBC) reference intervals in Malaysia are generally derived from non-Malaysian subjects. However, numerous studies have shown significant differences between and within populations, supporting the need for population-specific intervals. Two thousand seven hundred and twenty-five apparently healthy adults comprising all ages, both genders and three principal races were recruited through voluntary participation. FBC was performed on two analysers, Sysmex XE-5000 and Unicel DxH 800, in addition to blood smears and haemoglobin analysis. Serum ferritin, soluble transferrin receptor and C-reactive protein assays were performed in selected subjects. All parameters of qualified subjects were tested for normality, followed by determination of reference intervals, measures of central tendency and dispersion along with point estimates for each subgroup. Complete data were available for 2440 subjects, of whom 56% (907 women and 469 men) were included in the reference interval calculation. Compared to other populations there were significant differences for haemoglobin, red blood cell count, platelet count and haematocrit in Malaysians. There were differences between men and women, and between younger and older men; unlike in other populations, haemoglobin was similar in younger and older women. However, ethnicity and smoking had little impact. 70% of anaemia in premenopausal women, 24% in postmenopausal women and 20% in men was attributable to iron deficiency. There was excellent correlation between the Sysmex XE-5000 and Unicel DxH 800. Our data confirm the importance of population-specific haematological parameters and support the need for local guidelines rather than adoption of generalised reference intervals and cut-offs.
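The nonparametric percentile method commonly used for such reference intervals (the central 95% of values, with no normality assumption) can be sketched as follows; the function is illustrative, not the study's code:

```python
import numpy as np

def reference_interval(values, central=0.95):
    """Nonparametric reference interval: the central `central`
    fraction of observed values (percentile method)."""
    tail = (1 - central) / 2 * 100
    lo, hi = np.percentile(np.asarray(values, float), [tail, 100 - tail])
    return lo, hi
```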
Fusing photovoltaic data for improved confidence intervals
Directory of Open Access Journals (Sweden)
Ansgar Steland
2017-01-01
Full Text Available Characterizing and testing photovoltaic modules requires carefully made measurements on important variables such as the power output under standard conditions. When additional data is available, which has been collected using a different measurement system and therefore may be of different accuracy, the question arises how one can combine the information present in both data sets. In some cases one even has prior knowledge about the ordering of the variances of the measurement errors, which is not fully taken into account by commonly known estimators. We discuss several statistical estimators to combine the sample means of independent series of measurements, both under the assumption of heterogeneous variances and ordered variances. The critical issue is then to assess the estimator’s variance and to construct confidence intervals. We propose and discuss the application of a new jackknife variance estimator devised by [1] to such photovoltaic data, in order to assess the variability of common mean estimation under heterogeneous and ordered variances in a reliable and nonparametric way. When serial correlations are present, which usually affect the marginal variances, it is proposed to construct a thinned data set by downsampling the series in such a way that autocorrelations are removed or dampened. We propose a data adaptive procedure which downsamples a series at irregularly spaced time points in such a way that the autocorrelations are minimized. The procedures are illustrated by applying them to real photovoltaic power output measurements from two different sunlight flashers. In addition, focusing on simulations governed by real photovoltaic data, we investigate the accuracy of the jackknife approach and compare it with other approaches. Among those is a variance estimator based on Nair’s formula for Gaussian data and, as a parametric alternative, two Bayesian models. We investigate the statistical accuracy of the resulting confidence
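A minimal sketch of the two ingredients discussed: a common-mean estimate that weights the two series by their estimated variances (the classical Graybill-Deal form), and a delete-one jackknife estimate of its variance. Names are illustrative, and the sketch ignores serial correlation, so it is not the paper's estimator.

```python
import numpy as np

def graybill_deal(x, y):
    """Combine two series' sample means, weighting each by the
    inverse estimated variance of its mean (n / s^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    wx = len(x) / x.var(ddof=1)
    wy = len(y) / y.var(ddof=1)
    return (wx * x.mean() + wy * y.mean()) / (wx + wy)

def jackknife_var(x, y):
    """Delete-one jackknife variance of the combined estimate,
    leaving out one observation at a time from either series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    reps = [graybill_deal(np.delete(x, i), y) for i in range(len(x))]
    reps += [graybill_deal(x, np.delete(y, j)) for j in range(len(y))]
    reps = np.asarray(reps)
    k = len(reps)
    return (k - 1) / k * np.sum((reps - reps.mean()) ** 2)
```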
Estimation of sustained peak yield interval of dairy cattle lactation ...
African Journals Online (AJOL)
PC
Y = L + U·X, for X < R1;
Y = L + U·R1 + V·(X − R1), for R1 ≤ X < R2;
Y = L + U·R1 + V·(R2 − R1) + W·(X − R2), for X ≥ R2. (2)
where: L is a constant, U is the slope of the line until the first breakpoint, R1 is the first breakpoint, V is the slope of the line between the first and second breakpoints, R2 is the second breakpoint, and W is the slope of the line after the second breakpoint (Vito, 2003).
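Evaluating a two-breakpoint broken-line model of this kind in code makes the continuity at R1 and R2 explicit. This is an illustrative sketch assuming the standard continuous segmented-regression form, not the article's fitting code:

```python
def broken_line(x, L, U, V, W, r1, r2):
    """Two-breakpoint broken-line model: slope U before r1,
    V between r1 and r2, W after r2; continuous at both breakpoints."""
    if x < r1:
        return L + U * x
    if x < r2:
        return L + U * r1 + V * (x - r1)
    return L + U * r1 + V * (r2 - r1) + W * (x - r2)
```

A sustained-peak interval corresponds to V close to zero between the two breakpoints, with yield rising (U > 0) before R1 and declining (W < 0) after R2.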
Constructing seasonally adjusted data with time-varying confidence intervals
S.J. Koopman (Siem Jan); Ph.H.B.F. Franses (Philip Hans)
2001-01-01
Seasonal adjustment methods transform observed time series into estimated series constructed to show little or no seasonal variation. An advantage of model-based methods is that they can provide confidence intervals around the seasonally
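As a toy stand-in for the model-based adjustment discussed (which, unlike this sketch, also delivers confidence intervals), an additive seasonal profile can be estimated and removed as follows:

```python
import numpy as np

def seasonally_adjust(y, period=12):
    """Naive additive seasonal adjustment: subtract the mean
    seasonal profile, normalized so the effects sum to zero."""
    y = np.asarray(y, float)
    profile = np.array([y[i::period].mean() for i in range(period)])
    profile -= profile.mean()            # seasonal effects sum to ~0
    return y - np.resize(profile, len(y))
```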
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
Likelihood-based confidence intervals in exploratory factor analysis
Oort, F.J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated
Fuzzy efficiency without convexity
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Balezentis, Tomas
2014-01-01
approach builds directly upon the definition of Farrell's indexes of technical efficiency used in crisp FDH. Therefore we do not require the use of fuzzy programming techniques but only utilize ranking probabilities of intervals as well as a related definition of dominance between pairs of intervals. We...
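The crisp Farrell input-efficiency index under FDH that this approach builds on can be sketched as below. Names are illustrative, and the interval/fuzzy extension via dominance probabilities between intervals is not implemented here:

```python
import numpy as np

def fdh_input_efficiency(x0, y0, X, Y):
    """Farrell input efficiency under free disposal hull (FDH):
    the smallest proportional input contraction theta such that
    some observed unit uses at most theta*x0 inputs while producing
    at least y0 outputs. The evaluated unit should be in (X, Y),
    which guarantees theta <= 1."""
    x0, y0 = np.asarray(x0, float), np.asarray(y0, float)
    best = np.inf
    for xj, yj in zip(np.asarray(X, float), np.asarray(Y, float)):
        if np.all(yj >= y0):                     # dominates on outputs
            best = min(best, float(np.max(xj / x0)))
    return best
```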