WorldWideScience

Sample records for biased parameter estimates

  1. Adaptive Unified Biased Estimators of Parameters in Linear Model

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Li-xing Zhu

    2004-01-01

    To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge estimator and the principal component estimator have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions have been proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. By selecting parameters in the condition, we can obtain all double-type conditions in the literature.
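
    As background to the bias-variance trade-off this record discusses, the following minimal sketch (an illustration only, not the paper's unified framework) contrasts ordinary least squares with ridge regression on a nearly collinear design; the sample size and ridge penalty are hypothetical choices.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        x1 = rng.normal(size=n)
        x2 = x1 + 0.01 * rng.normal(size=n)      # nearly collinear regressor
        X = np.column_stack([x1, x2])
        beta_true = np.array([1.0, 1.0])
        y = X @ beta_true + rng.normal(scale=0.5, size=n)

        beta_ols = np.linalg.solve(X.T @ X, X.T @ y)    # unbiased but high variance
        lam = 1.0                                       # hypothetical ridge penalty
        beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)  # biased, lower variance

        print("OLS:  ", beta_ols)
        print("Ridge:", beta_ridge)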

  2. BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Xu Benlian; Wang Zhiquan

    2007-01-01

    According to the biased angles provided by the bistatic sensors, the necessary condition for observability and the Cramér-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variables and biased angles simultaneously. Furthermore, the estimated results can achieve their Cramér-Rao lower bounds.
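
    The paper's dual-filter scheme is not reproduced here; the sketch below uses the simpler textbook alternative of augmenting the state vector with a constant measurement bias, which a single Kalman filter then estimates alongside the state. All matrices and noise levels are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, steps = 1.0, 100
        F = np.array([[1, dt, 0],
                      [0, 1,  0],
                      [0, 0,  1]])          # state = [position, velocity, measurement bias]
        H = np.array([[1.0, 0.0, 1.0]])     # measurement = position + bias + noise
        Q = np.diag([1e-3, 1e-3, 0.0])      # bias modelled as constant
        R = np.array([[0.5]])

        x_true = np.array([0.0, 1.0, 2.0])  # true bias of 2.0 (illustrative)
        x_est = np.zeros(3)
        P = np.eye(3) * 10.0

        for _ in range(steps):
            x_true = F @ x_true + rng.multivariate_normal(np.zeros(3), Q)
            z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
            # predict
            x_est = F @ x_est
            P = F @ P @ F.T + Q
            # update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_est = x_est + K @ (z - H @ x_est)
            P = (np.eye(3) - K @ H) @ P

        print("estimated bias:", x_est[2])   # should approach 2.0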

  3. Bootstrap Co-integration Rank Testing: The Effect of Bias-Correcting Parameter Estimates

    OpenAIRE

    Cavaliere, Giuseppe; Taylor, A. M. Robert; Trenkler, Carsten

    2013-01-01

    In this paper we investigate bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of the co-integration rank tests of Johansen (1996). In order to do so we adapt the framework of Kilian (1998) which estimates the bias in the original parameter estimates using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible imp...
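
    A minimal sketch of the Kilian-style idea summarised above, applied to an AR(1) coefficient rather than the paper's co-integration setting: estimate the bias as the average bias across bootstrap replications simulated from the fitted model, then subtract it. The true coefficient, sample size and number of replications are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)

        def fit_ar1(y):
            """OLS estimate of the AR(1) coefficient (no intercept)."""
            return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

        def simulate_ar1(phi, n, rng):
            y = np.zeros(n)
            for t in range(1, n):
                y[t] = phi * y[t - 1] + rng.normal()
            return y

        y = simulate_ar1(phi=0.9, n=60, rng=rng)     # short sample: OLS is biased downwards
        phi_hat = fit_ar1(y)

        # bootstrap: re-simulate from the fitted model and average the bias of the refits
        B = 500
        boot = np.array([fit_ar1(simulate_ar1(phi_hat, len(y), rng)) for _ in range(B)])
        bias = boot.mean() - phi_hat
        phi_bc = phi_hat - bias                      # bias-corrected estimate

        print("raw:", phi_hat, " bias-corrected:", phi_bc)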

  4. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L.F. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore); Xie, M. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Tang, L.C. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)

    2006-08-15

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as the maximum likelihood estimation (MLE) and the least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates methods for bias correction when the model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is very simple and commonly used by practitioners, and hence such a study is useful. The bias of the LS shape parameter estimator for multiply censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The bias correction methods proposed are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
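
    For orientation, a sketch of the uncorrected least-squares shape estimate from a Weibull probability plot for complete data, using Bernard's median-rank plotting positions; the paper's bias-correction formulas themselves are not reproduced, and the true parameters and sample size below are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        beta_true, eta_true, n = 2.0, 100.0, 15     # shape, scale, small sample (illustrative)
        x = np.sort(eta_true * rng.weibull(beta_true, size=n))

        # median-rank plotting positions and the linearised Weibull CDF:
        # ln(-ln(1 - F)) = beta * ln(x) - beta * ln(eta)
        i = np.arange(1, n + 1)
        F = (i - 0.3) / (n + 0.4)                   # Bernard's approximation to median ranks
        Y = np.log(-np.log(1.0 - F))
        X = np.log(x)

        slope, intercept = np.polyfit(X, Y, 1)      # slope estimates the shape parameter
        beta_ls = slope
        eta_ls = np.exp(-intercept / slope)
        print("LS shape:", beta_ls, " LS scale:", eta_ls)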

  5. Model parameter estimation bias induced by earthquake magnitude cut-off

    Science.gov (United States)

    Harte, D. S.

    2016-02-01

    We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.

  6. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard estimators.

  7. Parameter Estimation with BEAMS in the presence of biases and correlations

    CERN Document Server

    Newling, James; Hlozek, Renée; Kunz, Martin; Smith, Mathew; Varughese, Melvin

    2011-01-01

    The original formulation of BEAMS - Bayesian Estimation Applied to Multiple Species - showed how to use a dataset contaminated by points of multiple underlying types to perform unbiased parameter estimation. An example is cosmological parameter estimation from a photometric supernova sample contaminated by unknown Type Ibc and II supernovae. Where other methods require data cuts to increase purity, BEAMS uses all of the data points in conjunction with their probabilities of being each type. Here we extend the BEAMS formalism to allow for correlations between the data and the type probabilities of the objects as can occur in realistic cases. We show with simple simulations that this extension can be crucial, providing a 50% reduction in parameter estimation variance when such correlations do exist. We then go on to perform tests to quantify the importance of the type probabilities, one of which illustrates the effect of biasing the probabilities in various ways. Finally, a general presentation of the selection...

  8. Estimating and assessing Galileo navigation system satellite and receiver differential code biases using the ionospheric parameter and differential code bias joint estimation approach with multi-GNSS observations

    Science.gov (United States)

    Xue, Junchen; Song, Shuli; Liao, Xinhao; Zhu, Wenyao

    2016-04-01

    With the increased number of Galileo navigation satellites joining the Global Navigation Satellite Systems (GNSS) service, there is a strong need for estimating their differential code biases (DCBs) for high-precision GNSS applications. There have been studies for estimating DCBs based on an external global ionospheric model (GIM) proposed by Montenbruck et al. (2014). In this study, we take a different approach by joining the construction of a GIM and estimating DCBs together with multi-GNSS observations, including GPS, the BeiDou navigation system, and the Galileo navigation system (GAL). This approach takes full advantage of the collective strength of the individual systems while maintaining high solution consistency. Daily GAL DCBs were estimated simultaneously with ionospheric model parameters from 3 months' multi-GNSS observations. The stability of the resulting GAL DCB estimates was analyzed in detail. It was found that the standard deviations (STDs) of all satellite DCBs were less than 0.17 ns. For GAL receivers, the STDs were greater than for the satellites. The difference between DCB estimates obtained over 28 and 7 day intervals was small, with the maximum not exceeding 0.01 ns. In almost all cases, the difference in GAL satellite DCBs between two consecutive days was <0.8 ns. The main conclusion is that, given the stability of the GAL DCBs, only occasional calibration is required. Furthermore, the 30 day-averaged satellite DCBs may satisfy the requirement of high-precision applications depending on the GAL satellite DCBs.

  9. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  10. Estimating Cosmological Parameter Covariance

    CERN Document Server

    Taylor, Andy

    2014-01-01

    We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
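
    The toy Monte Carlo below (purely illustrative dimensions) shows the kind of finite-sampling bias the abstract refers to: the inverse of a sample covariance estimated from a limited number of Gaussian realisations systematically overshoots the true inverse, by the standard Wishart factor (n-1)/(n-p-2), which then propagates into parameter-covariance estimates.

        import numpy as np

        rng = np.random.default_rng(4)
        p, n_real, n_trials = 5, 20, 2000      # data dimension, realisations, MC trials (illustrative)
        true_cov = np.eye(p)

        inv_estimates = np.zeros((p, p))
        for _ in range(n_trials):
            data = rng.multivariate_normal(np.zeros(p), true_cov, size=n_real)
            sample_cov = np.cov(data, rowvar=False)
            inv_estimates += np.linalg.inv(sample_cov)
        inv_estimates /= n_trials

        # the mean inverse sample covariance overshoots the true inverse (identity here)
        print("mean diagonal of estimated inverse:", np.diag(inv_estimates).mean())
        print("expected inflation factor (n-1)/(n-p-2):", (n_real - 1) / (n_real - p - 2))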

  11. Bias and Systematic Change in the Parameter Estimates of Macro-Level Diffusion Models

    OpenAIRE

    Christophe Van den Bulte; Lilien, Gary L.

    1997-01-01

    Studies estimating the Bass model and other macro-level diffusion models with an unknown ceiling feature three curious empirical regularities: (i) the estimated ceiling is often close to the cumulative number of adopters in the last observation period, (ii) the estimated coefficient of social contagion or imitation tends to decrease as one adds later observations to the data set, and (iii) the estimated coefficient of social contagion or imitation tends to decrease systematically as the estim...

  12. Correcting cosmological parameter biases for all redshift surveys induced by estimating and reweighting redshift distributions

    CERN Document Server

    Rau, Markus Michael; Paech, Kerstin; Seitz, Stella

    2016-01-01

    Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by common suboptimal binning techniques and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to $6\,\sigma$ for the dark energy equation of state parameter $w$. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalised resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surve...

  13. How serious can the stealth bias be in gravitational wave parameter estimation?

    CERN Document Server

    Vitale, Salvatore

    2013-01-01

    The upcoming direct detection of gravitational waves will open a window to probing the strong-field regime of general relativity (GR). As a consequence, waveforms that include the presence of deviations from GR have been developed (e.g. in the parametrized post-Einsteinian approach). TIGER, a data analysis pipeline which builds Bayesian evidence to support or question the validity of GR, has been written and tested. In particular, it was shown recently that data from the LIGO and Virgo detectors will make it possible to detect deviations from GR smaller than can be probed with Solar System tests and pulsar timing measurements, or not accessible with conventional tests of GR. However, evidence from several detections is required before a deviation from GR can be confidently claimed. An interesting consequence is that, should GR not be the correct theory of gravity in its strong field regime, using standard GR templates for the matched filter analysis of interferometer data will introduce biases in the gravitational wave m...

  14. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J;

    2007-01-01

    selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D. melanogaster lineage has experienced a reduction in the selection for optimal codon usage. However, the D. melanogaster lineage has also experienced a change in the biological mutation rates relative to D. simulans, in particular, a relative reduction in the mutation rate from A to G and an increase in the mutation rate from C to T. However, neither a reduction in the strength of selection nor a change in the mutational pattern can alone explain all of the data observed in the D. melanogaster lineage. For example, we also confirm previous results showing that the Notch locus has experienced positive...

  15. Bootstrap bias-adjusted GMM estimators

    OpenAIRE

    Ramalho, Joaquim J.S.

    2005-01-01

    The ability of six alternative bootstrap methods to reduce the bias of GMM parameter estimates is examined in an instrumental variable framework using Monte Carlo analysis. Promising results were found for the two bootstrap estimators suggested in the paper.

  16. An estimation of the height system bias parameter N (0) using least squares collocation from observed gravity and GPS-levelling data

    DEFF Research Database (Denmark)

    Sadiq, Muhammad; Tscherning, Carl C.; Ahmad, Zulfiqar

    2009-01-01

    This paper deals with the analysis of gravity anomaly and precise levelling in conjunction with GPS-levelling data for the computation of a gravimetric geoid and an estimate of the height system bias parameter N-o for the vertical datum in Pakistan by means of the least squares collocation technique. The long term objective is to obtain a regional geoid (or quasi-geoid) model using a combination of local data with a high degree and order Earth gravity model (EGM) and to determine a bias (if there is one) with respect to a global mean sea surface. An application of collocation with the optimal covariance parameters has made it possible to obtain gravimetric height anomalies in a global geocentric datum. The residual terrain modelling (RTM) technique has been used in combination with the EGM96 for the reduction and smoothing of the gravity data. A value for the bias parameter N-o has been estimated...

  17. The estimation method of GPS instrumental biases

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A model for estimating the global positioning system (GPS) instrumental biases and methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results of GPS instrumental biases, the relative instrumental biases of satellite and receiver, and total electron content (TEC) are also shown. Finally, the stability of GPS instrumental biases as well as that of the satellite and receiver instrumental biases is evaluated, indicating that they are very stable over a period of two and a half months.

  18. Spatial Bias in Field-Estimated Unsaturated Hydraulic Properties

    Energy Technology Data Exchange (ETDEWEB)

    HOLT,ROBERT M.; WILSON,JOHN L.; GLASS JR.,ROBERT J.

    2000-12-21

    Hydraulic property measurements often rely on non-linear inversion models whose errors vary between samples. In non-linear physical measurement systems, bias can be directly quantified and removed using calibration standards. In hydrologic systems, field calibration is often infeasible and bias must be quantified indirectly. We use a Monte Carlo error analysis to indirectly quantify spatial bias in the saturated hydraulic conductivity, K_s, and the exponential relative permeability parameter, α, estimated using a tension infiltrometer. Two types of observation error are considered, along with one inversion-model error resulting from poor contact between the instrument and the medium. Estimates of spatial statistics, including the mean, variance, and variogram-model parameters, show significant bias across a parameter space representative of poorly- to well-sorted silty sand to very coarse sand. When only observation errors are present, spatial statistics for both parameters are best estimated in materials with high hydraulic conductivity, like very coarse sand. When simple contact errors are included, the nature of the bias changes dramatically. Spatial statistics are poorly estimated, even in highly conductive materials. Conditions that permit accurate estimation of the statistics for one of the parameters prevent accurate estimation for the other; accurate regions for the two parameters do not overlap in parameter space. False cross-correlation between estimated parameters is created because estimates of K_s also depend on estimates of α and both parameters are estimated from the same data.

  19. A prescription for galaxy biasing evolution as a nuisance parameter

    Science.gov (United States)

    Clerkin, L.; Kirk, D.; Lahav, O.; Abdalla, F. B.; Gaztañaga, E.

    2015-04-01

    There is currently no consistent approach to modelling galaxy bias evolution in cosmological inference. This lack of a common standard makes the rigorous comparison or combination of probes difficult. We show that the choice of biasing model has a significant impact on cosmological parameter constraints for a survey such as the Dark Energy Survey (DES), considering the two-point correlations of galaxies in five tomographic redshift bins. We find that modelling galaxy bias with a free biasing parameter per redshift bin gives a Figure of Merit (FoM) for dark energy equation of state parameters w0, wa smaller by a factor of 10 than if a constant bias is assumed. An incorrect bias model will also cause a shift in measured values of cosmological parameters. Motivated by these points and focusing on the redshift evolution of linear bias, we propose the use of a generalized galaxy bias which encompasses a range of bias models from theory, observations and simulations, b(z) = c + (b0 - c)/D(z)^α, where parameters c, b0 and α depend on galaxy properties such as halo mass. For a DES-like galaxy survey, we find that this model gives an unbiased estimate of w0, wa with the same number or fewer nuisance parameters and a higher FoM than a simple b(z) model allowed to vary in z-bins. We show how the parameters of this model are correlated with cosmological parameters. We fit a range of bias models to two recent data sets, and conclude that this generalized parametrization is a sensible benchmark expression of galaxy bias on large scales.
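
    A direct transcription of the proposed parametrization as a small helper function; the toy growth factor D(z) = 1/(1+z) used below is an assumption (valid only for a matter-dominated model), and the values of b0, c and α are placeholders.

        import numpy as np

        def generalized_bias(z, b0, c, alpha, growth):
            """b(z) = c + (b0 - c) / D(z)**alpha, with the growth factor D supplied by the caller."""
            return c + (b0 - c) / growth(z) ** alpha

        # crude illustrative growth factor for a matter-dominated toy model, D(z) = 1/(1+z)
        toy_growth = lambda z: 1.0 / (1.0 + np.asarray(z))
        print(generalized_bias(np.array([0.0, 0.5, 1.0]), b0=1.2, c=0.8, alpha=1.0, growth=toy_growth))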

  20. Statistical framework for estimating GNSS bias

    CERN Document Server

    Vierinen, Juha; Rideout, William C; Erickson, Philip J; Norberg, Johannes

    2015-01-01

    We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual frequency GNSS s...

  1. Parameter Estimation Through Ignorance

    CERN Document Server

    Du, Hailiang

    2015-01-01

    Dynamical modelling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A new relatively simple method of parameter estimation for nonlinear systems is presented, based on variations in the accuracy of probability forecasts. It is illustrated on the Logistic Map, the Henon Map and the 12-D Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The new method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This new approach is easier to implement in practice than alter...

  2. Revisiting Cosmological parameter estimation

    CERN Document Server

    Prasad, Jayanti

    2014-01-01

    Constraining theoretical models by estimating their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov-Chain Monte Carlo (MCMC) sampling have been playing the leading role in parameter estimation from CMB data. In one of the recent studies \cite{2012PhRvD..85l3008P} we have shown that particle swarm optimization (PSO), which is a population based search procedure, can also be effectively used to find the cosmological parameters which best fit the WMAP seven year data. In the present work we show that PSO not only can find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO sampled points which is used to process the points sampled by Markov Chains, and get consistent res...

  3. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  4. Elimination of Estimation biases in the Software Development

    Directory of Open Access Journals (Sweden)

    Thamarai . I.

    2015-04-01

    Full Text Available Software effort estimates are usually too low, and prediction is a very difficult task because software is intangible in nature. The estimation is also based on parameters that are usually partial in nature. It is an important management activity. Despite much research in this area, the accuracy of effort estimation is very low. This results in poor project planning and the failure of many software projects. One of the reasons for this poor estimation is that the estimates given by software developers are affected by information which has no relevance to the calculation of effort. To address this, we have proposed a new methodology in which we analyze the relationship between the estimation bias and various features of developers, such as their role in the company, thinking style, experience, education, software development skills, etc. It is found that the estimation bias increases with higher levels of interdependence.

  5. Simultaneous quaternion estimation (QUEST) and bias determination

    Science.gov (United States)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  6. Bias Correction for Alternating Iterative Maximum Likelihood Estimators

    Institute of Scientific and Technical Information of China (English)

    Gang YU; Wei GAO; Ningzhong SHI

    2013-01-01

    In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and the reported simulation results illustrate the performance of the bias correction for the AIMLE.

  7. Estimating Ancestral Population Parameters

    OpenAIRE

    Wakeley, J.; Hey, J.

    1997-01-01

    The expected numbers of different categories of polymorphic sites are derived for two related models of population history: the isolation model, in which an ancestral population splits into two descendents, and the size-change model, in which a single population undergoes an instantaneous change in size. For the isolation model, the observed numbers of shared, fixed, and exclusive polymorphic sites are used to estimate the relative sizes of the three populations, ancestral plus two descendent...

  8. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

  9. Noise Induces Biased Estimation of the Correction Gain.

    Directory of Open Access Journals (Sweden)

    Jooeun Ahn

    Full Text Available The detection of an error in the motor output and the correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction, the correction gain, learning rate, or feedback gain, has been frequently estimated via least-squares regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes this limitation. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one of several examples of how the dynamics of noise can introduce significant distortions in data analysis.
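
    A compact simulation of the effect described above, under assumed noise levels: when trial-to-trial errors generated by a correction process e[t+1] = (1 - g) e[t] + execution noise are observed with additional measurement noise, the naive lag-1 regression attenuates the estimated (1 - g), so the correction gain is overestimated, and the bias persists however long the series grows. The gain, noise levels and series length are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        g_true = 0.3                   # true correction gain (illustrative)
        T = 100_000                    # long series: the bias does not average out
        exec_sd, meas_sd = 1.0, 1.0    # execution and measurement noise (assumed equal here)

        e = np.zeros(T)
        for t in range(T - 1):
            e[t + 1] = (1 - g_true) * e[t] + exec_sd * rng.normal()
        y = e + meas_sd * rng.normal(size=T)    # observed errors include measurement noise

        slope = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])   # naive lag-1 (Yule-Walker-type) fit
        print("true gain:", g_true, " naive estimate:", 1 - slope)  # systematically too large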

  10. Noise Induces Biased Estimation of the Correction Gain

    Science.gov (United States)

    Ahn, Jooeun; Zhang, Zhaoran; Sternad, Dagmar

    2016-01-01

    The detection of an error in the motor output and the correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction, the correction gain, learning rate, or feedback gain, has been frequently estimated via least-squares regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes this limitation. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one of several examples of how the dynamics of noise can introduce significant distortions in data analysis. PMID:27463809

  11. PARAMETER ESTIMATION OF EXPONENTIAL DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    XU Haiyan; FEI Heliang

    2005-01-01

    Because of the importance of grouped data, many scholars have been devoted to the study of this kind of data. But few documents have been concerned with the threshold parameter. In this paper, we assume that the threshold parameter is smaller than the first observation point. Then, on the basis of the two-parameter exponential distribution, the maximum likelihood estimators of both parameters are given, sufficient and necessary conditions for their existence and uniqueness are established, and the asymptotic properties of the estimators are also presented, from which approximate confidence intervals of the parameters are derived. At the same time, the estimation of the parameters is generalized, and some methods are introduced to obtain explicit expressions of these generalized estimators. A special case where the first failure time of the units is observed is also considered.
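
    For orientation, the familiar complete-data (ungrouped) special case that the grouped-data results above extend: for the two-parameter exponential density

        f(x;\mu,\theta) = \frac{1}{\theta}\, e^{-(x-\mu)/\theta}, \qquad x \ge \mu,

    the maximum likelihood estimators from a sample x_1, ..., x_n are

        \hat{\mu} = x_{(1)}, \qquad \hat{\theta} = \bar{x} - x_{(1)},

    i.e. the smallest observation and the mean excess over it. The paper's setting differs in that only grouped observations are available and the threshold is assumed smaller than the first observation point.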

  12. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  13. A Prescription for Galaxy Biasing Evolution as a Nuisance Parameter

    CERN Document Server

    Clerkin, L; Lahav, O; Abdalla, F B; Gaztanaga, E

    2014-01-01

    There is currently no consistent approach to modelling galaxy bias evolution in cosmological inference. This lack of a common standard makes the rigorous comparison or combination of probes difficult. We show that the choice of biasing model has a significant impact on cosmological parameter constraints for a survey such as the Dark Energy Survey (DES), considering the 2-point correlations of galaxies in five tomographic redshift bins. We find that modelling galaxy bias with a free biasing parameter per redshift bin gives a Figure of Merit (FoM) for Dark Energy equation of state parameters $w_0, w_a$ smaller by a factor of 10 than if a constant bias is assumed. An incorrect bias model will also cause a shift in measured values of cosmological parameters. Motivated by these points and focusing on the redshift evolution of linear bias, we propose the use of a generalised galaxy bias which encompasses a range of bias models from theory, observations and simulations, $b(z) = c + (b_0 - c)/D(z)^\alpha$, where $c, ...

  14. Decentralized target geolocation for unmanned aerial vehicle with sensor bias estimation

    Science.gov (United States)

    Baek, Kwangyul; Bang, Hyochoong

    2012-11-01

    This paper deals with a decentralized approach to target geolocation and sensor bias estimation for multiple unmanned aerial vehicles with bearing angle sensors. The bias of the bearing sensor is a crucial error source that degrades the accuracy of target geolocation. The decentralized estimation approach utilizes information filtering and dual estimation. The local estimator running in each vehicle estimates the target motion and its sensor bias simultaneously in a dual estimation framework. The dual estimation consists of two parallel filters: a state filter for the target motion and a parameter filter for the sensor bias. The information increments of the target motion in local vehicles are shared with other vehicles in an information filtering framework, which is better suited to multiple-sensor estimation than the conventional Kalman filter. Performance of the proposed decentralized geolocation algorithm with bias estimation is compared with centralized approaches by numerical simulation.

  15. Parameter estimation in food science.

    Science.gov (United States)

    Dolan, Kirk D; Mishra, Dharmendra K

    2013-01-01

    Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem-computing y(t) given known parameters-has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem-estimation of parameters given measured y(t)-is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.

  16. A Polynomial Prediction Filter Method for Estimating Multisensor Dynamically Varying Biases

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The estimation of sensor measurement biases in a multisensor system is vital for sensor data fusion. A solution is provided for the estimation of dynamically varying multiple sensor biases without any knowledge of the dynamic bias model parameters. It is shown that the sensor bias pseudomeasurement can be dynamically obtained via a parity vector. This is accomplished by multiplying the sensor uncalibrated measurement equations by a projection matrix so that the measured variable is eliminated from the equations. Once the state equations of the dynamically varying sensor biases are modeled by a polynomial prediction filter, the dynamically varying multisensor biases can be obtained by a Kalman filter. Simulation results validate that the proposed method can estimate both the constant and the dynamic biases of multiple sensors and outperforms the methods reported in the literature.

  17. Parameters estimation in quantum optics

    CERN Document Server

    D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.

    2000-01-01

    We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.

  18. The empirical biases and mean square errors of estimators for the seasonal model parameter

    Institute of Scientific and Technical Information of China (English)

    金兰; 柳京爱; 马福顺

    2000-01-01

    A Monte Carlo simulation study yields the empirical biases and mean square errors of the ordinary least squares (minimum variance) estimator, the simple symmetric estimator and the weighted symmetric estimator of the seasonal model parameter, together with the ratios of the minimum variance estimator to the simple symmetric and weighted symmetric estimators. The results show that for nonstationary series the minimum variance estimator is more efficient than the other two estimators, and that this efficiency advantage is smaller for smaller seasonal values; when the absolute value of the parameter of a stationary series is close to 1, however, the weighted symmetric and simple symmetric estimators are more efficient than the minimum variance estimator, and their advantage is larger for larger seasonal values.

  19. Toward unbiased estimations of the statefinder parameters

    CERN Document Server

    Aviles, Alejandro; Luongo, Orlando

    2016-01-01

    Using simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large, making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists of introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cos...

  20. Variance and bias confidence criteria for ERA modal parameter identification. [Eigensystem Realization Algorithm

    Science.gov (United States)

    Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan

    1988-01-01

    For the ERA system identification algorithm, perturbation methods are used to develop expressions for variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in his choice of which data points to use and how much data to use in order to obtain the best results, performing the trade-off between the bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.

  1. Sensitivity of hydrologic simulations to bias corrected driving parameters

    Science.gov (United States)

    Papadimitriou, Lamprini; Grillakis, Manolis; Koutroulis, Aristeidis; Tsanis, Ioannis

    2016-04-01

    Climate model outputs feature systematic errors and biases that render them unsuitable for direct use by the impact models. To deal with this issue many bias correction techniques have been developed to adjust the modelled variables against observations. For the most common applications adjustment concerns only precipitation and temperature whilst for others all the driving parameters (including radiation, wind speed, humidity, air pressure) are bias adjusted. Bias adjusting only part of the variables required as biophysical model input could affect the physical consistency among input variables and is poorly studied. It is important to determine and quantify the effect that bias adjusting each climate variable has on the impact model's simulation and identify parameters that could be treated as raw outputs for specific model applications. In this work, the sensitivity of climate simulations to bias adjusted driving parameters is tested by conducting a series of model runs, for which the impact model JULES is forced with: i) not bias corrected input variables, ii) all bias corrected input variables, iii-viii) all input variables bias corrected except for: iii) precipitation, iv) temperature, v) radiation, vi) specific humidity, vii) air pressure and viii) wind speed. This set of runs is conducted for three climate models of different equilibrium climate sensitivity: GFDL-ESM2M, MIROC-ESM-CHEM and IPSL-CM5A-LR. The baseline for the comparison of the experimental runs is a JULES run forced with the WFDEI dataset, the dataset that was used as the observational dataset for adjusting biases. The comparative analysis is performed using the time period 1981-2010 and focusing on output variables of the hydrological cycle (runoff, evapotranspiration, soil moisture).

  2. Estimating and Correcting Bias in Stereo Visual Odometry

    Science.gov (United States)

    Farboud-Sheshdeh, Sara

    Stereo visual odometry (VO) is a common technique for estimating a camera's motion; features are tracked across frames and the pose change is subsequently inferred. This method can play a particularly important role in environments where the global positioning system (GPS) is not available (e.g., Mars rovers). Recently, some authors have noticed a bias in VO position estimates that grows with distance travelled; this can cause the resulting estimate to become highly inaccurate. In this thesis, two effects are identified at play in stereo VO bias: first, the inherent bias in the maximum-likelihood estimation framework, and second, the disparity threshold used to discard far-away and erroneous observations. To estimate the bias, the sigma-point method (with modification) combined with the concept of bootstrap bias estimation is proposed. This novel method achieves similar accuracy to Monte Carlo experiments, but at a fraction of the computational cost. The approach is validated through simulations.

  3. Photo-z Estimation: An Example of Nonparametric Conditional Density Estimation under Selection Bias

    CERN Document Server

    Izbicki, Rafael; Freeman, Peter E

    2016-01-01

    Redshift is a key quantity for inferring cosmological model parameters. In photometric redshift estimation, cosmologists use the coarse data collected from the vast majority of galaxies to predict the redshift of individual galaxies. To properly quantify the uncertainty in the predictions, however, one needs to go beyond standard regression and instead estimate the full conditional density f(z|x) of a galaxy's redshift z given its photometric covariates x. The problem is further complicated by selection bias: usually only the rarest and brightest galaxies have known redshifts, and these galaxies have characteristics and measured covariates that do not necessarily match those of more numerous and dimmer galaxies of unknown redshift. Unfortunately, there is not much research on how to best estimate complex multivariate densities in such settings. Here we describe a general framework for properly constructing and assessing nonparametric conditional density estimators under selection bias, and for combining two o...

  4. Bayesian parameter estimation for effective field theories

    CERN Document Server

    Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A

    2015-01-01

    We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  5. Bayesian parameter estimation for effective field theories

    Science.gov (United States)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  6. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including the basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  7. Parameter Estimation Using VLA Data

    Science.gov (United States)

    Venter, Willem C.

    The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe the planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters

  8. Recursive bias estimation and L2 boosting

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nicolas W [Los Alamos National Laboratory; Cornillon, Pierre - Andre [INRA, FRANCE; Matzner - Lober, Eric [RENNE, FRANCE

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally we illustrate the practical finite sample performance of the iterative smoother via a simulation study.
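
    A minimal numerical sketch of the iterative bias-correction idea (L2 boosting on the residuals of a linear smoother); the smoother here is a simple running mean standing in for S, and the window width and number of iterations are illustrative rather than chosen by any stopping rule.

        import numpy as np

        rng = np.random.default_rng(6)
        n, k, n_iter = 200, 21, 5                  # sample size, window, boosting iterations (illustrative)
        x = np.linspace(0, 1, n)
        y = np.sin(4 * np.pi * x) + 0.3 * rng.normal(size=n)

        def running_mean(values, k):
            """Simple symmetric running-mean smoother (a stand-in for S)."""
            pad = k // 2
            padded = np.pad(values, pad, mode="edge")
            kernel = np.ones(k) / k
            return np.convolve(padded, kernel, mode="valid")

        fit = running_mean(y, k)
        for _ in range(n_iter):                    # L2 boosting: smooth the residuals, add them back
            fit = fit + running_mean(y - fit, k)

        print("residual sum of squares:", np.sum((y - fit) ** 2))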

  9. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance matrix estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give a modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional HCCME. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
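
    For context, a sketch of the basic, uncorrected White (HC0) sandwich estimator that the bias-adjusted variants above improve upon; the data are simulated with heteroscedastic errors and all sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100
        x = rng.uniform(1, 5, size=n)
        X = np.column_stack([np.ones(n), x])
        y = 1.0 + 2.0 * x + rng.normal(scale=x)      # error variance grows with x (heteroscedastic)

        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta

        # HC0: sandwich estimator with squared residuals on the "meat" diagonal
        meat = X.T @ (resid[:, None] ** 2 * X)
        hc0 = XtX_inv @ meat @ XtX_inv
        print("beta:", beta)
        print("HC0 standard errors:", np.sqrt(np.diag(hc0)))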

  10. Uniform bias study and Bahadur representation for local polynomial estimators of the conditional quantile function

    OpenAIRE

    Guerre, Emmanuel; Sabbah, Camille

    2011-01-01

    This paper investigates the bias and the weak Bahadur representation of a local polynomial estimator of the conditional quantile function and its derivatives. The bias and Bahadur remainder term are studied uniformly with respect to the quantile level, the covariates and the smoothing parameter. The order of the local polynomial estimator can be higher than the differentiability order of the conditional quantile function. Applications of the results deal with global optimal consistency rates ...

  11. Bias-corrected estimation of stable tail dependence function

    DEFF Research Database (Denmark)

    Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri;

    2016-01-01

    We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where a...

  12. Estimation of Synchronous Machine Parameters

    Directory of Open Access Journals (Sweden)

    Oddvar Hallingstad

    1980-01-01

    Full Text Available The present paper gives a short description of an interactive estimation program based on the maximum likelihood (ML) method. The program may also perform identifiability analysis by calculating sensitivity functions and the Hessian matrix. For the short circuit test the ML method is able to estimate the q-axis subtransient reactance x''q, which is not possible by means of the conventional graphical method (another set of measurements has to be used). By means of the synchronization and close test, the ML program can estimate the inertia constant (M), the d-axis transient open circuit time constant (T'do), the d-axis subtransient o.c.t.c. (T''do) and the q-axis subtransient o.c.t.c. (T''qo). In particular, T''qo is difficult to estimate by any of the methods at present in use. Parameter identifiability is thoroughly examined both analytically and by numerical methods. Measurements from a small laboratory machine are used.

  13. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2011-01-01

    Parameter Estimation and Inverse Problems, 2e provides geoscience students and professionals with answers to common questions like how one can derive a physical model from a finite set of observations containing errors, and how one may determine the quality of such a model. This book takes on these fundamental and challenging problems, introducing students and professionals to the broad range of approaches that lie in the realm of inverse theory. The authors present both the underlying theory and practical algorithms for solving inverse problems. The authors' treatment is approp

  14. Estimation and adjustment of self-selection bias in volunteer panel web surveys

    Science.gov (United States)

    Niu, Chengying

    2016-06-01

    By using a propensity score matching method with a random sample, we matched simple random sample units and volunteer panel Web survey sample units based on equal or similar propensity scores. Unbiased estimators of the population parameters are constructed by using the matched simple random sample, and the self-selection bias is estimated. We propose propensity score weighting and matched-sample post-stratification weighting methods to estimate the population parameters, and the self-selection bias in volunteer panel Web surveys is adjusted.
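
    A minimal sketch of the propensity-score idea follows: a logistic model of membership in the volunteer web panel versus a reference probability sample produces scores used to reweight the panel toward the reference population. The covariates, the simulated outcome, the use of sklearn's LogisticRegression, and the (1 - p)/p weight form are assumptions for illustration, not the paper's exact estimators.

```python
# Hedged sketch of propensity-score adjustment for a volunteer web panel.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Reference (probability) sample and volunteer web panel, with covariates
# age and years of education; the outcome y is observed only in the web panel.
n_ref, n_web = 1000, 800
X_ref = np.column_stack([rng.normal(45, 15, n_ref), rng.integers(8, 20, n_ref)])
X_web = np.column_stack([rng.normal(35, 10, n_web), rng.integers(10, 20, n_web)])
y_web = 50 + 0.5 * X_web[:, 0] + rng.standard_normal(n_web)   # outcome depends on age

# Fit the propensity of being in the web panel given the covariates.
X_all = np.vstack([X_ref, X_web])
in_web = np.r_[np.zeros(n_ref), np.ones(n_web)]
ps = LogisticRegression(max_iter=1000).fit(X_all, in_web).predict_proba(X_web)[:, 1]

# Inverse-propensity-style weights pull the web panel toward the reference sample.
w = (1.0 - ps) / ps
print(f"naive mean: {y_web.mean():.2f}  propensity-weighted mean: {np.average(y_web, weights=w):.2f}")
```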

  15. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    Science.gov (United States)

    Hirsch, Robert M.

    2014-01-01

    It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, the LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. This analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
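
    One place where such log-space regression flux models acquire bias is the retransformation back to concentration units: exponentiating the fitted log values understates the mean whenever the residual scatter is large. The sketch below contrasts the naive back-transformation with Duan's smearing correction on synthetic concentration-discharge data; the data, coefficients, and noise level are illustrative assumptions, not the LOADEST or WRTDS procedures themselves.

```python
# Minimal sketch of retransformation bias in a log-log concentration-discharge fit.
import numpy as np

rng = np.random.default_rng(3)
n = 500
log_q = rng.normal(2.0, 0.8, n)                     # log discharge
log_c = 0.2 + 0.6 * log_q + rng.normal(0, 0.7, n)   # log concentration with large scatter
c = np.exp(log_c)

# Fit log(C) = b0 + b1 * log(Q) by ordinary least squares.
A = np.column_stack([np.ones(n), log_q])
b, *_ = np.linalg.lstsq(A, log_c, rcond=None)
resid = log_c - A @ b

naive = np.exp(A @ b)                    # biased low: ignores the residual variance
smear = naive * np.mean(np.exp(resid))   # Duan's nonparametric smearing correction

print("true mean C:", c.mean())
print("naive retransformation:", naive.mean())
print("smearing-corrected:", smear.mean())
```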

  16. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam

  17. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    Full Text Available This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS. Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  18. Attitude and gyro bias estimation for a VTOL UAV

    OpenAIRE

    Metni, N.; Pflimlin, J. M.; Hamel, T.; Souères, P.

    2006-01-01

    In this paper, a nonlinear complementary filter (x-estimator) is presented to estimate the attitude of a vertical take-off and landing unmanned aerial vehicle (VTOL UAV). The measurements are taken from a low-cost IMU (inertial measurement unit) which consists of 3-axis accelerometers and 3-axis gyroscopes. The gyro biases are estimated online. A second nonlinear complementary filter (z-estimator), which combines 3-axis gyroscope readings with 3-axis magnetometer measurements, is also designed. ...
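
    The core mechanism, online gyro bias estimation inside a complementary filter, can be shown in a single-axis sketch: the gyro is integrated for the fast dynamics while the accelerometer-derived angle drives a proportional correction and a slow integral term that absorbs the gyro bias. The single-axis simplification, the gains, and the simulated signals are assumptions for illustration; the paper's x- and z-estimators are full 3-axis nonlinear filters.

```python
# Hedged sketch of a single-axis complementary filter with gyro bias estimation.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 5000
t = np.arange(n) * dt
true_angle = 0.5 * np.sin(0.5 * t)                         # rad
true_rate = 0.5 * 0.5 * np.cos(0.5 * t)                    # rad/s
gyro = true_rate + 0.05 + 0.02 * rng.standard_normal(n)    # rate + constant bias + noise
acc_angle = true_angle + 0.05 * rng.standard_normal(n)     # noisy but unbiased angle

k_angle, k_bias = 2.0, 0.5            # proportional and integral gains (tuning assumptions)
angle_hat, bias_hat = 0.0, 0.0
for i in range(n):
    err = acc_angle[i] - angle_hat                     # innovation from the accelerometer
    angle_hat += dt * (gyro[i] - bias_hat + k_angle * err)
    bias_hat -= dt * k_bias * err                      # slow integral action estimates the bias

print(f"estimated gyro bias: {bias_hat:.3f} rad/s (true 0.050)")
```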

  19. SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION

    Science.gov (United States)

    Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...

  20. Bayesian estimation of one-parameter qubit gates

    OpenAIRE

    Teklu, Berihu; Olivares, Stefano; Paris, Matteo G. A.

    2008-01-01

    We address estimation of one-parameter unitary gates for qubit systems and seek optimal probes and measurements. Single- and two-qubit probes are analyzed in detail, focusing on precision and stability of the estimation procedure. Bayesian inference is employed and compared with the ultimate quantum limits to precision, taking into account the biased nature of the Bayes estimator in the non-asymptotic regime. Besides, through the evaluation of the asymptotic a posteriori distribution for the ...

  1. The bias in GRACE estimates of continental water storage variations

    Directory of Open Access Journals (Sweden)

    R. Klees

    2006-11-01

    Full Text Available The estimation of terrestrial water storage variations at river basin scale is among the best documented applications of the GRACE (Gravity Recovery and Climate Experiment) satellite gravity mission. In particular, it is expected that GRACE closes the water balance at river basin scale and allows the verification, improvement and modeling of the related hydrological processes by combining GRACE amplitude estimates with hydrological models' output and in-situ data.

    When computing monthly mean storage variations from GRACE gravity field models, spatial filtering is mandatory to reduce GRACE errors, but at the same time yields biased amplitude estimates.

    The objective of this paper is three-fold. Firstly, we want to compute and analyze amplitude and time behaviour of the bias in GRACE estimates of monthly mean water storage variations for several target areas in Southern Africa. In particular, we want to know the relation between bias and the choice of the filter correlation length, the size of the target area, and the amplitude of mass variations inside and outside the target area. Secondly, we want to know to what extent the bias can be corrected for using a priori information about mass variations. Thirdly, we want to quantify errors in the estimated bias due to uncertainties in the a priori information about mass variations that are used to compute the bias.

    The target areas are located in Southern Africa around the Zambezi river basin. The latest release of monthly GRACE gravity field models has been used for the period from January 2003 until March 2006. An accurate and properly calibrated regional hydrological model has been developed for this area and its surroundings and provides the necessary a priori information about mass variations inside and outside the target areas.

    The main conclusion of the study is that spatial smoothing significantly biases GRACE estimates of the amplitude of annual and monthly

  2. Weak Lensing Peak Finding: Estimators, Filters, and Biases

    CERN Document Server

    Schmidt, Fabian

    2010-01-01

    Large catalogs of shear-selected peaks have recently become a reality. In order to properly interpret the abundance and properties of these peaks, it is necessary to take into account the effects of the clustering of source galaxies, among themselves and with the lens. In addition, the preferred selection of lensed galaxies in a flux- and size-limited sample leads to fluctuations in the apparent source density which correlate with the lensing field (lensing bias). In this paper, we investigate these issues for two different choices of shear estimators which are commonly in use today: globally-normalized and locally-normalized estimators. While in principle equivalent, in practice these estimators respond differently to systematic effects such as lensing bias and cluster member dilution. Furthermore, we find that which estimator is statistically superior depends on the specific shape of the filter employed for peak finding; suboptimal choices of the estimator+filter combination can result in a suppression of t...

  3. Quantifying and controlling biases in dark matter halo concentration estimates

    CERN Document Server

    Poveda-Ruiz, C N; Muñoz-Cuartas, J C

    2016-01-01

    We use bootstrapping to estimate the bias of concentration estimates on N-body dark matter halos as a function of particle number. We find that algorithms based on the maximum radial velocity and radial particle binning tend to overestimate the concentration by 15%-20% for halos sampled with 200 particles and by 7% - 10% for halos sampled with 500 particles. To control this bias at low particle numbers we propose a new algorithm that estimates halo concentrations based on the integrated mass profile. The method uses the full particle information without any binning, making it reliable in cases when low numerical resolution becomes a limitation for other methods. This method reduces the bias to less than 3% for halos sampled with 200-500 particles. The velocity and density methods have to use halos with at least 4000 particles in order to keep the biases down to the same low level. We also show that the mass-concentration relationship could be shallower than expected once the biases of the different concentrat...
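
    The bootstrap bias estimate used in this kind of study has a simple generic form: re-apply the estimator to resampled data, and take the mean shift between bootstrap replicates and the original estimate as the bias. The sketch below demonstrates it on a deliberately biased plug-in estimator (the maximum-likelihood variance); in the halo setting, `estimator` would instead be a concentration fit on resampled particles, which is an assumption of this illustration.

```python
# Minimal sketch of bootstrap bias estimation and correction for a plug-in estimator.
import numpy as np

rng = np.random.default_rng(5)

def estimator(sample):
    return sample.var()                  # MLE variance: divides by n, biased low

data = rng.standard_normal(30) * 2.0     # true variance = 4
theta_hat = estimator(data)

B = 2000
boot = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
bias_est = boot.mean() - theta_hat       # bootstrap estimate of the bias
theta_corrected = theta_hat - bias_est   # bias-corrected estimate

print(f"raw: {theta_hat:.2f}  estimated bias: {bias_est:.2f}  corrected: {theta_corrected:.2f}")
```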

  4. A Method for Estimating BeiDou Inter-frequency Satellite Clock Bias

    Directory of Open Access Journals (Sweden)

    LI Haojun

    2016-02-01

    Full Text Available A new method for estimating the BeiDou inter-frequency satellite clock bias is proposed, considering the shortcomings of the current methods. The constant and variable parts of the inter-frequency satellite clock bias are considered in the new method. Data from 10 observation stations are processed to validate the new method. The characteristics of the BeiDou inter-frequency satellite clock bias are also analyzed using the computed results. The results indicate that the BeiDou inter-frequency satellite clock bias is stable in the short term. The estimated BeiDou inter-frequency satellite clock bias results are modeled. The model results show that the 10 model parameters for each satellite can express the BeiDou inter-frequency satellite clock bias well, with accuracy at the cm level. When the model parameters of the first day are used to compute the BeiDou inter-frequency satellite clock bias of the second day, the accuracy also reaches the cm level. Based on this stability and modeling, a strategy for the BeiDou satellite clock service is presented to serve as a reference for BeiDou.

  5. Selecting class weights to minimize classification bias in acreage estimation

    Science.gov (United States)

    Belcher, W. M.; Minter, T. C.

    1976-01-01

    Preliminary results of experiments being performed to select optimal class weights for use with the maximum likelihood classifier in acreage estimation using remote sensor imagery are presented. These weights will be optimal in the sense that the bias will be minimized in the proportion estimate obtained from the classification results by sample counting. The procedure was tested using Landsat MSS data from an 8 by 9.6 km area of ground truth in Finney County, Kansas.

  6. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-10-01

    Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on

  7. Parameter estimation and reliable fault detection of electric motors

    Institute of Scientific and Technical Information of China (English)

    Dusan PROGOVAC; Le Yi WANG; George YIN

    2014-01-01

    Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
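
    The recursive structure with a forgetting factor described above is essentially recursive least squares (RLS): a forgetting factor below one discounts old data so the estimates can track a sudden parameter change such as a winding fault. The first-order motor current model, the forgetting factor value, and the simulated fault below are illustrative assumptions, not the paper's bias-corrected algorithms.

```python
# Hedged sketch of recursive least squares with a forgetting factor tracking a parameter jump.
import numpy as np

rng = np.random.default_rng(6)
dt, n = 1e-3, 4000
R, L = 1.0, 0.05                      # true resistance and inductance; R jumps mid-run
i_k = 0.0
theta = np.zeros(2)                   # estimates of [a, b] in i[k+1] = a*i[k] + b*u[k]
P = np.eye(2) * 1e3                   # covariance of the parameter estimate
lam = 0.995                           # forgetting factor (< 1 lets the filter track sudden faults)

for k in range(n):
    if k == 2000:
        R = 1.5                       # sudden resistance change standing in for a fault
    u = 5.0 + 2.0 * np.sin(2 * np.pi * 5 * k * dt)   # persistently exciting voltage
    a_true, b_true = 1.0 - R * dt / L, dt / L
    i_next = a_true * i_k + b_true * u + 1e-3 * rng.standard_normal()

    phi = np.array([i_k, u])          # regressor
    err = i_next - phi @ theta        # one-step prediction error
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * err
    P = (P - np.outer(gain, phi) @ P) / lam
    i_k = i_next

a_hat, b_hat = theta
print(f"estimated R = {(1.0 - a_hat) / b_hat:.3f}, L = {dt / b_hat:.4f}  (true R after fault = 1.5)")
```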

  8. Estimation of Synchronous Machine Parameters

    OpenAIRE

    Oddvar Hallingstad

    1980-01-01

    The present paper gives a short description of an interactive estimation program based on the maximum likelihood (ML) method. The program may also perform identifiability analysis by calculating sensitivity functions and the Hessian matrix. For the short circuit test the ML method is able to estimate the q-axis subtransient reactance x''q, which is not possible by means of the conventional graphical method (another set of measurements has to be used). By means of the synchronization and close...

  9. Self and other obedience estimates: biases and moderators.

    Science.gov (United States)

    Geher, Glenn; Bauman, Kathleen P; Hubbard, Sara Elizabeth Kay; Legare, Jared Richard

    2002-12-01

    The authors conducted 2 studies regarding behavior perceptions of "self" and "typical other" in hypothetical replications of S. Milgram's (1963) obedience experiment. In Study 1, participants' knowledge about Milgram's actual results was manipulated. Regardless of knowledge, results demonstrated several specific social and perceptual biases (e.g., the self-other bias; J. D. Brown, 1986), in addition to several general, fundamental lessons of social psychology (e.g., the perseverance of lay dispositionism). Study 2 was designed to explore the possibility that participants' own academic interests and worldview could influence the biases explicated in Study 1. The authors assessed perceptions of both criminal-justice majors and non-criminal-justice majors regarding their perceptions of behaviors of self and typical other. The criminal-justice students' self-other obedience estimates were significantly higher than those of the non-criminal-justice students. Further, the self-other discrepancy for criminal-justice students was significantly smaller than the difference reported by non-criminal-justice majors, suggesting that the criminal-justice students demonstrated the self-other bias significantly less than non-criminal-justice students in this context. The findings indicate that specific social-perceptual biases may have been moderated by career interest and worldview. PMID:12450343

  10. Joint MAP bias estimation and data association: simulations

    Science.gov (United States)

    Danford, Scott; Kragel, Bret; Poore, Aubrey

    2007-09-01

    The problem of joint maximum a posteriori (MAP) bias estimation and data association belongs to a class of nonconvex mixed integer nonlinear programming problems. These problems are difficult to solve due to both the combinatorial nature of the problem and the nonconvexity of the objective function or constraints. Algorithms for this class of problems have been developed in a companion paper of the authors. This paper presents simulations that compare the "all-pairs" heuristic, the k-best heuristic, and a partial A*-based branch and bound algorithm. The combination of the latter two algorithms is an excellent candidate for use in a real-time system. For an optimal algorithm that also computes the k-best solutions of the joint MAP bias estimation and data association problem, we investigate a branch and bound framework that employs either a depth-first algorithm or an A*-search procedure. In addition, we demonstrate the improvements due to a new gating procedure.

  11. Earth Rotation Parameter Estimation by GPS Observations

    Institute of Scientific and Technical Information of China (English)

    YAO Yibin

    2006-01-01

    The methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. There are two different ways to estimate ERP: one is the parameter transformation method, and the other is the direct adjustment method with restrictive conditions. By comparing results estimated with an independently developed program against IERS results, residual systematic errors can be identified in ERP estimated from GPS observations.

  12. Estimating nonparticipation bias in a longitudinal study of bereavement.

    Science.gov (United States)

    Boyle, F M; Najman, J M; Vance, J C; Thearle, M J

    1996-10-01

    Nonparticipants in epidemiological studies may differ in important respects from participants but the magnitude of this potential bias is rarely quantified. This study estimates the effect of nonparticipation on estimates of mental health problems following stillbirth, neonatal death or sudden infant death syndrome. Of 805 families approached, 512 (64 per cent) were recruited, of whom 77 per cent of mothers and 71 per cent of fathers completed four study interviews. Younger, unmarried, unemployed parents without private health insurance were less often recruited, and even if recruited, were less likely to complete the interview. By evaluating several possible scenarios, we estimated that had mothers lost to follow-up remained in the study, anxiety rates would have varied by no more than +/-4 per cent. Relative risks associated with bereaved-control comparisons would have differed little from the observed estimate of 2.33. Estimating the effects of initial nonresponse is more difficult but the adoption of a worst-case scenario produced a relative risk of 3.47. Despite systematic nonparticipation suggestive of social disadvantage, attrition-related bias may have had only a modest effect on anxiety and depression rate estimates. However, this may not be the case when sample loss is high, when associations between attrition and outcome are strong, and when attrition-related behaviour differs across comparison groups. PMID:8987217

  13. Parameter Estimation in Multivariate Gamma Distribution

    Directory of Open Access Journals (Sweden)

    V S Vaidyanathan

    2015-05-01

    Full Text Available Multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation in this distribution is challenging, as it involves many parameters to be estimated simultaneously. In this paper, the form of multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of marginal and conditional densities. A new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares. The proposed methodology is easy to implement and is free from calculus. It optimizes the objective function by searching over a wide range of values and determines the estimates of the parameters. The consistency of the estimates is demonstrated in terms of mean, standard deviation and mean square error through simulation studies for different choices of parameters.

  14. Reduced bias and threshold choice in the extremal index estimation through resampling techniques

    Science.gov (United States)

    Gomes, Dora Prata; Neves, Manuela

    2013-10-01

    In Extreme Value Analysis there are a few parameters of particular interest, among which we refer to the extremal index, a measure of the clustering of extreme events. It is of great interest for dependent samples, the common situation in many practical applications. Most semi-parametric estimators of this parameter show the same behavior: nice asymptotic properties but a high variance for small values of k, the number of upper order statistics used in the estimation, and a high bias for large values of k. The mean square error, a measure that encompasses bias and variance, usually shows a very sharp plot, requiring an adequate choice of k. Using classical extremal index estimators considered in the literature, the emphasis here is on deriving reduced-bias estimators with more stable sample paths, obtained through resampling techniques. An adaptive algorithm for choosing the level k and obtaining a reliable estimate of the extremal index is used. This algorithm has shown good results, but some improvements are still required. A simulation study illustrates the properties of the estimators and the performance of the proposed adaptive algorithm.

  15. Estimation of physical parameters in induction motors

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.

  16. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    We address the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...

  17. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
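
    The draw-and-adjust loop described in the Methods section can be sketched generically: bias parameters are drawn from assigned distributions, a bias factor is computed, the conventional estimate is adjusted, and the resulting frequency distribution yields a median and simulation interval. The sketch below uses a simple external-adjustment formula for a single unmeasured binary confounder, and both the formula choice and the parameter distributions are illustrative assumptions, not the bias models assigned in the Agricultural Health Study analysis.

```python
# Hedged sketch of a probabilistic (Monte Carlo) bias analysis.
import numpy as np

rng = np.random.default_rng(7)
rr_conventional = 2.6                      # conventional estimate (e.g., a hazard ratio)
n_iter = 20000

# Bias parameters for an unmeasured binary confounder: its prevalence in the
# exposed and unexposed groups, and its association with the outcome.
p1 = rng.beta(4, 6, n_iter)                       # prevalence among exposed
p0 = rng.beta(2, 8, n_iter)                       # prevalence among unexposed
rr_cd = rng.lognormal(np.log(2.0), 0.3, n_iter)   # confounder-disease risk ratio

# Simple external-adjustment (Bross-type) bias factor, then adjust the estimate.
bias_factor = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
rr_adjusted = rr_conventional / bias_factor

lo, med, hi = np.percentile(rr_adjusted, [2.5, 50, 97.5])
print(f"median {med:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```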

  18. Estimation for large non-centrality parameters

    Science.gov (United States)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with an almost scalar matrix, using the study of estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use, for the statistical inference, inducing pivot variables (see Ferreira et al. 2013) and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.

  19. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
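
    The flavour of such a comparison can be reproduced in a few lines: simulate many small samples, estimate the scale by two of the methods listed (the moment/ML estimator and a probability-plot least-squares fit), and compare mean squared errors. Only the one-parameter scale case is shown, and the median-rank plotting positions are an assumption for illustration; the paper also treats the location parameter and several further estimators.

```python
# Minimal sketch comparing two exponential scale estimators by simulated MSE.
import numpy as np

rng = np.random.default_rng(8)
true_scale, n, n_sim = 2.0, 20, 5000
mle_est, lsq_est = np.empty(n_sim), np.empty(n_sim)

for s in range(n_sim):
    x = np.sort(rng.exponential(true_scale, n))
    mle_est[s] = x.mean()                           # MLE = moment estimator = sample mean
    # Probability-plot least squares: x_i = scale * (-ln(1 - F_i)),
    # with median-rank plotting positions F_i = (i - 0.3) / (n + 0.4).
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    z = -np.log(1.0 - F)
    lsq_est[s] = (z @ x) / (z @ z)                  # slope through the origin

def mse(est):
    return np.mean((est - true_scale) ** 2)

print(f"MSE MLE/moments: {mse(mle_est):.4f}   MSE probability-plot LS: {mse(lsq_est):.4f}")
```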

  20. Cosmological parameter estimation using Particle Swarm Optimization

    Science.gov (United States)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and which make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
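
    A minimal PSO sketch follows: a swarm of particles moves through parameter space under the pull of each particle's best position and the swarm's global best, minimizing a negative log-likelihood. The two-parameter Gaussian target stands in for a CMB likelihood surface, and all PSO settings below are illustrative assumptions, not those used with the WMAP data.

```python
# Hedged sketch of Particle Swarm Optimization minimizing a negative log-likelihood.
import numpy as np

rng = np.random.default_rng(9)
data = rng.normal(1.5, 0.7, 500)                 # "observations"

def neg_log_like(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

n_particles, n_iter, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients
pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([neg_log_like(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([neg_log_like(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("PSO estimate:", gbest[0], np.exp(gbest[1]))   # compare with the mean and std of data
```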

  1. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-04-01

    Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and computationally efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis in urban hydrology. To allow for bigger discrepancies during wet weather, we make the variance of bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described by means of a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias (due to input and structural errors), and measurement errors. In our case study, the best performing bias description was the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to directly

  2. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

    Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria, were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model was found to be better than that of the Horton and Philip models in most cases, based on both approaches to parameter estimation.
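
    The role the GRG spreadsheet solver plays here, fitting a nonlinear infiltration equation to observed rates, can equally be sketched with a general-purpose nonlinear least-squares routine. The Kostiakov rate form, the starting values, and the synthetic infiltration data below are assumptions for illustration, not the Umuahia field data.

```python
# Hedged sketch of fitting Kostiakov infiltration parameters by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def kostiakov(t, a, b):
    return a * t ** (-b)          # infiltration-rate form of the Kostiakov model

t = np.array([5, 10, 20, 30, 60, 90, 120, 180], dtype=float)       # minutes
f_obs = np.array([28.0, 20.5, 15.2, 12.8, 9.4, 8.0, 7.2, 6.1])     # mm/h (illustrative)

(a_hat, b_hat), _ = curve_fit(kostiakov, t, f_obs, p0=[30.0, 0.3])
print(f"Kostiakov parameters: a = {a_hat:.2f}, b = {b_hat:.3f}")
```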

  3. Estimation of distances to stars with stellar parameters from LAMOST

    CERN Document Server

    Carlin, Jeffrey L; Newberg, Heidi Jo; Beers, Timothy C; Chen, Li; Deng, Licai; Guhathakurta, Puragra; Hou, Jinliang; Hou, Yonghui; Lepine, Sebastien; Li, Guangwei; Luo, A-Li; Smith, Martin C; Wu, Yue; Yang, Ming; Yanny, Brian; Zhang, Haotong; Zheng, Zheng

    2015-01-01

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show ...

  4. Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise

    International Nuclear Information System (INIS)

    Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)

  5. Joint MAP bias estimation and data association: algorithms

    Science.gov (United States)

    Danford, Scott; Kragel, Bret; Poore, Aubrey

    2007-09-01

    The problem of joint maximum a posteriori (MAP) bias estimation and data association belongs to a class of nonconvex mixed integer nonlinear programming problems. These problems are difficult to solve due to both the combinatorial nature of the problem and the nonconvexity of the objective function or constraints. A specific problem that has received some attention in the tracking literature is the target object map problem, in which one tries to match a set of tracks as observed by two different sensors in the presence of biases, which are modeled here as a translation between the track states. The general framework also applies to problems in which the costs are general nonlinear functions of the biases. The goal of this paper is to present a class of algorithms based on the branch and bound framework and the "all-pairs" and k-best heuristics that provide a good initial upper bound for a branch and bound algorithm. These heuristics can be used as part of a real-time algorithm or as part of an "anytime algorithm" within the branch and bound framework. In addition, we consider both the A*-search and depth-first search procedures as well as several efficiency improvements such as gating. While this paper focuses on the algorithms, a second paper will focus on simulations.

  6. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    Science.gov (United States)

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  7. State and parameter estimation in bio processes

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France)]|[Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)]

    1994-12-31

    A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc... In this article, an adaptive estimation algorithm is proposed to recover the state and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments concerning the estimation of biomass and product concentrations and the specific growth rate during batch, fed-batch and continuous fermentation processes are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.

  8. The power spectrum of systematics in cosmic shear tomography and the bias on cosmological parameters

    CERN Document Server

    Cardone, V F; Calabrese, E; Galli, S; Huang, Z; Maoli, R; Melchiorri, A; Scaramella, R

    2013-01-01

    Cosmic shear tomography has emerged as one of the most promising tools to both investigate the nature of dark energy and discriminate between General Relativity and modified gravity theories. In order to successfully achieve these goals, systematics in shear measurements have to be taken into account; their impact on the weak lensing power spectrum has to be carefully investigated in order to estimate the bias induced on the inferred cosmological parameters. To this end, we develop here an efficient tool to compute the power spectrum of systematics by propagating, in a realistic way, shear measurement, source properties and survey setup uncertainties. Starting from analytical results for unweighted moments and general assumptions on the relation between measured and actual shear, we derive analytical expressions for the multiplicative and additive bias, showing how these terms depend not only on the shape measurement errors, but also on the properties of the source galaxies (namely, size, magnitude and spectr...

  9. Estimation of transmitter and receiver code biases using concurrent GNSS and ionosonde measurements

    Science.gov (United States)

    Sapundjiev, Danislav; Stankov, Stan; Verhulst, Tobias

    2016-07-01

    The total electron content (TEC) is an important ionospheric characteristic used extensively in ionosphere / space research and in various positioning / navigation applications based on Global Navigation Satellite System (GNSS) signals. TEC calculation using dual-frequency GNSS receivers is the norm nowadays, but for calculation of the absolute TEC the correct estimation of the Differential Code Biases (DCBs) is crucial. Various methods for estimation of these biases are currently in use, and most of them make several (rather strong) assumptions concerning the ionosphere structure and state which do not necessarily represent the real situation. In this presentation we explore the opportunities offered by modern high-resolution digital ionosonde measurements to deduce key ionospheric properties / parameters in order to develop a new algorithm for real-time DCB estimation and evaluate its performance.

  10. CMB anisotropy due to filamentary gas: power spectrum and cosmological parameter bias

    Energy Technology Data Exchange (ETDEWEB)

    Shimon, Meir; Sadeh, Sharon; Rephaeli, Yoel, E-mail: meirs@wise.tau.ac.il, E-mail: shrs@post.tau.ac.il, E-mail: yoelr@wise.tau.ac.il [School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel)

    2012-10-01

    Hot gas in filamentary structures induces CMB anisotropy through the SZ effect. Guided by results from N-body simulations, we model the morphology and gas properties of filamentary gas and determine the power spectrum of the anisotropy. Our treatment suggests that power levels can be an appreciable fraction of the cluster contribution at multipoles l ≲ 1500. Its spatially irregular morphology and larger characteristic angular scales can help to distinguish this SZ signature from that of clusters. In addition to intrinsic interest in this most extended SZ signal as a probe of filaments, its impact on cosmological parameter estimation should also be assessed. We find that filament 'noise' can potentially bias determination of A{sub s}, n{sub s}, and w (the normalization of the primordial power spectrum, the scalar index, and the dark energy equation of state parameter, respectively) by more than the nominal statistical uncertainty in Planck SZ survey data. More generally, when inferred from future optimal cosmic-variance-limited CMB experiments, we find that virtually all parameters will be biased by more than the nominal statistical uncertainty estimated for these next generation CMB experiments.

  11. Estimation of Modal Parameters and their Uncertainties

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune

    1999-01-01

    In this paper it is shown how to estimate the modal parameters of a dynamic system, as well as their uncertainties, using the prediction error method on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...

  12. MODFLOW-style parameters in underdetermined parameter estimation

    Science.gov (United States)

    D'Oria, Marco D.; Fienen, Michael N.

    2012-01-01

    In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.

  13. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  14. Minimally Corrective, Approximately Recovering Priors to Correct Expert Judgement in Bayesian Parameter Estimation

    OpenAIRE

    May, Thomas Joseph

    2015-01-01

    Bayesian parameter estimation is a popular method to address inverse problems. However, since prior distributions are chosen based on expert judgement, the method can inherently introduce bias into the understanding of the parameters. This can be especially relevant in the case of distributed parameters where it is difficult to check for error. To minimize this bias, we develop the idea of a minimally corrective, approximately recovering prior (MCAR prior) that generates a guide for the prior...

  15. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Various weighting methods have been proposed by many researchers (Korn & Graubard, 2003; Pfeffermann, Skinner, Holmes,…

  16. Error covariance calculation for forecast bias estimation in hydrologic data assimilation

    Science.gov (United States)

    Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.

    2015-12-01

    To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification to the explicit propagation of the bias error covariance. The objective of this paper is to examine to which extent the choice for the propagation of the bias estimate and its error covariance influence the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which ground water storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
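
    The persistent-bias idea discussed above can be illustrated by the standard state-augmentation trick: the model state is extended with a random-walk bias term, and a Kalman filter updates both from the same observations. The simple linear storage model, the constant input deficit playing the role of forecast bias, and all noise settings below are illustrative assumptions, not the paper's hydrologic model or its calibrated covariances.

```python
# Hedged sketch of online forecast bias estimation with an augmented-state Kalman filter.
import numpy as np

rng = np.random.default_rng(10)
n, ar, true_bias = 300, 0.95, 5.0
truth, model_forcing = np.zeros(n), rng.exponential(1.0, n)
for k in range(1, n):
    truth[k] = ar * truth[k - 1] + model_forcing[k]
obs = truth + rng.standard_normal(n)                 # noisy storage observations

# Augmented state [storage, bias]; the bias is modeled as persistent (random walk).
F = np.array([[ar, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([0.5, 0.01])                             # process and bias-evolution noise
R = np.array([[1.0]])
x = np.zeros(2)
P = np.eye(2) * 10.0

for k in range(1, n):
    # Forecast with the biased model: its input misses true_bias at every step,
    # so the bias state has to absorb that deficit for the forecast to match obs.
    x = F @ x + np.array([model_forcing[k] - true_bias, 0.0])
    P = F @ P @ F.T + Q
    # Update with the observation.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (obs[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated bias state: {x[1]:.2f}  (true per-step input deficit {true_bias})")
```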

  17. Person-Independent Head Pose Estimation Using Biased Manifold Embedding

    Directory of Open Access Journals (Sweden)

    Sethuraman Panchanathan

    2008-02-01

    Full Text Available Head pose estimation has been an integral problem in the study of face recognition systems and human-computer interfaces, as part of biometric applications. A fine estimate of the head pose angle is necessary and useful for several face analysis applications. To determine the head pose, face images with varying pose angles can be considered to be lying on a smooth low-dimensional manifold in high-dimensional image feature space. However, when there are face images of multiple individuals with varying pose angles, manifold learning techniques often do not give accurate results. In this work, we propose a framework for a supervised form of manifold learning called Biased Manifold Embedding to obtain improved performance in head pose angle estimation. This framework goes beyond pose estimation, and can be applied to all regression applications. This framework, although formulated for a regression scenario, unifies other supervised approaches to manifold learning that have been proposed so far. Detailed studies of the proposed method are carried out on the FacePix database, which contains 181 face images each of 30 individuals with pose angle variations at a granularity of 1°. Since biometric applications in the real world may not contain this level of granularity in training data, an analysis of the methodology is performed on sparsely sampled data to validate its effectiveness. We obtained up to 2° average pose angle estimation error in the results from our experiments, which matched the best results obtained for head pose estimation using related approaches.

  18. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

  19. LISA parameter estimation using numerical merger waveforms

    Energy Technology Data Exchange (ETDEWEB)

    Thorpe, J I; McWilliams, S T; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G, E-mail: James.I.Thorpe@nasa.go [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States)

    2009-05-07

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 solar masses at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.
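
    The Fisher-matrix error estimate used above reduces to a simple recipe: form the matrix of noise-weighted inner products of the waveform's parameter derivatives and read 1-sigma bounds from the diagonal of its inverse. The sketch below assumes additive white Gaussian noise and a toy chirp-like signal in place of the LISA response model; the function and parameter names are illustrative only.

```python
import numpy as np

def fisher_errors(model, theta, t, sigma):
    """1-sigma parameter errors from the Fisher information matrix,
    assuming additive white Gaussian noise of standard deviation sigma."""
    eps = 1e-6 * np.maximum(np.abs(theta), 1.0)
    derivs = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps[i]
        tm[i] -= eps[i]
        derivs.append((model(t, tp) - model(t, tm)) / (2.0 * eps[i]))  # central differences
    D = np.vstack(derivs)                       # shape (n_params, n_samples)
    F = (D @ D.T) / sigma**2                    # Fisher matrix for white Gaussian noise
    return np.sqrt(np.diag(np.linalg.inv(F)))   # Cramer-Rao 1-sigma bounds

# Toy chirp-like signal standing in for a waveform model (not the LISA response).
def toy_signal(t, theta):
    amp, f0, fdot = theta
    return amp * np.sin(2.0 * np.pi * (f0 + 0.5 * fdot * t) * t)

t = np.linspace(0.0, 100.0, 2000)
theta0 = np.array([1.0, 0.05, 1e-4])            # amplitude, frequency, frequency drift
print(fisher_errors(toy_signal, theta0, t, sigma=0.5))
```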

  20. LISA parameter estimation using numerical merger waveforms

    CERN Document Server

    Thorpe, J I; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G

    2008-01-01

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of one million solar masses at a redshift of one were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  1. Parameter Estimation of Turbo Code Encoder

    Directory of Open Access Journals (Sweden)

    Mehdi Teimouri

    2014-01-01

    Full Text Available The problem of reconstruction of a channel code consists of finding out its design parameters solely based on its output. This paper investigates the problem of reconstruction of parallel turbo codes. Reconstruction of a turbo code has been addressed in the literature assuming that some of the parameters of the turbo encoder, such as the number of input and output bits of the constituent encoders and the puncturing pattern, are known. However, in practical noncooperative situations, these parameters are unknown and should be estimated before applying the reconstruction process. Considering such practical situations, this paper proposes a novel method to estimate the above-mentioned code parameters. The proposed algorithm increases the efficiency of the reconstruction process significantly by judiciously reducing the size of the search space based on an analysis of the observed channel code output. Moreover, simulation results show that the proposed algorithm is highly robust against channel errors when it is fed with noisy observations.

  2. LISA parameter estimation using numerical merger waveforms

    International Nuclear Information System (INIS)

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 solar masses at a redshift of z ∼ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  3. Parameter estimation of the WMTD model

    Institute of Scientific and Technical Information of China (English)

    LUO Ji; QIU Hong-bing

    2009-01-01

    The MTD (mixture transition distribution) model based on the Weibull distribution (WMTD model) is proposed in this paper, with a focus on its parameter estimation. An EM algorithm for estimation is given and shown to work well in simulations, and a bootstrap method is used to obtain confidence regions for the parameters. Finally, the results of a real example--predicting stock prices--show that the proposed WMTD model is able to capture the features of data from thick-tailed distributions better than the GMTD (Gaussian MTD) model.

  4. Hurst Parameter Estimation Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    S. Ledesma-Orozco

    2011-08-01

    Full Text Available The Hurst parameter captures the amount of long-range dependence (LRD) in a time series. There are several methods to estimate the Hurst parameter, the most popular being the variance-time plot, the R/S plot, the periodogram, and Whittle's estimator. The first three are graphical methods, and their estimation accuracy depends on how the plot is interpreted and calculated. In contrast, Whittle's estimator is based on a maximum likelihood technique and does not depend on reading a graph; however, it is computationally expensive. A new method to estimate the Hurst parameter is proposed. This new method is based on an artificial neural network. Experimental results show that this method outperforms traditional approaches and can be used in applications where a fast and accurate estimate of the Hurst parameter is required, e.g., computer network traffic control. Additionally, the Hurst parameter was computed on series of different lengths using several methods. The simulation results show that the proposed method is at least ten times faster than traditional methods.
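
    As a point of reference for the traditional approaches mentioned above, the aggregated-variance (variance-time) estimator can be written in a few lines; this is one of the classical graphical methods, not the neural-network estimator proposed in the paper. The block sizes and the white-noise test signal are arbitrary choices for the demo.

```python
import numpy as np

def hurst_variance_time(x, block_sizes):
    """Aggregated-variance estimator: Var(X^(m)) ~ m^(2H-2) for LRD series."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)  # block means
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope, _ = np.polyfit(logs_m, logs_v, 1)    # slope of the variance-time plot
    return 1.0 + slope / 2.0

# Usage on a toy series: white noise should give H close to 0.5.
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
print(hurst_variance_time(x, block_sizes=[2**k for k in range(2, 12)]))
```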

  5. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights on the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  6. Performance Analysis of Parameter Estimation Using LASSO

    OpenAIRE

    Panahi, Ashkan; Viberg, Mats

    2012-01-01

    The Least Absolute Shrinkage and Selection Operator (LASSO) has gained attention in a wide class of continuous parametric estimation problems with promising results. It has been a subject of research for more than a decade. Due to the nature of LASSO, the previous analyses have been non-parametric. This ignores useful information and makes it difficult to compare LASSO to traditional estimators. In particular, the role of the regularization parameter and super-resolution properties of LASSO h...
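
    A compact illustration of the role of the regularization parameter discussed above, using scikit-learn's Lasso on a synthetic sparse linear model; the design matrix, true coefficients and alpha values are arbitrary assumptions for the demo, not part of the cited analysis.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[3, 10, 27]] = [2.0, -1.5, 1.0]          # sparse ground truth (assumed for the demo)
y = X @ beta + rng.normal(scale=0.5, size=n)

# The regularization parameter alpha trades off sparsity against bias in the estimates:
# larger alpha prunes more coefficients but shrinks the surviving ones further.
for alpha in (0.01, 0.1, 0.5):
    fit = Lasso(alpha=alpha).fit(X, y)
    support = np.flatnonzero(fit.coef_)
    print(alpha, support, np.round(fit.coef_[support], 2))
```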

  7. Biosorption Parameter Estimation with Genetic Algorithm

    OpenAIRE

    Yung-Tse Hung; Eui Yong Kim; Xiao Feng; Khim Hoong Chu

    2011-01-01

    In biosorption research, a fairly broad range of mathematical models are used to correlate discrete data points obtained from batch equilibrium, batch kinetic or fixed bed breakthrough experiments. Most of these models are inherently nonlinear in their parameters. Some of the models have enjoyed widespread use, largely because they can be linearized to allow the estimation of parameters by least-squares linear regression. Selecting a model for data correlation appears to be dictated by the ea...

  8. Parameter Estimation of Noise Corrupted Sinusoids

    OpenAIRE

    O'Brien, W., Jr.; Johnnie, Nathan

    2011-01-01

    Existing algorithms for fitting the parameters of a sinusoid to noisy discrete-time observations are not always successful due to initial value sensitivity and other issues. This paper demonstrates the techniques of FIR filtering, the fast Fourier transform, circular autocorrelation, and nonlinear least squares minimization as useful in estimating the amplitude, frequency and phase of a low-frequency time-delayed sinusoid describing simple harmonic motion. Alternative mea...
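
    In the spirit of the combination just described, here is a minimal sketch that seeds a nonlinear least squares fit of a sinusoid with FFT-based initial guesses, which is one way to avoid the initial-value sensitivity mentioned above. The sampling rate, true parameters and noise level are made-up values, and the FIR-filtering and time-delay aspects of the paper are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
fs, f_true, amp_true, phase_true = 200.0, 3.2, 1.5, 0.7
t = np.arange(0, 5, 1 / fs)
y = amp_true * np.sin(2 * np.pi * f_true * t + phase_true) + rng.normal(0, 0.3, t.size)

# FFT-based initial guesses keep the nonlinear fit away from bad local minima.
spec = np.fft.rfft(y - y.mean())
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0 = freqs[np.argmax(np.abs(spec))]          # dominant frequency
a0 = 2 * np.abs(spec).max() / t.size         # rough amplitude from the peak magnitude

def model(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

popt, _ = curve_fit(model, t, y, p0=[a0, f0, 0.0, y.mean()])
print(np.round(popt, 3))   # amplitude, frequency, phase, offset
```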

  9. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.

  10. ZASPE: Zonal Atmospheric Stellar Parameters Estimator

    Science.gov (United States)

    Brahm, Rafael; Jordan, Andres; Hartman, Joel; Bakos, Gaspar

    2016-07-01

    ZASPE (Zonal Atmospheric Stellar Parameters Estimator) computes the atmospheric stellar parameters (Teff, log(g), [Fe/H] and vsin(i)) from echelle spectra via least squares minimization with a pre-computed library of synthetic spectra. The minimization is performed only in the spectral zones most sensitive to changes in the atmospheric parameters. The uncertainties and covariances computed by ZASPE assume that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one that produces the best fit. ZASPE requires a grid of synthetic spectra and can use any pre-computed library with minor modifications.

  11. Aquifer parameter estimation from surface resistivity data.

    Science.gov (United States)

    Niwas, Sri; de Lima, Olivar A L

    2003-01-01

    This paper is devoted to the additional use, other than ground water exploration, of surface geoelectrical sounding data for aquifer hydraulic parameter estimation. In a mesoscopic framework, approximate analytical equations are developed separately for saline and for fresh water saturations. A few existing useful aquifer models, both for clean and shaley sandstones, are discussed in terms of their electrical and hydraulic effects, along with the linkage between the two. These equations are derived for insight and physical understanding of the phenomenon. At a macroscopic scale, a general aquifer model is proposed and analytical relations are derived for meaningful estimation, with a higher level of confidence, of hydraulic parameters from electrical parameters. The physical reasons for two different equations at the macroscopic level are explicitly explained to avoid confusion. Numerical examples from the existing literature are reproduced to buttress our viewpoint. PMID:12533080

  12. Rasch Model Parameter Estimation in the Presence of a Nonnormal Latent Trait Using a Nonparametric Bayesian Approach

    Science.gov (United States)

    Finch, Holmes; Edwards, Julianne M.

    2016-01-01

    Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…

  13. Evaluating treatment effectiveness under model misspecification: a comparison of targeted maximum likelihood estimation with bias-corrected matching

    OpenAIRE

    Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...

  14. Estimating the 3D pore size distribution of biopolymer networks from directionally biased data.

    Science.gov (United States)

    Lang, Nadine R; Münster, Stefan; Metzner, Claus; Krauss, Patrick; Schürmann, Sebastian; Lange, Janina; Aifantis, Katerina E; Friedrich, Oliver; Fabry, Ben

    2013-11-01

    The pore size of biopolymer networks governs their mechanical properties and strongly impacts the behavior of embedded cells. Confocal reflection microscopy and second harmonic generation microscopy are widely used to image biopolymer networks; however, both techniques fail to resolve vertically oriented fibers. Here, we describe how such directionally biased data can be used to estimate the network pore size. We first determine the distribution of distances from random points in the fluid phase to the nearest fiber. This distribution follows a Rayleigh distribution, regardless of isotropy and data bias, and is fully described by a single parameter--the characteristic pore size of the network. The bias of the pore size estimate due to the missing fibers can be corrected by multiplication with the square root of the visible network fraction. We experimentally verify the validity of this approach by comparing our estimates with data obtained using confocal fluorescence microscopy, which represents the full structure of the network. As an important application, we investigate the pore size dependence of collagen and fibrin networks on protein concentration. We find that the pore size decreases with the square root of the concentration, consistent with a total fiber length that scales linearly with concentration. PMID:24209841
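
    A small sketch of the estimation and bias-correction steps described above: the Rayleigh scale is fitted by maximum likelihood to nearest-fiber distances and then shrunk by the square root of the visible network fraction. Interpreting the characteristic pore size directly as the Rayleigh scale parameter, and the synthetic distance data, are assumptions for illustration.

```python
import numpy as np

def pore_size_estimate(nearest_fiber_distances, visible_fraction):
    """Characteristic pore size from nearest-fiber distances (Rayleigh model),
    with the missing-fiber bias correction described in the abstract."""
    d = np.asarray(nearest_fiber_distances, dtype=float)
    sigma = np.sqrt(np.sum(d**2) / (2.0 * d.size))    # Rayleigh scale, maximum likelihood
    return sigma * np.sqrt(visible_fraction)           # shrink to undo the missing-fiber bias

# Hypothetical usage: in practice d would be measured on the directionally biased
# reconstruction (so the sqrt(visible_fraction) factor removes the resulting
# overestimate); here the distances are simply drawn from a Rayleigh distribution.
rng = np.random.default_rng(0)
d = rng.rayleigh(scale=2.0, size=5000)
print(pore_size_estimate(d, visible_fraction=0.7))
```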

  15. A class of shrinkage estimators for the shape parameter of the Weibull lifetime model

    Directory of Open Access Journals (Sweden)

    Zuhair Alhemyari

    2012-03-01

    Full Text Available In this paper, we propose two classes of shrinkage estimators for the shape parameter of the Weibull distribution in censored samples. The proposed estimators are studied theoretically and have been compared numerically with existing estimators. Computer-intensive calculations of bias and relative efficiency show that, for different levels of significance and for varying constants involved in the proposed estimators, the proposed testimators fare better than classical and existing estimators.

  16. Parameter estimation in channel network flow simulation

    Institute of Scientific and Technical Information of China (English)

    Han Longxi

    2008-01-01

    Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.

  17. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
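
    A much cruder baseline than the asymptotic-likelihood approach described above, but it makes the estimation target concrete: match medians and interquartile ranges to recover μ and σ without assuming a parametric form for the common distribution. The Student-t example data are an arbitrary choice for the demo.

```python
import numpy as np

def location_scale_fit(x, y):
    """Median/IQR matching for Y =d mu + sigma * X: a simple nonparametric
    baseline, not the asymptotic-likelihood estimator of the paper."""
    iqr = lambda v: np.subtract(*np.percentile(v, [75, 25]))
    sigma = iqr(y) / iqr(x)                    # scale from the ratio of spreads
    mu = np.median(y) - sigma * np.median(x)   # location from the shifted median
    return mu, sigma

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=2000)            # common shape, unknown to the estimator
y = 3.0 + 2.0 * rng.standard_t(df=5, size=2000)
print(location_scale_fit(x, y))                # roughly (3, 2)
```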

  18. Parameter estimation in channel network flow simulation

    Directory of Open Access Journals (Sweden)

    Han Longxi

    2008-03-01

    Full Text Available Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.

  19. Multiple Parameter Estimation With Quantized Channel Output

    CERN Document Server

    Mezghani, Amine; Nossek, Josef A

    2010-01-01

    We present a general problem formulation for optimal parameter estimation based on quantized observations, with application to antenna array communication and processing (channel estimation, time-of-arrival (TOA) and direction-of-arrival (DOA) estimation). The work is of interest in the case when low resolution A/D-converters (ADCs) have to be used to enable higher sampling rate and to simplify the hardware. An Expectation-Maximization (EM) based algorithm is proposed for solving this problem in a general setting. Besides, we derive the Cramer-Rao Bound (CRB) and discuss the effects of quantization and the optimal choice of the ADC characteristic. Numerical and analytical analysis reveals that reliable estimation may still be possible even when the quantization is very coarse.

  20. Sensor Placement for Modal Parameter Subset Estimation

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2016-01-01

    The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, the number of which is determined a priori, such that the minimum Fisher information that the frequency responses carry on the selected modal parameter subset is, in some sense, maximized. The approach is validated in the context of a simple 10-DOF mass-spring-damper system by computing the variance of a set of identified modal parameters in a Monte Carlo setting for a set of sensor configurations. It is shown that the widely used Effective Independence (EI) method, which uses the modal amplitudes as surrogates for the parameters of interest, provides sensor configurations yielding theoretical lower bound variances whose maxima are up to 30% larger than those obtained by use of the max-min approach.

  1. Estimation of accuracy and bias in genetic evaluations with genetic groups using sampling

    NARCIS (Netherlands)

    Hickey, J.M.; Keane, M.G.; Kenny, D.A.; Cromie, A.R.; Mulder, H.A.; Veerkamp, R.F.

    2008-01-01

    Accuracy and bias of estimated breeding values are important measures of the quality of genetic evaluations. A sampling method that accounts for the uncertainty in the estimation of genetic group effects was used to calculate accuracy and bias of estimated effects. The method works by repeatedly sim

  2. Biased Cosmology: Pivots, Parameters, and Figures of Merit

    Energy Technology Data Exchange (ETDEWEB)

    Linder, Eric V.

    2006-06-19

    In the quest for precision cosmology, one must ensure that the cosmology is accurate as well. We discuss figures of merit for determining from observations whether the dark energy is a cosmological constant or dynamical, with special attention to the best determined equation of state value, at the "pivot" or decorrelation redshift. We show this is not necessarily the best lever on testing consistency with the cosmological constant, and moreover is subject to bias. The standard parametrization of w(a)=w_0+w_a(1-a) by contrast is quite robust, as tested by extensions to higher order parametrizations and modified gravity. Combination of complementary probes gives strong immunization against inaccurate, but precise, cosmology.

  3. On closure parameter estimation in chaotic systems

    Directory of Open Access Journals (Sweden)

    J. Hakkarainen

    2012-02-01

    Full Text Available Many dynamical models, such as numerical weather prediction and climate models, contain so-called closure parameters. These parameters usually appear in physical parameterizations of sub-grid scale processes, and they act as "tuning handles" of the models. Currently, the values of these parameters are specified mostly manually, but the increasing complexity of the models calls for more algorithmic ways to perform the tuning. Traditionally, parameters of dynamical systems are estimated by directly comparing the model simulations to observed data using, for instance, a least squares approach. However, if the models are chaotic, the classical approach can be ineffective, since small errors in the initial conditions can lead to large, unpredictable deviations from the observations. In this paper, we study numerical methods available for estimating closure parameters in chaotic models. We discuss three techniques: off-line likelihood calculations using filtering methods, the state augmentation method, and the approach that utilizes summary statistics from long model simulations. The properties of the methods are studied using a modified version of the Lorenz 95 system, where the effect of fast variables is described using a simple parameterization.

  4. Multiple emitter location and signal parameter estimation

    Science.gov (United States)

    Schmidt, R. O.

    1986-03-01

    Multiple signal classification (MUSIC) techniques involved in determining the parameters of multiple wavefronts arriving at an antenna array are discussed. A MUSIC algorithm is described, which provides asymptotically unbiased estimates of (1) the number of signals, (2) directions of arrival (or emitter locations), (3) strengths and cross correlations among the incident waveforms, and (4) the strength of noise/interference. The example of the use of the algorithm as a multiple frequency estimator operating on time series is examined. Comparisons of this method with methods based on maximum likelihood and maximum entropy, as well as conventional beamforming, are presented.
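
    A bare-bones numpy sketch of the MUSIC idea summarized above, for direction-of-arrival estimation with a uniform linear array: form the sample covariance, split off the noise subspace, and scan a grid of steering vectors. The array geometry, source angles, SNR and grid are invented for the demo, and none of this reproduces Schmidt's full multi-parameter formulation.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 721)):
    """MUSIC pseudospectrum for a uniform linear array with spacing d (in
    wavelengths). X has shape (n_sensors, n_snapshots)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvecs[:, : m - n_sources]                  # noise subspace
    k = np.arange(m)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(grid)))   # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid, 1.0 / denom

# Two uncorrelated narrowband sources at -20 and 35 degrees, 8-sensor array (toy data).
rng = np.random.default_rng(0)
m, n_snap, doas = 8, 400, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(doas))
S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))

grid, P = music_spectrum(X, n_sources=2)
peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]            # two strongest local maxima
print(np.sort(grid[top]))                          # should sit near the true DOAs
```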

  5. Optimal estimation of free energies and stationary densities from multiple biased simulations

    CERN Document Server

    Wu, Hao

    2013-01-01

    When studying high-dimensional dynamical systems such as macromolecules, quantum systems and polymers, a prime concern is the identification of the most probable states and their stationary probabilities or free energies. Often, these systems have metastable regions or phases, which prohibits estimating the stationary probabilities by direct simulation. Efficient sampling methods such as umbrella sampling, metadynamics and conformational flooding have been developed that perform a number of simulations in which the system's potential is biased so as to accelerate the rare barrier-crossing events. A joint free energy profile or stationary density can then be obtained from these biased simulations with the weighted histogram analysis method (WHAM). This approach (a) requires a few essential order parameters to be defined in which the histogram is set up, and (b) assumes that each simulation is in global equilibrium. Both assumptions make the investigation of high-dimensional systems with previously unknown energy landscape ...

  6. Rapid Compact Binary Coalescence Parameter Estimation

    Science.gov (United States)

    Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong

    2016-03-01

    The first observation run with second generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, the benefit of prompt and accurate estimation of the orientation and physical parameters of binary coalescences is obvious in its coupling to electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases, convergence is delayed until well after the electromagnetic fluence has subsided thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.

  7. Measurement Data Modeling and Parameter Estimation

    CERN Document Server

    Wang, Zhengming; Yao, Jing; Gu, Defeng

    2011-01-01

    Measurement Data Modeling and Parameter Estimation integrates mathematical theory with engineering practice in the field of measurement data processing. Presenting the first-hand insights and experiences of the authors and their research group, it summarizes cutting-edge research to facilitate the application of mathematical theory in measurement and control engineering, particularly for those interested in aeronautics, astronautics, instrumentation, and economics. Requiring a basic knowledge of linear algebra, computing, and probability and statistics, the book illustrates key lessons with ta

  8. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    Science.gov (United States)

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-09-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter.

  9. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    Science.gov (United States)

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156

  10. GOCE gradiometer: estimation of biases and scale factors of all six individual accelerometers by precise orbit determination

    NARCIS (Netherlands)

    Visser, P.N.A.M.

    2008-01-01

    A method has been implemented and tested for estimating bias and scale factor parameters for all six individual accelerometers that will fly on-board of GOCE and together form the so-called gradiometer. The method is based on inclusion of the individual accelerometer observations in precise orbit de

  11. Taking Variable Correlation into Consideration during Parameter Estimation

    OpenAIRE

    T.J. Santos; Pinto, J C.

    1998-01-01

    Variable correlations are usually neglected during parameter estimation. Very frequently these are gross assumptions and may potentially lead to inadequate interpretation of final estimation results. For this reason, variable correlation and model parameters are sometimes estimated simultaneously in certain parameter estimation procedures. It is shown, however, that usually taking variable correlation into consideration during parameter estimation may be inadequate and unnecessary, unless ind...

  12. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    Science.gov (United States)

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
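
    For concreteness, the sketch below fits a fixed-effects Gompertz curve to pooled body-weight data with scipy; the mixed-model features highlighted above (bird-specific random effects and heterogeneous residual variance) are not reproduced, and the ages, parameter values and noise model are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, w_max, b, c):
    """Gompertz growth curve: asymptotic weight w_max, shape b, rate c."""
    return w_max * np.exp(-b * np.exp(-c * t))

# Hypothetical weekly body weights for 20 birds over 8 weeks (pooled, fixed effects only).
rng = np.random.default_rng(0)
age = np.tile(np.arange(0, 57, 7), 20)                     # days
true_w = gompertz(age, 4200.0, 4.5, 0.06)
bw = true_w * (1 + rng.normal(0, 0.05, age.size))          # multiplicative noise (assumed)

popt, pcov = curve_fit(gompertz, age, bw, p0=[4000.0, 4.0, 0.05])
print(np.round(popt, 3))                    # parameter estimates
print(np.round(np.sqrt(np.diag(pcov)), 3))  # their approximate standard errors
```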

  13. Estimating Production Potentials: Expert Bias in Applied Decision Making

    International Nuclear Information System (INIS)

    A study was conducted to evaluate how workers predict manufacturing production potentials given positively and negatively framed information. Findings indicate the existence of a bias toward positive information and suggest that this bias may be reduced with experience but is nevertheless maintained. Experts err in the same way nonexperts do in differentially processing negative and positive information. Additionally, both experts and nonexperts tend to overestimate production potentials in a positive direction. The authors propose that these biases should be addressed with further research, including cross-domain analyses and consideration in training, workplace design, and human performance modeling.

  14. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It was expected that the agreement between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract (translated from Indonesian): PARAMETER ESTIMATION IN A BREAD BAKING MODEL. Bread product quality is highly dependent on the baking process used. A model developed with qualitative and quantitative methods was calibrated with experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure, i.e., first the parameters of the heat and mass transfer model, then the parameters of the transformation model, and ...

  15. Clustering of dark matter tracers: renormalizing the bias parameters

    OpenAIRE

    McDonald, Patrick

    2006-01-01

    A commonly used perturbative method for computing large-scale clustering of tracers of mass density, like galaxies, is to model the tracer density field as a Taylor series in the local smoothed mass density fluctuations, possibly adding a stochastic component. I suggest a set of parameter redefinitions, eliminating problematic perturbative correction terms, that should represent a modest improvement, at least, to this method. As presented here, my method can be used to compute the power spect...

  16. A two parameter ratio-product-ratio estimator using auxiliary information

    CERN Document Server

    Chami, Peter S; Thomas, Doneal

    2012-01-01

    We propose a two parameter ratio-product-ratio estimator for a finite population mean in a simple random sample without replacement following the methodology in Ray and Sahai (1980), Sahai and Ray (1980), Sahai and Sahai (1985) and Singh and Ruiz Espejo (2003). The bias and mean square error of our proposed estimator are obtained to the first degree of approximation. We derive conditions for the parameters under which the proposed estimator has smaller mean square error than the sample mean, ratio and product estimators. We carry out an application showing that the proposed estimator outperforms the traditional estimators using groundwater data taken from a geological site in the state of Florida.

  17. Parameter estimation in tree graph metabolic networks.

    Science.gov (United States)

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings. PMID:27688960

  18. Constrained low-cost GPS/INS filter with encoder bias estimation for ground vehicles' applications

    Science.gov (United States)

    Abdel-Hafez, Mamoun F.; Saadeddin, Kamal; Amin Jarrah, Mohammad

    2015-06-01

    In this paper, a constrained, fault-tolerant, low-cost navigation system is proposed for ground vehicle's applications. The system is designed to provide a vehicle navigation solution at 50 Hz by fusing the measurements of the inertial measurement unit (IMU), the global positioning system (GPS) receiver, and the velocity measurement from wheel encoders. A high-integrity estimation filter is proposed to obtain a high accuracy state estimate. The filter utilizes vehicle velocity constraints measurement to enhance the estimation accuracy. However, if the velocity measurement of the encoder is biased, the accuracy of the estimate is degraded. Therefore, a noise estimation algorithm is proposed to estimate a possible bias in the velocity measurement of the encoder. Experimental tests, with simulated biases on the encoder's readings, are conducted and the obtained results are presented. The experimental results show the enhancement in the estimation accuracy when the simulated bias is estimated using the proposed method.

  19. Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

    Science.gov (United States)

    Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.

    2016-01-01

    Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…

  20. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
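
    To make the "product of marginal likelihoods" idea concrete, here is a deliberately simplified toy in which each genomic region contributes an independent Poisson likelihood for a single diversity parameter; the data model, region lengths and grid search are invented for illustration and bear no relation to the paper's two-population model or its MCMCMC sampler.

```python
import numpy as np
from scipy.stats import poisson

# Composite likelihood: product (here, sum of logs) of marginal likelihoods across regions.
# Toy model: region i has segregating-site count S_i ~ Poisson(theta * L_i).
rng = np.random.default_rng(0)
theta_true = 0.002
L = rng.integers(5_000, 20_000, size=300)          # region lengths in bp (hypothetical)
S = rng.poisson(theta_true * L)

def composite_loglik(theta):
    return poisson.logpmf(S, theta * L).sum()       # sum of marginal log-likelihoods

grid = np.linspace(0.0005, 0.005, 200)
cl = np.array([composite_loglik(th) for th in grid])
print(f"composite-likelihood estimate: {grid[cl.argmax()]:.4f} (true {theta_true})")
```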

  1. Misleading Population Estimates: Biases and Consistency of Visual Surveys and Matrix Modelling in the Endangered Bearded Vulture

    OpenAIRE

    Antoni Margalida; Daniel Oro; Ainara Cortés-Avizanda; Rafael Heredia; Donázar, José A.

    2011-01-01

    Conservation strategies for long-lived vertebrates require accurate estimates of parameters relative to the populations' size, numbers of non-breeding individuals (the "cryptic" fraction of the population) and the age structure. Frequently, visual survey techniques are used to make these estimates but the accuracy of these approaches is questionable, mainly because of the existence of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (...

  2. An Algorithm for Motion Parameter Direct Estimate

    Directory of Open Access Journals (Sweden)

    Caldelli Roberto

    2004-01-01

    Full Text Available Motion estimation in image sequences is undoubtedly one of the most studied research fields, given that motion estimation is a basic tool for disparate applications, ranging from video coding to pattern recognition. In this paper a new methodology is presented which, by minimizing a specific potential function, directly determines for each image pixel the motion parameters of the object to which the pixel belongs. The approach is based on Markov random field modelling, acting on a first-order neighborhood of each point, and on a simple motion model that accounts for rotations and translations. Experiments on both synthetic (noiseless and noisy) and real-world sequences have been carried out, and they demonstrate the good performance of the adopted technique. Furthermore, a quantitative and qualitative comparison with other well-known approaches has confirmed the goodness of the proposed methodology.

  3. Parameter estimation using B-Trees

    DEFF Research Database (Denmark)

    Schmidt, Albrecht; Bøhlen, Michael H.

    2004-01-01

    This paper presents a method for accelerating algorithms for computing common statistical operations like parameter estimation or sampling on B-Tree indexed data; the work was carried out in the context of visualisation of large scientific data sets. The underlying idea is the following: the shape of balanced data structures like B-Trees encodes and reflects data semantics according to the balance criterion. For example, clusters in the index attribute are somewhat likely to be present not only on the data or leaf level of the tree but should propagate up into the interior levels. The paper also hints at opportunities and limitations of this approach for visualisation of large data sets. The advantages of the method are manifold. Not only does it enable advanced algorithms through a performance boost for basic operations like density estimation, but it also builds on functionality that is...

  4. Parameter Estimation in Active Plate Structures

    DEFF Research Database (Denmark)

    Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.;

    2006-01-01

    In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem through gradient based optimization techniques, while the second is based on a metamodel of the inverse problem, using artificial neural networks. A numerical higher order finite element laminated plate model is used in both methods and results are compared and discussed through a simulated...

  5. Squared visibility estimator. Calibrating biases to reach very high dynamic range

    CERN Document Server

    Perrin, G

    2005-01-01

    In the near infrared, where detectors are limited by read-out noise, most interferometers have been operated in wide band in order to benefit from larger photon rates. We analyze in this paper the biases caused by instrumental and turbulent effects on $V^2$ estimators for both the narrow and wide band cases. Visibilities are estimated from samples of the interferogram using two different estimators, $V^{2}_1$ which is the classical sum of the squared modulus of Fourier components and a new estimator $V^{2}_2$ for which complex Fourier components are summed prior to taking the square. We present an approach for systematically evaluating the performance and limits of each estimator, and for optimizing observing parameters for each. We include the effects of spectral bandwidth, chromatic dispersion, scan length, and differential piston. We also establish the expression of the signal-to-noise ratio of the two estimators with respect to detector and photon noise. The $V^{2}_1$ estimator is insensitive to dispersion and ...

  6. Estimating Infiltration Parameters from Basic Soil Properties

    Science.gov (United States)

    van de Genachte, G.; Mallants, D.; Ramos, J.; Deckers, J. A.; Feyen, J.

    1996-05-01

    Infiltration data were collected on two rectangular grids with 25 sampling points each. Both experimental grids were located in tropical rain forest (Guyana), the first in an Arenosol area and the second in a Ferralsol field. Four different infiltration models were evaluated based on their performance in describing the infiltration data. The model parameters were estimated using non-linear optimization techniques. The infiltration behaviour in the Ferralsol was equally well described by the equations of Philip, Green-Ampt, Kostiakov and Horton. For the Arenosol, the equations of Philip, Green-Ampt and Horton were significantly better than the Kostiakov model. Basic soil properties such as textural composition (percentage sand, silt and clay), organic carbon content, dry bulk density, porosity, initial soil water content and root content were also determined for each sampling point of the two grids. The fitted infiltration parameters were then estimated based on other soil properties using multiple regression. Prior to the regression analysis, all predictor variables were transformed to normality. The regression analysis was performed using two information levels. The first information level contained only three texture fractions for the Ferralsol (sand, silt and clay) and four fractions for the Arenosol (coarse, medium and fine sand, and silt and clay). At the first information level the regression models explained up to 60% of the variability of some of the infiltration parameters for the Ferralsol field plot. At the second information level the complete textural analysis was used (nine fractions for the Ferralsol and six for the Arenosol). At the second information level a principal components analysis (PCA) was performed prior to the regression analysis to overcome the problem of multicollinearity among the predictor variables. Regression analysis was then carried out using the orthogonally transformed soil properties as the independent variables. Results for
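
    As a sketch of the second-information-level workflow described above (principal components computed from correlated soil properties, followed by multiple regression on the scores), the scikit-learn pipeline below is illustrative only: the soil property matrix, the "sorptivity" response and the number of retained components are all invented, with n = 25 simply mirroring the 25 sampling points per grid.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Regressing a fitted infiltration parameter on correlated soil properties via
# principal components, to sidestep multicollinearity among the predictors.
rng = np.random.default_rng(0)
n = 25                                               # one value per sampling point (assumed)
base = rng.normal(size=(n, 3))
soil = np.hstack([base, base[:, :2] + 0.05 * rng.normal(size=(n, 2))])  # correlated predictors
sorptivity = 1.0 + base @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=n)

model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
model.fit(soil, sorptivity)
print(f"R^2 on the fitted points: {model.score(soil, sorptivity):.2f}")
```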

  7. Fast cosmological parameter estimation using neural networks

    CERN Document Server

    Auld, T; Hobson, M P; Gull, S F

    2006-01-01

    We present a method for accelerating the calculation of CMB power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called CosmoNet, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released Pico algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of CosmoNet by computing CMB power spectra over a box in the parameter space of flat \\Lambda CDM models containing the 3\\sigma WMAP1 confidence region. We also use CosmoNet to compute the WMAP3 likelihood for flat \\Lambda CDM models and show that marginalised posteriors on parameters derived are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3% of cosmic variance, and that CosmoNet is \\sim 7 \\times 10^4 faster than CAMB (for flat ...

  8. Cosmological parameter estimation: impact of CMB aberration

    CERN Document Server

    Catena, Riccardo

    2012-01-01

    The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm's via a mixing matrix at any l. The common lore when performing a CMB based cosmological parameter estimation is to consider that Doppler affects only the l=1 multipole, and neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with a MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fidu...

  9. Performance of the maximum likelihood estimators for the parameters of multivariate generalized Gaussian distributions

    OpenAIRE

    Bombrun, Lionel; Pascal, Frédéric; Tourneret, Jean-Yves; Berthoumieu, Yannick

    2012-01-01

    This paper studies the performance of the maximum likelihood estimators (MLE) for the parameters of multivariate generalized Gaussian distributions. When the shape parameter belongs to ]0,1[, we have proved that the scatter matrix MLE exists and is unique up to a scalar factor. After providing some elements about this proof, an estimation algorithm based on a Newton-Raphson recursion is investigated. Some experiments illustrate the convergence speed of this algorithm. The bias and consistency...

  10. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Haufmann, Hendrik; Kruse, Robinson

    ... that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...

  11. Effect of Bias Correction of Satellite-Rainfall Estimates on Runoff Simulations at the Source of the Upper Blue Nile

    Directory of Open Access Journals (Sweden)

    Emad Habib

    2014-07-01

    Full Text Available Results of numerous evaluation studies indicated that satellite-rainfall products are contaminated with significant systematic and random errors. Therefore, such products may require refinement and correction before being used for hydrologic applications. In the present study, we explore a rainfall-runoff modeling application using the Climate Prediction Center-MORPHing (CMORPH) satellite rainfall product. The study area is the Gilgel Abbay catchment situated at the source basin of the Upper Blue Nile basin in Ethiopia, Eastern Africa. Rain gauge networks in such areas are typically sparse. We examine different bias correction schemes applied locally to the CMORPH product. These schemes vary in the degree to which spatial and temporal variability in the CMORPH bias fields is accounted for. Three schemes are tested: space and time-invariant, time-variant and spatially invariant, and space and time variant. Bias-corrected CMORPH products were used to calibrate and drive the Hydrologiska Byråns Vattenbalansavdelning (HBV) rainfall-runoff model. Applying the space and time-fixed bias correction scheme resulted in slight improvement of the CMORPH-driven runoff simulations, but in some instances caused deterioration. Accounting for temporal variation in the bias reduced the rainfall bias by up to 50%. Additional improvements were observed when both the spatial and temporal variability in the bias was accounted for. The rainfall bias was found to have a pronounced effect on model calibration. The calibrated model parameters changed significantly when using rainfall input from gauges alone, uncorrected, and bias-corrected CMORPH estimates. Changes of up to 81% were obtained for model parameters controlling the stream flow volume.
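
    A toy version of the time-variant, spatially lumped scheme described above: a multiplicative correction factor is recomputed from gauge and satellite accumulations in a trailing time window and applied to each satellite estimate. The window length, the gamma/lognormal toy data and the purely multiplicative form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bias_correct(sat, gauge, window=7):
    """Multiplicative bias correction of a satellite rainfall series against a
    gauge series, with the factor recomputed in a trailing time window
    (time-variant, spatially lumped)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    corrected = np.empty_like(sat)
    for t in range(sat.size):
        lo = max(0, t - window + 1)
        s, g = sat[lo:t + 1].sum(), gauge[lo:t + 1].sum()
        factor = g / s if s > 0 else 1.0        # fall back to no correction on dry windows
        corrected[t] = sat[t] * factor
    return corrected

# Hypothetical daily series: satellite underestimates the gauge "truth" by ~40% on average.
rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.6, scale=8.0, size=120)
sat = 0.6 * gauge * rng.lognormal(0.0, 0.3, size=120)
corr = bias_correct(sat, gauge)
print(round(sat.sum() / gauge.sum(), 2), round(corr.sum() / gauge.sum(), 2))
```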

  12. Error and bias in size estimates of whale sharks: implications for understanding demography

    OpenAIRE

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G.

    2016-01-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected ...

  13. Finite-Sample Bias Propagation in Autoregressive Estimation With the Yule–Walker Method

    NARCIS (Netherlands)

    Broersen, P.M.T.

    2009-01-01

    The Yule-Walker (YW) method for autoregressive (AR) estimation uses lagged-product (LP) autocorrelation estimates to compute an AR parametric spectral model. The LP estimates only have a small triangular bias in the estimated autocorrelation function and are asymptotically unbiased. However, using t
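
    A small Monte Carlo sketch can make the finite-sample effect concrete for an AR(1) process: the Yule-Walker coefficient estimate is the lag-1 lagged-product autocorrelation, and for short records its average falls below the true coefficient. The sample size, true coefficient and replication count below are arbitrary illustrative choices, not values from the paper.

        import numpy as np

        def yule_walker_ar1(x):
            """Yule-Walker estimate of an AR(1) coefficient: the lag-1
            lagged-product autocorrelation divided by the lag-0 term."""
            x = x - x.mean()
            return np.dot(x[:-1], x[1:]) / np.dot(x, x)

        def simulate_ar1(phi, n, rng):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.standard_normal()
            return x

        rng = np.random.default_rng(1)
        phi_true, n, reps = 0.9, 50, 2000   # arbitrary illustrative settings
        estimates = [yule_walker_ar1(simulate_ar1(phi_true, n, rng)) for _ in range(reps)]
        print(f"true phi = {phi_true}, mean YW estimate = {np.mean(estimates):.3f}")
        # The gap illustrates the finite-sample (triangular) bias;
        # it shrinks as n grows, consistent with asymptotic unbiasedness.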

  14. Estimation of distributional parameters for censored trace level water quality data. 2. Verification and applications

    Science.gov (United States)

    Helsel, D.R.; Gilliom, R.J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
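
    For orientation, one generic approach evaluated in studies of this kind is maximum likelihood for left-censored lognormal data, where detects contribute a density term and non-detects a cumulative-probability term at the detection limit. The sketch below implements only that generic MLE on synthetic data; it is not the log probability regression (LR) method, and the detection limit and sample are invented.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def censored_lognormal_mle(values, detected, dl):
            """MLE of (mu, sigma) of log-concentrations when observations
            below the detection limit dl are only known to be < dl."""
            y = np.log(values[detected])
            def nll(p):
                mu, log_sigma = p
                sigma = np.exp(log_sigma)
                ll = norm.logpdf(y, mu, sigma).sum()
                ll += (~detected).sum() * norm.logcdf(np.log(dl), mu, sigma)
                return -ll
            res = minimize(nll, x0=[y.mean(), np.log(y.std() + 1e-6)])
            return res.x[0], np.exp(res.x[1])

        rng = np.random.default_rng(2)
        true = rng.lognormal(mean=1.0, sigma=0.8, size=100)   # synthetic concentrations
        dl = 2.0                                              # hypothetical detection limit
        detected = true >= dl
        observed = np.where(detected, true, dl)               # censored record
        mu, sigma = censored_lognormal_mle(observed, detected, dl)
        print(f"estimated log-mean {mu:.2f}, log-sd {sigma:.2f} (true 1.0, 0.8)")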

  15. Parameter estimation in LISA Pathfinder operational exercises

    CERN Document Server

    Nofrarias, Miquel; Congedo, Giuseppe; Hueller, Mauro; Armano, M; Diaz-Aguilo, M; Grynagier, A; Hewitson, M

    2011-01-01

    In recent years the LISA Pathfinder data analysis team has been developing the infrastructure and methods required to run the mission during flight operations. These are gathered in the LTPDA toolbox, an object-oriented MATLAB toolbox that provides all the data analysis functionality for the mission while storing the history of all operations performed on the data, thus easing traceability and reproducibility of the analysis. The parameter estimation methods in the toolbox have recently been applied to data sets generated with the OSE (Off-line Simulations Environment), a detailed LISA Pathfinder non-linear simulator that will serve as a reference simulator during mission operations. These simulations, so-called operational exercises, aim at testing the on-orbit experiments in a realistic environment in terms of software and time constraints, and are the last verification step before translating these experiments into tele-command sequences for the spacecraft, producing therefore ve...

  16. Multifrequency SAR data for estimating hydrological parameters

    International Nuclear Information System (INIS)

    The sensitivity of backscattering coefficients to some geophysical parameters which play a significant role in hydrological processes (vegetation biomass, soil moisture and surface roughness) is discussed. Experimental results show that P-band makes it possible to monitor forest biomass, L-band appears to be well suited for wide-leaf crops, and C- and X-bands for small-leaf crops. Moreover, L-band backscattering makes the highest contribution to estimating soil moisture and surface roughness. The sensitivity to the spatial distribution of soil moisture and surface roughness is rather low, since both quantities affect the radar signal. However, when data collected at different dates are averaged over several fields, the correlation with soil moisture becomes significant, since the effects of spatial roughness variations are smoothed out. The retrieval of both soil moisture and surface roughness has been performed by means of a semiempirical model

  17. Health Indicators: Eliminating bias from convenience sampling estimators

    OpenAIRE

    HEDT, Bethany L.; Pagano, Marcello

    2011-01-01

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing abou...

  18. Bias in estimating food consumption of fish from stomach-content analysis

    DEFF Research Database (Denmark)

    Rindorf, Anna; Lewy, Peter

    2004-01-01

    This study presents an analysis of the bias introduced by using simplified methods to calculate food intake of fish from stomach contents. Three sources of bias were considered: (1) the effect of estimating consumption based on a limited number of stomach samples, (2) the effect of using average ... A serious positive bias was introduced by estimating food intake from the contents of pooled stomach samples. An expression is given that can be used to correct analytically for this bias. A new method, which takes into account the distribution and evacuation of individual prey types as well as the effect of other food in the stomach on evacuation, is suggested for estimating the intake of separate prey types. Simplifying the estimation by ignoring these factors biased estimates of consumption of individual prey types by up to 150% in a data example.

  19. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  20. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes

    DEFF Research Database (Denmark)

    Wood, Lesley; Egger, Matthias; Gluud, Lise Lotte;

    2008-01-01

    To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome....

  1. Systematic biases on galaxy haloes parameters from Yukawa-like gravitational potentials

    CERN Document Server

    Cardone, V F

    2011-01-01

    A viable alternative to the dark energy as a solution of the cosmic speed up problem is represented by Extended Theories of Gravity. Should this be indeed the case, there will be an impact not only on cosmological scales, but also at any scale, from the Solar System to extragalactic ones. In particular, the gravitational potential can be different from the Newtonian one commonly adopted when computing the circular velocity fitted to spiral galaxies rotation curves. Phenomenologically modelling the modified point mass potential as the sum of a Newtonian and a Yukawa like correction, we simulate observed rotation curves for a spiral galaxy described as the sum of an exponential disc and a NFW dark matter halo. We then fit these curves assuming parameterized halo models (either with an inner cusp or a core) and using the Newtonian potential to estimate the theoretical rotation curve. Such a study allows us to investigate the bias on the disc and halo model parameters induced by the systematic error induced by fo...

  2. System and method for motor parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  3. Parameter estimation with Sandage-Loeb test

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin, E-mail: gengjiajia163@163.com, E-mail: jfzhang@mail.neu.edu.cn, E-mail: zhangxin@mail.neu.edu.cn [Department of Physics, College of Sciences, Northeastern University, Shenyang 110004 (China)

    2014-12-01

    The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range of 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid the potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ωm and H0 in all the three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of SL test could improve the constraints on Ωm and H0 by more than 60% for all the three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w0 and wa by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of SL test would help improve the measurement precision of Ωm, H0, and wa by more than 70%, 20%, and 60%, respectively, for the w0waCDM model.
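
    For a sense of scale, the redshift drift underlying the SL test is dz/dt = (1+z)H0 - H(z), so the accumulated signal over an observing span Δt is roughly [(1+z)H0 - H(z)]Δt. The sketch below evaluates this for a flat ΛCDM background with illustrative parameter values; the numbers are assumptions, not the fiducial model fitted in the paper.

        import numpy as np

        # Illustrative flat-LambdaCDM parameters (assumed, not the paper's fit).
        H0_km_s_Mpc, Omega_m = 70.0, 0.3
        KM_PER_MPC = 3.0857e19
        H0 = H0_km_s_Mpc / KM_PER_MPC                 # H0 in 1/s

        def H(z):
            """Hubble rate for flat LambdaCDM, in 1/s."""
            return H0 * np.sqrt(Omega_m * (1 + z) ** 3 + 1 - Omega_m)

        def redshift_drift(z, years):
            """Sandage-Loeb signal: dz/dt = (1+z)*H0 - H(z), accumulated over `years`."""
            dt = years * 365.25 * 24 * 3600.0
            return ((1 + z) * H0 - H(z)) * dt

        for z in (2.0, 3.0, 5.0):
            print(f"z = {z}: Delta z over 30 yr ~ {redshift_drift(z, 30):.2e}")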

  4. Parameter estimation with Sandage-Loeb test

    International Nuclear Information System (INIS)

    The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range of 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid the potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ωm and H0 in all the three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of SL test could improve the constraints on Ωm and H0 by more than 60% for all the three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w0 and wa by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of SL test would help improve the measurement precision of Ωm, H0, and wa by more than 70%, 20%, and 60%, respectively, for the w0waCDM model

  5. Maximum likelihood estimation of the negative binomial dispersion parameter for highly overdispersed data, with applications to infectious diseases.

    Directory of Open Access Journals (Sweden)

    James O Lloyd-Smith

    Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
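
    As a concrete illustration of the estimation problem, the sketch below fits the negative binomial in the (mean m, dispersion k) parameterisation by maximum likelihood, using scipy's nbinom with n = k and p = k/(k+m). The simulated sample size and the true values of m and k are arbitrary choices meant only to mimic a highly overdispersed dataset.

        import numpy as np
        from scipy.stats import nbinom
        from scipy.optimize import minimize

        def fit_negative_binomial(counts):
            """MLE of (mean m, dispersion k); small k means strong overdispersion."""
            def nll(params):
                log_m, log_k = params
                m, k = np.exp(log_m), np.exp(log_k)
                p = k / (k + m)                        # scipy parameterisation
                return -nbinom.logpmf(counts, k, p).sum()
            res = minimize(nll, x0=[np.log(counts.mean() + 0.5), 0.0])
            return np.exp(res.x)

        rng = np.random.default_rng(3)
        m_true, k_true, n = 1.5, 0.3, 100              # highly overdispersed example
        sample = nbinom.rvs(k_true, k_true / (k_true + m_true), size=n, random_state=rng)
        m_hat, k_hat = fit_negative_binomial(sample)
        print(f"true (m, k) = ({m_true}, {k_true}); MLE = ({m_hat:.2f}, {k_hat:.2f})")
        # With n this small and k < 1, k_hat tends to overshoot k_true,
        # mirroring the upward bias reported for highly overdispersed data.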

  6. Estimation of high altitude Martian dust parameters

    Science.gov (United States)

    Pabari, Jayesh; Bhalodi, Pinali

    2016-07-01

    Dust devils are known to occur near the Martian surface, mostly in the middle of the Southern hemisphere summer, and they play a vital role in setting the background dust opacity in the atmosphere. A second source of high altitude Martian dust could be secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can then easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a feedback mechanism between the rings or tori and the sources, the dust rings or tori can persist over time. Recently, very high altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it remains a mystery how dust has reached such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate primarily responsible for the formation of the dust rings or tori, the lifetime of dust particles around Mars, and the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars and so resolve the mystery. Further work is underway.

  7. Impact of Road Vehicle Accelerations on SAR-GMTI Motion Parameter Estimation

    OpenAIRE

    Baumgartner, Stefan; Gabele, Martina; Krieger, Gerhard; Bethke, Karl-Heinz; Zuev, Sergey

    2006-01-01

    In recent years many powerful techniques and algorithms have been developed to detect moving targets and estimate their motion parameters from single- or multi-channel SAR data. In the case of single- and two-channel systems, most of the developed algorithms rely on analysis of the Doppler history. It is now known that even small, unaccounted-for across-track accelerations can bias the along-track velocity estimation. Since we want to monitor real and more complex traffic scenarios with a f...

  8. Health indicators: eliminating bias from convenience sampling estimators.

    Science.gov (United States)

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone. PMID:21290401

  9. The influence of geomagnetic storms on the estimation of GPS instrumental biases

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2009-04-01

    Full Text Available An algorithm has been developed to derive the ionospheric total electron content (TEC and to estimate the resulting instrumental biases in Global Positioning System (GPS data from measurements made with a single receiver. The algorithm assumes that the TEC is identical at any point within a mesh and that the GPS instrumental biases do not vary within a day. We present some results obtained using the algorithm and a study of the characteristics of the instrumental biases during active geomagnetic periods. The deviations of the TEC during an ionospheric storm (induced by a geomagnetic storm, compared to the quiet ionosphere, typically result in severe fluctuations in the derived GPS instrumental biases. Based on the analysis of three ionospheric storm events, we conclude that different kinds of ionospheric storms have differing influences on the measured biases of GPS satellites and receivers. We find that the duration of severe ionospheric storms is the critical factor that adversely impacts the estimation of GPS instrumental biases. Large deviations in the TEC can produce inaccuracies in the estimation of GPS instrumental biases for the satellites that pass over the receiver during that period. We also present a semi quantitative analysis of the duration of the influence of the storm.

  10. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
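
    Restated as a sketch, the core of the procedure is: retain the products that fall within +/-50% of the base estimate and take the standard deviation of the retained set as the bias error. The code below does only that arithmetic on invented zonal-mean values; the array contents, the inclusion of the base product in the spread, and the function name are assumptions rather than the actual GPCP processing.

        import numpy as np

        def estimated_bias_error(base, others, tolerance=0.5):
            """Std of the products (base included) that lie within
            +/- `tolerance` of the base estimate, per grid cell."""
            base = np.asarray(base, dtype=float)
            stack = np.vstack([base] + [np.asarray(o, dtype=float) for o in others])
            within = np.abs(stack - base) <= tolerance * base
            masked = np.where(within, stack, np.nan)
            return np.nanstd(masked, axis=0)

        # Toy zonal-mean precipitation (mm/day) from a base product and three others.
        gpcp = np.array([2.0, 3.5, 5.0, 1.2])
        products = [gpcp * 1.1, gpcp * 0.8, gpcp + 2.5]   # last one is mostly excluded
        s = estimated_bias_error(gpcp, products)
        print("estimated bias error s:", np.round(s, 2))
        print("relative error s/m:   ", np.round(s / gpcp, 2))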

  11. Effect of percent non-detects on estimation bias in censored distributions

    Science.gov (United States)

    Zhang, Z.; Lennox, W. C.; Panu, U. S.

    2004-09-01

    The unique nature of the problem surrounding non-detects has been a shared concern of researchers and statisticians dealing with summary statistics for censored data. To incorporate non-detects in the estimation process, simple substitution by the MDL (method detection limit) and the maximum likelihood estimation method are routinely implemented as standard methods by US-EPA laboratories. In situations where numerical standards are set at or near the MDL by regulatory agencies, it is prudent and important to closely investigate both the variability in test measurements and the estimation bias, because an inference based on biased estimates could entail significant liabilities. Variability is not only inevitable but also an inherent and integral part of any chemical analysis or test. Where regulatory agencies fail to account for this inherent variability of test measurements, regulated facilities may be forced to seek remedial action merely as a consequence of an inadequate statistical procedure. This paper uses a mathematical approach to derive the bias functions, and the resulting bias curves are developed to investigate censored samples from a variety of probability distributions such as the normal, log-normal, gamma, and Gumbel distributions. Finally, the bias functions and bias curves are compared to the results obtained using Monte Carlo simulations.
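
    A quick Monte Carlo sketch of the substitution issue discussed here: censor lognormal samples at an MDL, replace non-detects by the MDL or MDL/2, and compare the resulting sample mean with the true mean. The distribution, censoring levels and replication count are illustrative assumptions; the paper itself derives the bias functions analytically.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        mu, sigma, n, reps = 0.0, 1.0, 200, 2000        # lognormal parameters, arbitrary
        true_mean = np.exp(mu + sigma**2 / 2)

        for frac in (0.2, 0.5, 0.8):                    # fraction of non-detects
            mdl = np.exp(mu + sigma * norm.ppf(frac))   # detection limit at that quantile
            bias = {"MDL": [], "MDL/2": []}
            for _ in range(reps):
                x = rng.lognormal(mu, sigma, n)
                for label, sub in (("MDL", mdl), ("MDL/2", mdl / 2)):
                    y = np.where(x < mdl, sub, x)       # simple substitution
                    bias[label].append(y.mean() - true_mean)
            print(f"{frac:.0%} non-detects:",
                  {k: round(float(np.mean(v)), 3) for k, v in bias.items()})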

  12. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    International Nuclear Information System (INIS)

    In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method that uses the rotation of an inertial measurement unit, a rotation which is independent of the rigid body motion. By actively changing the orientation of the inertial measurement unit (IMU), the proposed method generates sufficient relations between the gyro bias and the tilt angle (roll and pitch) error via rigid body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. The rotating-IMU method makes the gravity vector measured by the IMU change continuously in the body-fixed frame. By theoretical analysis of the mathematical model, the convergence of the attitude and gyro bias estimates to the true values is proven. The proposed method provides good attitude estimation using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions and the results demonstrate an improvement in the accuracy of the attitude estimation. (paper)

  13. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    Science.gov (United States)

    Wu, Zheming; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2015-12-01

    In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method that uses the rotation of an inertial measurement unit, a rotation which is independent of the rigid body motion. By actively changing the orientation of the inertial measurement unit (IMU), the proposed method generates sufficient relations between the gyro bias and the tilt angle (roll and pitch) error via rigid body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. The rotating-IMU method makes the gravity vector measured by the IMU change continuously in the body-fixed frame. By theoretical analysis of the mathematical model, the convergence of the attitude and gyro bias estimates to the true values is proven. The proposed method provides good attitude estimation using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions and the results demonstrate an improvement in the accuracy of the attitude estimation.

  14. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    Science.gov (United States)

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  15. Understanding the physics driving the values of Lyman-alpha forest bias parameters

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anze

    2016-01-01

    With the advancement of Lyman-alpha forest power spectrum measurements to larger scales and to greater precision, it is crucial that we also improve our understanding of the bias between the measured flux and the underlying matter power spectrum, especially for future percent level cosmology constraints. In order to develop an intuition for the physics driving the values of the density and velocity bias parameters of the Lyman-alpha forest, we have run a series of hydrodynamic SPH simulations to test existing approximations found in the literature. Through a series of progressively more realistic scenarios, we first introduce flux based on the Fluctuating Gunn Peterson Approximation, just using the density fields, then introduce redshift space distortions, as well as thermal broadening, and finally, analyzing the full hydrodynamic part of the simulations. We find surprising agreement between the analytical approximations developed by Seljak (2012) and the numerical methods in the limit of linear redshift space-distortions and no thermal broadening. Specifically, we find that the prediction of the analytical velocity bias expression is exact in the limit of no thermal broadening, and speculate that the measurement of this bias along with a small-scale measurement of the flux PDF, could yield a possible probe of the thermal state of the IGM. A deeper understanding of the large-scale Lyman-alpha biasing will also help us in using the large-scale clustering of the forest as a cosmological probe beyond baryon acoustic oscillations.

  16. Minimizing Intra-Campaign Biases in Airborne Laser Altimetry By Thorough Calibration of Lidar System Parameters

    Science.gov (United States)

    Sonntag, J. G.; Chibisov, A.; Krabill, K. A.; Linkswiler, M. A.; Swenson, C.; Yungel, J.

    2015-12-01

    Present-day airborne lidar surveys of polar ice, NASA's Operation IceBridge foremost among them, cover large geographical areas. They are often compared with previous surveys over the same flight lines to yield mass balance estimates. Systematic biases in the lidar system, especially those which vary from campaign to campaign, can introduce significant error into these mass balance estimates and must be minimized before the data is released by the instrument team to the larger scientific community. NASA's Airborne Topographic Mapper (ATM) team designed a thorough and novel approach in order to minimize these biases, and here we describe two major aspects of this approach. First, we conduct regular ground vehicle-based surveys of lidar calibration targets, and overfly these targets on a near-daily basis during field campaigns. We discuss our technique for conducting these surveys, in particular the measures we take specifically to minimize systematic height biases in the surveys, since these can in turn bias entire campaigns of lidar data and the mass balance estimates based on them. Second, we calibrate our GPS antennas specifically for each instrument installation in a remote-sensing aircraft. We do this because we recognize that the metallic fuselage of the aircraft can alter the electromagnetic properties of the GPS antenna mounted to it, potentially displacing its phase center by several centimeters and biasing lidar results accordingly. We describe our technique for measuring the phase centers of a GPS antenna installed atop an aircraft, and show results which demonstrate that different installations can indeed alter the phase centers significantly.

  17. Univariate and Default Standard Unit Biases in Estimation of Body Weight and Caloric Content

    Science.gov (United States)

    Geier, Andrew B.; Rozin, Paul

    2009-01-01

    College students estimated the weight of adult women from either photographs or a live presentation by a set of models and estimated the calories in 1 of 2 actual meals. The 2 meals had the same items, but 1 had larger portion sizes than the other. The results suggest: (a) Judgments are biased toward transforming the example in question to the…

  18. Potential Biases in Estimating Absolute and Relative Case-Fatality Risks during Outbreaks.

    Directory of Open Access Journals (Sweden)

    Marc Lipsitch

    Full Text Available Estimating the case-fatality risk (CFR), the probability that a person dies from an infection given that they are a case, is a high priority in epidemiologic investigation of newly emerging infectious diseases and sometimes in new outbreaks of known infectious diseases. The data available to estimate the overall CFR are often gathered for other purposes (e.g., surveillance) in challenging circumstances. We describe two forms of bias that may affect the estimation of the overall CFR, preferential ascertainment of severe cases and bias from reporting delays, and review solutions that have been proposed and implemented in past epidemics. Also of interest is the estimation of the causal impact of specific interventions (e.g., hospitalization, or hospitalization at a particular hospital) on survival, which can be estimated as a relative CFR for two or more groups. When observational data are used for this purpose, three more sources of bias may arise: confounding, survivorship bias, and selection due to preferential inclusion in surveillance datasets of those who are hospitalized and/or die. We illustrate these biases and caution against causal interpretation of differential CFRs among those receiving different interventions in observational datasets. Again, we discuss ways to reduce these biases, particularly by estimating outcomes in smaller but more systematically defined cohorts ascertained before the onset of symptoms, such as those identified by forward contact tracing. Finally, we discuss the circumstances in which these biases may affect the non-causal interpretation of risk factors for death among cases.
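
    The reporting-delay problem can be seen with two textbook estimators: the naive ratio of deaths to all reported cases, and the ratio of deaths to cases with a known outcome. The sketch below contrasts them on invented mid-outbreak numbers; it illustrates the direction of the bias only and is not the correction machinery reviewed in the paper.

        def naive_cfr(deaths, cases):
            """Deaths divided by all reported cases (biased low mid-outbreak)."""
            return deaths / cases

        def outcome_based_cfr(deaths, recovered):
            """Deaths divided by cases with a known outcome; reduces the
            reporting-delay bias but can overshoot if severe cases resolve first."""
            return deaths / (deaths + recovered)

        # Hypothetical mid-outbreak snapshot (invented numbers).
        cases, deaths, recovered = 1000, 30, 570
        print(f"naive CFR        : {naive_cfr(deaths, cases):.1%}")
        print(f"known-outcome CFR: {outcome_based_cfr(deaths, recovered):.1%}")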

  19. Estimating Population Parameters using the Structured Serial Coalescent with Bayesian MCMC Inference when some Demes are Hidden

    Directory of Open Access Journals (Sweden)

    Allen Rodrigo

    2006-01-01

    Full Text Available Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e. ghost demes. It is found that even in the presence of a ghost deme, accurate inference is possible if the parameters are estimated under the true model. However, with an incorrect model, estimates are biased and can be positively misleading. We extend these results to the case where there are sequences from the ghost deme at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rate estimates for this case are also shown to be good when migration values are low.

  20. Bayesian parameter estimation by continuous homodyne detection

    DEFF Research Database (Denmark)

    Kiilerich, Alexander Holm; Molmer, Klaus

    2016-01-01

    We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.

  1. Bayesian parameter estimation by continuous homodyne detection

    Science.gov (United States)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2016-09-01

    We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.

  2. Bias-corrected Pearson estimating functions for Taylor’s power law applied to benthic macrofauna data

    DEFF Research Database (Denmark)

    Jørgensen, Bent; Demétrio, Clarice G.B.; Kristensen, Erik;

    2011-01-01

    Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...
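
    The first sentence describes the standard estimation route: regress the log sample variances on the log sample means, the slope being Taylor's exponent b. The sketch below performs only that ordinary least-squares step on synthetic negative-binomial counts; it does not implement the bias-corrected Pearson estimating functions of the paper, and the true a, b and sample sizes are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        a_true, b_true = 1.5, 1.8                 # Taylor's law: var = a * mean**b (assumed)
        means = np.geomspace(0.5, 50.0, 30)

        samples = []
        for m in means:
            var = a_true * m**b_true
            # negative-binomial k chosen so the variance approximates a*m**b
            # (floored near the Poisson limit when a*m**b <= m)
            k = m**2 / max(var - m, 1e-6)
            samples.append(rng.negative_binomial(k, k / (k + m), size=20))

        emp_mean = np.array([s.mean() for s in samples])
        emp_var = np.array([s.var(ddof=1) for s in samples])
        keep = (emp_mean > 0) & (emp_var > 0)     # sparse samples break the log transform

        b_hat, log_a_hat = np.polyfit(np.log(emp_mean[keep]), np.log(emp_var[keep]), 1)
        print(f"log-log regression: a ~ {np.exp(log_a_hat):.2f}, b ~ {b_hat:.2f} "
              f"(true {a_true}, {b_true})")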

  3. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens;

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...

  4. THEORETICAL ANALYSIS AND PRACTICE ON THE SELECTION OF KEY PARAMETERS FOR HORIZONTAL BIAS BURNER

    Institute of Scientific and Technical Information of China (English)

    刘泰生; 许晋源

    2003-01-01

    The air flow ratio and the pulverized-coal mass flux ratio between the rich and lean sides are the key parameters of a horizontal bias burner. In order to realize high combustion efficiency, excellent ignition stability, low NOx emission and safe operation, six principal requirements on the selection of the key parameters are presented. An analytical model is established on the basis of these requirements, the fundamentals of combustion and operating experience. An improved horizontal bias burner is also presented and applied. The experimental and numerical simulation results show that the improved horizontal bias burner can achieve suitable key parameters, lower NOx emission, high combustion efficiency and excellent part-load operation without oil support. It also reduces the recirculation and low-velocity zones downstream of the vanes, and avoids burnout of the lean primary-air nozzle and blockage of the lean primary-air channel. The operation and test results verify the reasonableness and feasibility of the analytical model.

  5. METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    刘宇红; 曾衍钧; 许景锋; 张梅

    2004-01-01

    The transdermal drug delivery system (TDDS) is a new method for drug delivery. Analysis of a large number of in vitro experiments can lead to a suitable mathematical model describing the process of drug penetration through the skin, together with the important parameters related to the properties of the drugs. After analysis of the experimental data, a suitable nonlinear regression model was selected. Using this model, the most important parameter, the penetration coefficient, was computed for 20 drugs. The results support the theory that the skin can be regarded as a single membrane.

  6. Adaptive on-line estimation and control of overlay tool bias

    Science.gov (United States)

    Martinez, Victor M.; Finn, Karen; Edgar, Thomas F.

    2003-06-01

    Modern lithographic manufacturing processes rely on various types of exposure tools, used in a mix-and-match fashion. The motivation to use older tools alongside state-of-the-art tools is lower cost, and one of the tradeoffs is a degradation in overlay performance. While average prices of semiconductor products continue to fall, the cost of manufacturing equipment rises with every product generation. Lithography processing, including the cost of ownership for tools, accounts for roughly 30% of the wafer processing costs, thus the importance of mix-and-match strategies. Exponentially Weighted Moving Average (EWMA) run-by-run controllers are widely used in the semiconductor manufacturing industry. This type of controller has been implemented successfully in volume manufacturing, improving Cpk values dramatically in processes like photolithography and chemical mechanical planarization. This simple but powerful control scheme is well suited for adding corrections to compensate for Overlay Tool Bias (OTB). We have developed an adaptive estimation technique to compensate for overlay variability due to differences in the processing tools. The OTB can be dynamically calculated for each tool, based on the most recent measurements available, and used to correct the control variables. One approach to tracking the effect of different tools is adaptive modeling and control. The basic premise of an adaptive system is to change or adapt the controller as the operating conditions of the system change. Using closed-loop data, the adaptive control algorithm estimates the controller parameters using a recursive estimation technique. Once an updated model of the system is available, model-based control becomes feasible. In the simplest scenario, the control law can be reformulated to include the current state of the tool (or its estimate) to compensate dynamically for OTB. We have performed simulation studies to predict the impact of deploying this strategy in production. The results
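
    The EWMA run-by-run scheme mentioned above can be written in a few lines: the tool-specific bias estimate is an exponentially weighted average of the observed overlay errors (with the applied correction removed), and the next run subtracts that estimate. The sketch below is a generic run-to-run loop with an assumed unit process gain, invented noise level and weighting lambda; it is not the authors' production controller.

        import numpy as np

        def ewma_run_to_run(true_bias, n_runs, lam=0.3, noise=1.0, seed=0):
            """Generic EWMA run-by-run control of an overlay tool bias.

            Each run measures overlay error = true_bias + correction + noise,
            updates the bias estimate b <- lam*(measured - correction) + (1-lam)*b,
            and applies correction = -b on the next run."""
            rng = np.random.default_rng(seed)
            b_hat, correction, errors = 0.0, 0.0, []
            for _ in range(n_runs):
                measured = true_bias + correction + rng.normal(0.0, noise)
                errors.append(measured)
                b_hat = lam * (measured - correction) + (1 - lam) * b_hat
                correction = -b_hat
            return np.array(errors)

        errors = ewma_run_to_run(true_bias=5.0, n_runs=40)
        print(f"mean |error|, first 5 runs: {np.abs(errors[:5]).mean():.2f}")
        print(f"mean |error|, last 5 runs : {np.abs(errors[-5:]).mean():.2f}")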

  7. Estimating Geophysical Parameters From Gravity Data

    Science.gov (United States)

    Sjogren, William L.; Wimberly, Ravenel N.

    1988-01-01

    ORBSIM program developed for accurate extraction of parameters of geophysical models from Doppler-radio-tracking data acquired from orbiting planetary spacecraft. Model of proposed planetary structure used in numerical integration along simulated trajectories of spacecraft around primary body. Written in FORTRAN 77.

  8. Estimation of motility parameters from trajectory data

    DEFF Research Database (Denmark)

    Vestergaard, Christian L.; Pedersen, Jonas Nyvold; Mortensen, Kim I.;

    2015-01-01

    Given a theoretical model for a self-propelled particle or micro-organism, how does one optimally determine the parameters of the model from experimental data in the form of a time-lapse recorded trajectory? For very long trajectories, one has very good statistics, and optimality may matter little...... to which similar results may be obtained also for self-propelled particles....

  9. M-Testing Using Finite and Infinite Dimensional Parameter Estimators

    OpenAIRE

    White, Halbert; Hong, Yongmiao

    1999-01-01

    The m-testing approach provides a general and convenient framework in which to view and construct specification tests for econometric models. Previous m-testing frameworks only consider test statistics that involve finite dimensional parameter estimators and infinite dimensional parameter estimators affecting the limit distribution of the m-test statistics. In this paper we propose a new m-testing framework using both finite and infinite dimensional parameter estimators, where the latter may ...

  10. A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri;

    2014-01-01

    ) algorithm, which estimates the atom parameters along with the model order and weighting coefficients. Numerical experiments for spectral estimation with closely-spaced frequency components, show that the proposed SBL algorithm outperforms subspace and compressed sensing methods....

  11. Neural networks for estimation of ocean wave parameters

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Raju, D.H.

    Ocean wave parameters play a significant role in the design of all coastal and offshore structures. In the present study, neural networks are used to estimate various ocean wave parameters from theoretical Pierson-Moskowitz spectra as well...

  12. Self-bias Dependence on Process Parameters in Asymmetric Cylindrical Coaxial Capacitively Coupled Plasma

    CERN Document Server

    Upadhyay, J; Popović, S; Valente-Feliciano, A -M; Phillips, L; Vušković, L

    2015-01-01

    An rf coaxial capacitively coupled Ar/Cl2 plasma is applied to processing the inner wall of superconducting radio frequency cavities. A dc self-bias potential is established across the inner electrode sheath due to the surface area difference between the inner and outer electrodes of the coaxial plasma. The self-bias potential measurement is used as an indication of the plasma sheath voltage asymmetry. Understanding the asymmetry in the sheath voltage distribution in a coaxial plasma is important for the modification of the inner surfaces of three-dimensional objects. The plasma sheath voltages were tailored to process the outer wall by providing an additional dc current to the inner electrode with the help of an external dc power supply. The dc self-bias potential is measured for different electrode diameters, and its variation with process parameters such as gas pressure, rf power and the percentage of chlorine in the Ar/Cl2 gas mixture is studied. The dc current needed to overcome the self-bias potential to make it ...

  13. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    M Sinha; P K Kalra; K Kumar

    2000-04-01

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher neuron model (multiplicative aggregation function). It can adapt to standard neuron and higher order neuron, as well as a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than Kalman Filter (KF) and Feedforward Multilayer Neural Network (FMNN) (also simply referred to as Artificial Neural Network, ANN) with lambda-gamma learning. The typical simulation runs also bring out the superiority of the proposed scheme over Kalman filter from the standpoint of computation time and the amount of data needed for the desired degree of estimated accuracy for the specific problem of orbit determination.

  14. Muscle parameters estimation based on biplanar radiography.

    Science.gov (United States)

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled-generic model based on cadaver measurements, which provides a low level of personalization, or from Magnetic Resonance Images, which provide a personalized model in the lying position. This study proposes an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled-generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography. PMID:27082150

  15. On the shear estimation bias induced by the spatial variation of colour across galaxy profiles

    CERN Document Server

    Semboloni, Elisabetta; Huang, Zhuoyi; Cardone, Vincenzo; Cropper, Mark; Joachimi, Benjamin; Kitching, Thomas; Kuijken, Konrad; Lombardi, Marco; Maoli, Roberto; Mellier, Yannick; Miller, Lance; Rhodes, Jason; Scaramella, Roberto; Schrabback, Tim; Velander, Malin

    2012-01-01

    The spatial variation of the colour of a galaxy may introduce a bias in the measurement of its shape if the PSF profile depends on wavelength. We study how this bias depends on the properties of the PSF and the galaxies themselves. The bias depends on the scales used to estimate the shape, which may be used to optimise methods to reduce the bias. Here we develop a general approach to quantify the bias. Although applicable to any weak lensing survey, we focus on the implications for the ESA Euclid mission. Based on our study of synthetic galaxies we find that the bias is a few times 10^-3 for a typical galaxy observed by Euclid. Consequently, it cannot be neglected and needs to be accounted for. We demonstrate how one can do so using spatially resolved observations of galaxies in two filters. We show that HST observations in the F606W and F814W filters allow us to model and reduce the bias by an order of magnitude, sufficient to meet Euclid's scientific requirements. The precision of the correction is ultimate...

  16. Estimation of temperature impact on gamma-induced degradation parameters of N-channel MOS transistor

    International Nuclear Information System (INIS)

    The physical parameters of MOS transistors can be affected by ionizing radiation, which leads to circuit degradation and failure. Understanding these effects requires analyzing the basic mechanism that results in the buildup of induced defects in radiation environments. Reliable estimation also needs to consider external factors, particularly temperature fluctuations. The I–V characteristic of the device was obtained using a temperature-dependent adapted form of the charge-sheet model under a heating cycle during irradiation, for several ionizing dose levels at different gate biases. In this work, an analytical calculation for estimating the impact of irradiation temperature on the gamma-induced degradation parameters of N-channel MOS transistors at different gate biases was investigated. Experimental measurements were made in order to verify and parameterize the analytical model calculations. The results indicated that including the irradiation temperature in the calculations caused a significant variation in radiation-induced MOS transistor parameters such as the threshold voltage shift and the off-state leakage current. These variations were about 10.1% and 23.4% for the voltage shifts and leakage currents, respectively, during the investigated heating cycle for a total dose of 20 krad at 9 V gate bias. - Highlights: • Reliable radiation effect estimations require considering external factors. • Irradiation temperature impact on degradation parameters of N-MOS was investigated. • An analytical model was utilized based on time-dependent buildup of defect charges. • Oxide and interface trapped charges varied with irradiation temperature

  17. A field test of the extent of bias in selection estimates after accounting for emigration

    Science.gov (United States)

    Letcher, B.H.; Horton, G.E.; Dubreuil, T.L.; O'Donnell, M. J.

    2005-01-01

    Question: To what extent does trait-dependent emigration bias selection estimates in a natural system? Organisms: Two freshwater cohorts of Atlantic salmon (Salmo salar) juveniles. Field site: A 1 km stretch of a small stream (West Brook) in western Massachusetts, USA, from which emigration could be detected continuously. Methods: We estimated viability selection differentials for body size either including or ignoring emigration (include = emigrants survived the interval, ignore = emigrants did not survive the interval) for 12 intervals. Results: Seasonally variable size-related emigration from our study site generated variable levels of bias in selection estimates for body size. The magnitude of this bias was closely related to the extent of size-dependent emigration during each interval. Including or ignoring the effects of emigration changed the significance of the selection estimates in 5 of the 12 intervals, and changed the estimated direction of selection in 4 of the 12 intervals. These results indicate the extent to which inferences about selection in a natural system can be biased by failing to account for trait-dependent emigration. © 2005 Benjamin H. Letcher.
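
    The quantity at stake is a standardised difference between the mean body size of the survivors and that of the starting sample, and it changes depending on whether emigrants are counted as survivors or as deaths. The sketch below computes the selection differential both ways on invented data with size-dependent emigration and size-independent mortality; it is not the authors' analysis.

        import numpy as np

        def selection_differential(size, survived):
            """Viability selection differential: mean size of 'survivors'
            minus mean size of the full sample, in SD units."""
            return (size[survived].mean() - size.mean()) / size.std(ddof=1)

        rng = np.random.default_rng(6)
        n = 500
        size = rng.normal(60.0, 8.0, n)                          # body size (mm), invented
        died = rng.random(n) < 0.4                               # mortality unrelated to size
        emigrated = ~died & (size > 65) & (rng.random(n) < 0.5)  # larger fish leave more often

        include = ~died                    # emigrants counted as survivors (they left alive)
        ignore = ~died & ~emigrated        # emigrants counted as if they had died

        print("S, emigrants treated as survivors:", round(selection_differential(size, include), 3))
        print("S, emigrants treated as deaths:   ", round(selection_differential(size, ignore), 3))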

  18. Systematic Angle Random Walk Estimation of the Constant Rate Biased Ring Laser Gyro

    Directory of Open Access Journals (Sweden)

    Guohu Feng

    2013-02-01

    Full Text Available An actual account of the angle random walk (ARW coefficients of gyros in the constant rate biased rate ring laser gyro (RLG inertial navigation system (INS is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high cost precise calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, leading to the question of isolation of such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS was discussed, and an experimental method based on the fast orthogonal search (FOS for the practical observation model to separate ARW error from the RLG measured data was proposed. Validity of the FOS-based method was checked by estimating the ARW coefficients of the mechanically dithered RLG under stationary and turntable rotation conditions. By utilizing the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulate system is estimated. The experimental results show that the FOS-based method can achieve high denoising ability. This method estimate the ARW coefficients of the constant rate biased RLG in the postulate system accurately. The FOS-based method does not need precise calibration table with high cost and complex measuring set-up, and Statistical results of the tests will provide us references in engineering application of the constant rate biased RLG INS.

  19. Cosmological parameter extraction and biases from type Ia supernova magnitude evolution

    Science.gov (United States)

    Linden, S.; Virey, J.-M.; Tilquin, A.

    2009-11-01

    We study different one-parametric models of type Ia supernova magnitude evolution on cosmic time scales. Constraints on cosmological and supernova evolution parameters are obtained by combined fits on the actual data coming from supernovae, the cosmic microwave background, and baryonic acoustic oscillations. We find that the best-fit values imply supernova magnitude evolution such that high-redshift supernovae appear some percent brighter than would be expected in a standard cosmos with a dark energy component. However, the errors on the evolution parameters are of the same order, and the data are consistent with nonevolving magnitudes at the 1σ level, except for special cases. We simulate a future data scenario in which SN magnitude evolution is allowed for, and neglect the possibility of such an evolution in the fit. We find the fiducial models for which the wrong model assumption of nonevolving SN magnitude is not detectable, and for which biases on the fitted cosmological parameters are introduced at the same time. Of the cosmological parameters, the overall mass density ΩM has the strongest chance of being biased due to the wrong model assumption. Whereas early-epoch models with a magnitude offset Δm ∼ z² turn out not to be too dangerous when neglected in the fitting procedure, late-epoch models with Δm ∼ √z have a high chance of undetectably biasing the fit results.

  20. Control and Estimation of Distributed Parameter Systems

    CERN Document Server

    Kappel, F; Kunisch, K

    1998-01-01

    Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

  1. Parameter Estimation of the T-Book

    International Nuclear Information System (INIS)

    This paper summarizes the statistical assumptions and methods that have been used in the work on the T-book, a reliability data handbook which is used in safety analyses of nuclear power plants in Sweden and in the Swedish-designed plants in Finland. The author discusses the conceptual framework for the description and handling of uncertainty. He briefly outlines the two-stage 'Bayes empirical Bayes' method. To express the inherent tail-uncertainty in the distribution of failure rate, a class of contaminated distributions with three (hyper) parameters is proposed. Attention is focused on the properties of this T-book approach with regard to how it can be used to describe the parametric uncertainties, how uncertainty distributions can be used for predictive purposes, and how distributions can be updated.

  2. Estimation of bias errors in angle-of-arrival measurements using platform motion

    Science.gov (United States)

    Grindlay, A.

    1981-08-01

    An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on-board a pitching and rolling platform. The algorithm assumes that continuous exact measurements of the platform's roll and pitch conditions are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position to a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in the roll and pitch conditions. If changes do occur they are a result of bias errors in the measurement system and the algorithm which has been developed uses these changes to estimate the sense and magnitude of angular bias errors.
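
    The geometry behind this idea can be sketched as follows: a constant angular bias in the deck-plane measurement produces roll/pitch-correlated wander in the stabilized azimuth and elevation of a fixed target. The rotation order, sign conventions, and all numbers below are assumptions of the sketch, not values from the report.

```python
import numpy as np

def los_vector(az, el):
    """Unit line-of-sight vector from azimuth/elevation in radians."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def az_el(v):
    """Azimuth and elevation (radians) of a unit vector."""
    return np.arctan2(v[1], v[0]), np.arcsin(np.clip(v[2], -1.0, 1.0))

def deck_to_stab(roll, pitch):
    """Deck-to-stabilized rotation: roll about the fore-aft axis, then pitch about
    the athwartship axis.  The rotation order and sign conventions are assumptions
    of this sketch."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_roll = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    r_pitch = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    return r_pitch @ r_roll

# Fixed target plus a constant azimuth bias in the deck-plane measurement.
true_dir = los_vector(np.radians(40.0), np.radians(10.0))
bias_az = np.radians(0.5)

rng = np.random.default_rng(0)
for roll, pitch in np.radians(rng.uniform(-10.0, 10.0, size=(5, 2))):
    rot = deck_to_stab(roll, pitch)
    az_d, el_d = az_el(rot.T @ true_dir)            # exact deck-plane angles
    meas = los_vector(az_d + bias_az, el_d)         # biased deck-plane measurement
    az_s, el_s = az_el(rot @ meas)                  # transformed to stabilized coordinates
    print(f"roll {np.degrees(roll):+6.2f}  pitch {np.degrees(pitch):+6.2f}  "
          f"stab az {np.degrees(az_s):8.4f}  stab el {np.degrees(el_s):8.4f}")
# With an unbiased sensor the printed angles would stay at (40.0000, 10.0000); their
# roll/pitch-dependent wander is the signature a bias-estimation algorithm can exploit.
```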

  3. Two self-test methods applied to an inertial system problem. [estimating gyroscope and accelerometer bias

    Science.gov (United States)

    Willsky, A. S.; Deyst, J. J.; Crawford, B. S.

    1975-01-01

    The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.
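
    A minimal sketch of the WSSR idea follows: innovations are normalized by their covariance, summed over a sliding window, and compared against a chi-square threshold, so a bias jump inflates the statistic. The window length, threshold level, and toy data are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import chi2

def wssr(innovations, s_cov, window):
    """Weighted sum-squared residual over a sliding window.

    innovations : (T, m) array of filter innovations (residuals)
    s_cov       : (m, m) innovation covariance, taken as constant here for simplicity
    window      : number of samples per test window
    """
    s_inv = np.linalg.inv(s_cov)
    q = np.einsum('ti,ij,tj->t', innovations, s_inv, innovations)  # per-sample Mahalanobis^2
    return np.convolve(q, np.ones(window), mode='valid')           # rolling window sums

# Toy innovations with a bias jump injected half-way through the record.
rng = np.random.default_rng(1)
m, n_samples, window = 2, 400, 40
s_cov = np.diag([0.04, 0.09])
innov = rng.multivariate_normal(np.zeros(m), s_cov, size=n_samples)
innov[200:, 0] += 0.3                                 # accelerometer-like bias jump

stat = wssr(innov, s_cov, window)
threshold = chi2.ppf(0.999, df=m * window)            # the false-alarm level is a design choice
first = int(np.argmax(stat > threshold))              # index of first alarming window
print(f"first alarm near sample {first + window} (jump injected at sample 200)")
```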

  4. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  5. Estimation of differential code biases for Beidou navigation system using multi-GNSS observations: How stable are the differential satellite and receiver code biases?

    Science.gov (United States)

    Xue, Junchen; Song, Shuli; Zhu, Wenyao

    2016-04-01

    Differential code biases (DCBs) are important parameters that must be estimated accurately and reliably for high-precision GNSS applications. For optimal operational service performance of the Beidou navigation system (BDS), continuous monitoring and constant quality assessment of the BDS satellite DCBs are crucial. In this study, a global ionospheric model was constructed based on a dual system BDS/GPS combination. Daily BDS DCBs were estimated together with the total electron content from 23 months' multi-GNSS observations. The stability of the resulting BDS DCB estimates was analyzed in detail. It was found that over a long period, the standard deviations (STDs) for all satellite B1-B2 DCBs were within 0.3 ns (average: 0.19 ns) and for all satellite B1-B3 DCBs, the STDs were within 0.36 ns (average: 0.22 ns). For BDS receivers, the STDs were greater than for the satellites. The difference between DCBs averaged over 28-day and 7-day intervals was small, with a maximum not exceeding 0.06 ns. In almost all cases, the difference in BDS satellite DCBs between two consecutive days was <0.8 ns. The main conclusion is that because of the stability of the BDS DCBs, they only require occasional estimation or calibration. Furthermore, the 30-day averaged satellite DCBs can be used reliably for the most demanding BDS applications.

  6. An empirical study on memory bias situations and correction strategies in ERP effort estimation

    NARCIS (Netherlands)

    Erasmus, Pierre; Daneva, Maya; Amrahamsson, Pekka; Corral, Luis; Olivo, Markku; Russo, Barbara

    2016-01-01

    An Enterprise Resource Planning (ERP) project estimation process often relies on experts of various backgrounds to contribute judgments based on their professional experience. Such expert judgments, however, may not be bias-free. De-biasing techniques have therefore been proposed in software estimation…

  7. Estimating non-response bias in a survey on alcohol consumption: comparison of response waves

    NARCIS (Netherlands)

    V.M. Lahaut; H.A.M. Jansen (Harrie); H. van de Mheen (Dike); H.F.L. Garretsen (Henk); J.E. Verdurmen; A. van Dijk (Bram)

    2003-01-01

    textabstractAIMS: According to 'the continuum of resistance model' late respondents can be used as a proxy for non-respondents in estimating non-response bias. In the present study, the validity of this model was explored and tested in three surveys on alcohol consumption. METHODS:

  8. A robust approach for space based sensor bias estimation in the presence of data association uncertainty

    Science.gov (United States)

    Belfadel, Djedjiga; Osborne, Richard; Bar-Shalom, Yaakov

    2015-06-01

    In this paper, an approach to bias estimation in the presence of measurement association uncertainty, using common targets of opportunity, is developed. Data association is carried out before the estimation of sensor angle measurement biases. Consequently, the quality of data association is critical to the overall tracking performance. Data association becomes especially challenging if the sensors are passive. Mathematically, the problem can be formulated as a multidimensional optimization problem, where the objective is to maximize the generalized likelihood that the associated measurements correspond to common targets, based on target locations and sensor bias estimates. Applying gating techniques significantly reduces the size of this problem. The association likelihoods are evaluated using an exhaustive search after which an acceptance test is applied to each solution in order to obtain the optimal (correct) solution. We demonstrate the merits of this approach by applying it to a simulated tracking system, which consists of two satellites tracking a ballistic target. We assume the sensors are synchronized, their locations are known, and we estimate their orientation biases together with the unknown target locations.

  9. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
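
    A sketch of this input-output route, reduced to a second-order SISO example with ordinary batch least squares, is shown below: eliminating the state yields a difference equation in the inputs and outputs whose coefficients map back to an observable canonical state-space realization. The recursive algorithm and the state estimator of the paper are not reproduced; the parameter values are invented for illustration.

```python
import numpy as np

# For an observable canonical SISO state-space model, eliminating the state gives
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2) + e(k),
# whose coefficients can be estimated by ordinary least squares from input-output data.
rng = np.random.default_rng(8)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5          # true parameters (stable system)
N = 2000
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] \
           + 0.05 * rng.normal()

# Regression matrix built from lagged outputs and inputs, then a batch LS solve.
phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(phi, y[2:], rcond=None)
a1_hat, a2_hat, b1_hat, b2_hat = theta
print("estimated [a1, a2, b1, b2]:", np.round(theta, 4))

# The estimates map back to an observable canonical state-space realization.
A_hat = np.array([[-a1_hat, 1.0], [-a2_hat, 0.0]])
B_hat = np.array([[b1_hat], [b2_hat]])
C_hat = np.array([[1.0, 0.0]])
```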

  10. Cosmological Parameter Extraction and Biases from Type Ia Supernova Magnitude Evolution

    CERN Document Server

    Linden, Sebastian; Tilquin, Andre

    2009-01-01

    We study different one-parametric models of type Ia Supernova magnitude evolution on cosmic time scales. Constraints on cosmological and Supernova evolution parameters are obtained by combined fits on the actual data coming from Supernovae, the cosmic microwave background, and baryonic acoustic oscillations. We find that data prefer a magnitude evolution such that high-redshift Supernova are brighter than would be expected in a standard cosmos with a dark energy component. Data however are consistent with non-evolving magnitudes at the one-sigma level, except special cases. We simulate a future data scenario where SN magnitude evolution is allowed for, and neglect the possibility of such an evolution in the fit. We find the fiducial models for which the wrong model assumption of non-evolving SN magnitude is not detectable, and for which at the same time biases on the fitted cosmological parameters are introduced. Of the cosmological parameters the overall mass density has the strongest chances to be biased du...

  11. Estimation of ground water hydraulic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Hvilshoej, Soeren

    1998-11-01

    The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which previously has been intensively investigated on the basis of a large number of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. In addition to the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.

  12. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error source is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station for a one-month period, and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias error is of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. It is found that the results are consistent over the period.
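
    The structure of such an estimation problem can be sketched generically: slant TEC observations are modeled as an obliquity factor times a low-order polynomial in pierce-point coordinates plus a combined satellite-plus-receiver bias, and all unknowns are solved jointly by least squares. The geometry, mapping function, polynomial form, and values below are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def mapping(elev_rad, h_ion=350e3, r_earth=6371e3):
    """Single-layer obliquity (mapping) factor relating slant and vertical TEC."""
    return 1.0 / np.sqrt(1.0 - (r_earth / (r_earth + h_ion) * np.cos(elev_rad)) ** 2)

rng = np.random.default_rng(2)
n_obs, n_sat = 200, 4
elev = rng.uniform(np.radians(20.0), np.radians(80.0), (n_obs, n_sat))
dlat = rng.uniform(-5.0, 5.0, (n_obs, n_sat))     # pierce-point latitude offsets [deg]
dlon = rng.uniform(-5.0, 5.0, (n_obs, n_sat))     # pierce-point longitude offsets [deg]

# "Truth" used to simulate slant TEC observations (placeholder values, in TEC units).
poly_true = np.array([20.0, 0.8, -0.5, 0.05])     # VTEC = c0 + c1*dlat + c2*dlon + c3*dlat*dlon
bias_true = np.array([3.0, -1.5, 0.7, 2.2])       # combined satellite-plus-receiver biases
vtec = poly_true[0] + poly_true[1] * dlat + poly_true[2] * dlon + poly_true[3] * dlat * dlon
stec = mapping(elev) * vtec + bias_true + rng.normal(0.0, 0.5, (n_obs, n_sat))

# Design matrix: polynomial columns scaled by the mapping factor, then one bias
# column per satellite; everything is estimated together by least squares.
m_fac = mapping(elev).ravel()
a_poly = np.column_stack([m_fac,
                          m_fac * dlat.ravel(),
                          m_fac * dlon.ravel(),
                          m_fac * (dlat * dlon).ravel()])
a_bias = np.kron(np.ones((n_obs, 1)), np.eye(n_sat))
design = np.hstack([a_poly, a_bias])

x_hat, *_ = np.linalg.lstsq(design, stec.ravel(), rcond=None)
print("polynomial coefficients:", np.round(x_hat[:4], 3))
print("satellite+receiver biases:", np.round(x_hat[4:], 3))
```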

  13. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    OpenAIRE

    Jieming Ma; Ting, T. O.; Ka Lok Man; Nan Zhang; Sheng-Uei Guan; Wong, Prudence W. H.

    2013-01-01

    Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is inspired by the brood parasitic behavior of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. S...
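
    For orientation, the single-diode model whose five parameters such methods extract is sketched below. A generic SciPy global optimizer (differential evolution) stands in for Cuckoo Search, and the bounds, temperature, cell count, and "true" parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

K_B, Q_E = 1.380649e-23, 1.602176634e-19

def diode_residual(params, v, i, temp=298.15, n_cells=36):
    """Residual of the implicit single-diode equation at measured (V, I) pairs:
       Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh - I -> 0 for a perfect fit."""
    i_ph, i_0, n_ideal, r_s, r_sh = params
    v_t = K_B * temp / Q_E
    arg = (v + i * r_s) / (n_cells * n_ideal * v_t)
    return i_ph - i_0 * np.expm1(arg) - (v + i * r_s) / r_sh - i

def rmse(params, v, i):
    return np.sqrt(np.mean(diode_residual(params, v, i) ** 2))

# Synthetic "measured" I-V curve from placeholder true parameters.
true = (8.2, 2.0e-7, 1.3, 0.08, 120.0)            # Iph [A], I0 [A], n, Rs [ohm], Rsh [ohm]
v = np.linspace(0.0, 20.0, 60)
i = np.full_like(v, true[0])
for _ in range(200):                              # crude fixed-point solve of the implicit model
    i = true[0] - true[1] * np.expm1((v + i * true[3]) / (36 * true[2] * K_B * 298.15 / Q_E)) \
        - (v + i * true[3]) / true[4]
i = i + np.random.default_rng(3).normal(0.0, 0.01, v.size)

# differential_evolution is a generic global optimizer standing in for Cuckoo Search.
bounds = [(1.0, 10.0), (1e-9, 1e-5), (1.0, 2.0), (0.0, 0.5), (10.0, 500.0)]
result = differential_evolution(rmse, bounds, args=(v, i), seed=0, tol=1e-8)
print("estimated (Iph, I0, n, Rs, Rsh):", ["%.4g" % p for p in result.x])
```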

  14. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...... to estimating a CGE model of Mozambique...

  15. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for the regression models with repeated measures are studied when its response variables are contaminated by another random variable sequence. Under the suitable conditions it is proved that the estimators which are established in the paper are strongly consistent estimators.

  16. Towards physics responsible for large-scale Lyman-α forest bias parameters

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2016-03-01

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (bδ) and velocity gradient (bη) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his bη formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of bη and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. We find that large-scale biases derived from the smoothed total matter field are within 10-20% to those based on hydrodynamical quantities, in line with other measurements in the literature.

  17. MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters

    Directory of Open Access Journals (Sweden)

    Sugaya Yuki

    2012-08-01

    Full Text Available Abstract Background Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are assigned arbitrary values according to the researcher’s intuition or as estimated by the maximum likelihood principle. Several methods exist by which to evaluate the maximum likelihood estimates of penetrance, although not all of these are supported by software packages and some are biased by marker genotype information, even when disease development is due solely to the genotype of a single allele. Findings Programs for exploring the maximum likelihood estimates of penetrance parameters were developed using the R statistical programming language supplemented by external C functions. The software returns a vector of polynomial coefficients of penetrance parameters, representing the likelihood of pedigree data. From the likelihood polynomial supplied by the proposed method, the likelihood value and its gradient can be precisely computed. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be introduced into maximum likelihood estimates, thus enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot allowing visual validation of the model’s convergence. The functions are collectively available as the MLEP R package. Conclusions Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results superior to the analysis using intuitive penetrance values for disease with low allele frequencies. MLEP is

  18. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
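
    A sketch of the maximum likelihood side of this comparison, using the common parameterization F(x) = (1 − exp(−(λx)²))^α, is given below; the UMVUE construction of the paper is not reproduced, and the sample size, starting values, and true parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x):
    """Negative log-likelihood of the two-parameter exponentiated Rayleigh (Burr X)
    distribution with CDF F(x) = (1 - exp(-(lam*x)^2))^alpha, x > 0."""
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    z = (lam * x) ** 2
    log_pdf = (np.log(2 * alpha) + 2 * np.log(lam) + np.log(x) - z
               + (alpha - 1) * np.log1p(-np.exp(-z)))
    return -np.sum(log_pdf)

def reliability(t, alpha, lam):
    """R(t) = P(X > t) under the fitted distribution."""
    return 1.0 - (1.0 - np.exp(-(lam * t) ** 2)) ** alpha

# Synthetic sample from known parameters via inverse-CDF sampling.
rng = np.random.default_rng(4)
alpha_true, lam_true = 2.0, 0.5
u = rng.uniform(size=200)
x = np.sqrt(-np.log(1.0 - u ** (1.0 / alpha_true))) / lam_true

fit = minimize(neg_log_lik, x0=np.array([1.0, 1.0]), args=(x,), method='Nelder-Mead')
alpha_hat, lam_hat = fit.x
print(f"MLE: alpha = {alpha_hat:.3f}, lambda = {lam_hat:.3f}")
print(f"R(2.0) plug-in estimate: {reliability(2.0, alpha_hat, lam_hat):.4f} "
      f"(true {reliability(2.0, alpha_true, lam_true):.4f})")
```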

  19. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model.

    Science.gov (United States)

    Jiang, Shengyu; Wang, Chun; Weiss, David J

    2016-01-01

    Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916

  20. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model

    Directory of Open Access Journals (Sweden)

    Shengyu eJiang

    2016-02-01

    Full Text Available Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.

  1. The effect of heart motion on parameter bias in dynamic cardiac SPECT

    International Nuclear Information System (INIS)

    Dynamic cardiac SPECT can be used to estimate kinetic rate parameters which describe the wash-in and wash-out of tracer activity between the blood and the myocardial tissue. These kinetic parameters can in turn be correlated to myocardial perfusion. There are, however, many physical aspects associated with dynamic SPECT which can introduce errors into the estimates. This paper describes a study which investigates the effect of heart motion on kinetic parameter estimates. Dynamic SPECT simulations are performed using a beating version of the MCAT phantom. The results demonstrate that cardiac motion has a significant effect on the blood, tissue, and background content of regions of interest. This in turn affects estimates of wash-in, while it has very little effect on estimates of wash-out. The effect of cardiac motion on parameter estimates appears not to be as great as effects introduced by photon noise and geometric collimator response. It is also shown that cardiac motion results in little extravascular contamination of the left ventricle blood region of interest

  2. Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments

    CERN Document Server

    Fendick, Kerry

    2011-01-01

    This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements, and we introduce methods for estimating each of these types of parameters using multidimensional times series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...

  3. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

    This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

  4. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software. From that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers a higher flexibility than the model programmed in PSIM software.

  5. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.

  6. Parameter estimation for the Pearson type 3 distribution using order statistics

    Science.gov (United States)

    Rocky Durrans, S.

    1992-05-01

    The Pearson type 3 distribution and its relatives, the log Pearson type 3 and gamma family of distributions, are among the most widely applied in the field of hydrology. Parameter estimation for these distributions has been accomplished using the method of moments, the methods of mixed moments and generalized moments, and the methods of maximum likelihood and maximum entropy. This study evaluates yet another estimation approach, which is based on the use of the properties of an extreme-order statistic. Based on the hypothesis that the population is distributed as Pearson type 3, this estimation approach yields both parameter and 100-year quantile estimators that have lower biases and variances than those of the method of moments approach as recommended by the US Water Resources Council.
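
    For reference, the method-of-moments baseline that the order-statistic approach is compared against can be sketched as follows, treating the three-parameter Pearson type III as a shifted gamma (positive skew assumed); the order-statistic estimator itself is not reproduced, and the synthetic record is illustrative.

```python
import numpy as np
from scipy import stats

def pearson3_moments(x):
    """Method-of-moments estimates (location, shape, scale) for the Pearson type III
    (three-parameter gamma) distribution from the sample mean, std and skew.
    Positive skew is assumed in this sketch."""
    mu, sigma, g = np.mean(x), np.std(x, ddof=1), stats.skew(x, bias=False)
    shape = 4.0 / g ** 2          # from skew = 2/sqrt(shape)
    scale = sigma * g / 2.0       # from var = shape*scale^2
    loc = mu - shape * scale      # equals mu - 2*sigma/g
    return loc, shape, scale

def quantile_p3(p, loc, shape, scale):
    """Quantile of the fitted Pearson III, e.g. p = 0.99 for the '100-year' event."""
    return loc + scale * stats.gamma.ppf(p, a=shape)

# Synthetic annual-maximum series from a known Pearson III parent.
rng = np.random.default_rng(5)
true_loc, true_shape, true_scale = 100.0, 4.0, 25.0
x = true_loc + true_scale * rng.gamma(true_shape, 1.0, size=60)

loc, shape, scale = pearson3_moments(x)
print(f"moment estimates: loc={loc:.1f}, shape={shape:.2f}, scale={scale:.1f}")
print(f"estimated 100-year quantile: {quantile_p3(0.99, loc, shape, scale):.1f} "
      f"(true {quantile_p3(0.99, true_loc, true_shape, true_scale):.1f})")
```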

  7. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    Science.gov (United States)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼ 2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  8. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    Science.gov (United States)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  9. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
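
    The batch least-squares step can be sketched with a generic instantaneous-release plume standing in for the shear-diffusion model (the paper's exact model is not reproduced): concentrations on a grid are corrupted with Gaussian noise, and the transport parameters are recovered with a nonlinear least-squares fit. All parameter names and values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def plume(xy, mass, u, d_x, d_y, t=3600.0):
    """Instantaneous point release observed at time t: a 2-D Gaussian puff advected
    downstream with speed u.  This generic model stands in for the shear-diffusion
    formulation of the paper."""
    x, y = xy
    return (mass / (4.0 * np.pi * t * np.sqrt(d_x * d_y))
            * np.exp(-(x - u * t) ** 2 / (4.0 * d_x * t) - y ** 2 / (4.0 * d_y * t)))

# Simulated "remote-sensing" concentration field with additive Gaussian noise.
rng = np.random.default_rng(6)
xg, yg = np.meshgrid(np.linspace(0.0, 2000.0, 40), np.linspace(-500.0, 500.0, 25))
xy = np.vstack([xg.ravel(), yg.ravel()])
true = dict(mass=5.0e6, u=0.3, d_x=2.0, d_y=0.5)
conc = plume(xy, **true) + rng.normal(0.0, 0.5, xg.size)

# Batch least-squares estimation of the transport parameters from the noisy field.
p0 = [1.0e6, 0.25, 1.0, 1.0]
popt, pcov = curve_fit(plume, xy, conc, p0=p0)
for name, est, err in zip(("mass", "u", "d_x", "d_y"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name}: {est:.3g} +/- {err:.2g}  (true {true[name]:.3g})")
```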

  10. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter … and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term.

  11. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The sea state parameters can be estimated by Bayesian Modelling which uses complex-valued frequency response functions (FRF) to estimate the wave spectrum on the basis of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence...

  12. Parameter estimation during a transient - application to BWR stability

    Energy Technology Data Exchange (ETDEWEB)

    Tambouratzis, T. [Institute of Nuclear Technology - Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Athens 153 10 (Greece)]. E-mail: tatiana@ipta.demokritos.gr; Antonopoulos-Domis, M. [Institute of Nuclear Technology - Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Athens 153 10 (Greece)

    2004-12-01

    The estimation of system parameters is of obvious practical interest. During transient operation, these parameters are expected to change, whereby the system is rendered time-varying and classical signal processing techniques are not applicable. A novel methodology is proposed here, which combines wavelet multi-resolution analysis and selective wavelet coefficient removal with classical signal processing techniques in order to provide short-term estimates of the system parameters of interest. The use of highly overlapping time-windows further monitors the gradual changes in system parameter values. The potential of the proposed methodology is demonstrated with numerical experiments for the problem of stability evaluation of boiling water reactors during a transient.

  13. Parameter estimation during a transient - application to BWR stability

    International Nuclear Information System (INIS)

    The estimation of system parameters is of obvious practical interest. During transient operation, these parameters are expected to change, whereby the system is rendered time-varying and classical signal processing techniques are not applicable. A novel methodology is proposed here, which combines wavelet multi-resolution analysis and selective wavelet coefficient removal with classical signal processing techniques in order to provide short-term estimates of the system parameters of interest. The use of highly overlapping time-windows further monitors the gradual changes in system parameter values. The potential of the proposed methodology is demonstrated with numerical experiments for the problem of stability evaluation of boiling water reactors during a transient

  14. Numerical estimation of the noncompartmental pharmacokinetic parameters variance and coefficient of variation of residence times.

    Science.gov (United States)

    Purves, R D

    1994-02-01

    Noncompartmental investigation of the distribution of residence times from concentration-time data requires estimation of the second noncentral moment (AUM2C) as well as the area under the curve (AUC) and the area under the moment curve (AUMC). The accuracy and precision of 12 numerical integration methods for AUM2C were tested on simulated noisy data sets representing bolus, oral, and infusion concentration-time profiles. The root-mean-squared errors given by the best methods were only slightly larger than the corresponding errors in the estimation of AUC and AUMC. AUM2C extrapolated "tail" areas as estimated from a log-linear fit are biased, but the bias is minimized by application of a simple correction factor. The precision of estimates of variance of residence times (VRT) can be severely impaired by the variance of the extrapolated tails. VRT is therefore not a useful parameter unless the tail areas are small or can be shown to be estimated with little error. Estimates of the coefficient of variation of residence times (CVRT) and its square (CV2) are robust in the sense of being little affected by errors in the concentration values. The accuracy of estimates of CVRT obtained by optimum numerical methods is equal to or better than that of AUC and mean residence time estimates, even in data sets with large tail areas.
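
    The uncorrected estimators discussed here can be sketched as follows: trapezoidal AUC, AUMC and AUM2C with a log-linear tail extrapolation, followed by MRT = AUMC/AUC, VRT = AUM2C/AUC − MRT² and CVRT = √VRT/MRT. The tail bias-correction factor proposed in the paper is deliberately not applied, and the example profile is a textbook bolus mono-exponential used only as a sanity check.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def noncompartmental_moments(t, c, n_tail=3):
    """Trapezoidal AUC, AUMC and AUM2C with a log-linear tail extrapolation,
    then MRT, VRT and CVRT.  The tail bias-correction factor discussed in the
    abstract is NOT applied; this is the uncorrected estimator."""
    auc = _trapz(c, t)
    aumc = _trapz(c * t, t)
    aum2c = _trapz(c * t ** 2, t)

    # terminal elimination rate from a log-linear fit to the last n_tail points
    lam = -np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)[0]
    cT, T = c[-1], t[-1]
    auc += cT / lam
    aumc += cT * T / lam + cT / lam ** 2
    aum2c += cT * T ** 2 / lam + 2 * cT * T / lam ** 2 + 2 * cT / lam ** 3

    mrt = aumc / auc
    vrt = aum2c / auc - mrt ** 2
    return dict(AUC=auc, AUMC=aumc, MRT=mrt, VRT=vrt, CVRT=np.sqrt(vrt) / mrt)

# Sanity check on a bolus mono-exponential C(t) = C0*exp(-k*t), for which the
# residence-time distribution is exponential: MRT = 1/k and CVRT = 1 exactly.
k, C0 = 0.2, 10.0
t = np.linspace(0.0, 24.0, 25)
c = C0 * np.exp(-k * t)
print({key: round(val, 3) for key, val in noncompartmental_moments(t, c).items()})
```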

  15. Another Look at the EWMA Control Chart with Estimated Parameters

    NARCIS (Netherlands)

    N.A. Saleh; M.A. Mahmoud; L.A. Jones-Farmer; I. Zwetsloot; W.H. Woodall

    2015-01-01

    The authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the SDARL and percentiles of the ARL distribution when the process parameters are estimated.

  16. Kalman filter data assimilation: Targeting observations and parameter estimation

    International Nuclear Information System (INIS)

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation

  17. Kalman filter data assimilation: targeting observations and parameter estimation.

    Science.gov (United States)

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  18. Kalman filter application for distributed parameter estimation in reactor systems

    International Nuclear Information System (INIS)

    An application of the Kalman filter has been developed for the real-time identification of a distributed parameter in a nuclear power plant. This technique can be used to improve numerical method-based best-estimate simulation of complex systems such as nuclear power plants. The application to a reactor system involves a unique modal model that approximates physical components, such as the reactor, as a coupled oscillator, i.e., a modal model with coupled modes. In this model both states and parameters are described by an orthogonal expansion. The Kalman filter with the sequential least-squares parameter estimation algorithm was used to estimate the modal coefficients of all states and one parameter. Results show that this state feedback algorithm is an effective way to parametrically identify a distributed parameter system in the presence of uncertainties

  19. Parameter estimation in deformable models using Markov chain Monte Carlo

    Science.gov (United States)

    Chalana, Vikram; Haynor, David R.; Sampson, Paul D.; Kim, Yongmin

    1997-04-01

    Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.

  20. Dynamic noise, chaos and parameter estimation in population biology

    OpenAIRE

    Stollenwerk, N.; Aguiar, M; Ballesteros, S.; Boto, J.; Kooi, B. W.; Mateus, L.

    2012-01-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models such as multi-strain dynamics to describe the virus–host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the fir...

  1. Sinusoidal Parameter Estimation Using Quadratic Interpolation around Power-Scaled Magnitude Spectrum Peaks

    Directory of Open Access Journals (Sweden)

    Kurt James Werner

    2016-10-01

    Full Text Available The magnitude of the Discrete Fourier Transform (DFT) of a discrete-time signal has a limited frequency definition. Quadratic interpolation over the three DFT samples surrounding magnitude peaks improves the estimation of the parameters (frequency and amplitude) of resolved sinusoids beyond that limit. Interpolating on a rescaled magnitude spectrum using a logarithmic scale has been shown to improve those estimates. In this article, we show how to heuristically tune a power scaling parameter to outperform linear and logarithmic scaling at an equivalent computational cost. Although this power scaling factor is computed heuristically rather than analytically, it is shown to depend in a structured way on window parameters. Invariance properties of this family of estimators are studied and the existence of a bias due to noise is shown. Comparing to two state-of-the-art estimators, we show that an optimized power scaling has a lower systematic bias and lower mean-squared-error in noisy conditions for ten out of twelve common windowing functions.
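
    The underlying three-point interpolation can be sketched as below: the magnitude samples around the peak are raised to a power p before the parabola is fitted, with p = 1 recovering linear scaling and a log transform corresponding to the logarithmic case. The value of p used here is illustrative, not the tuned value from the article.

```python
import numpy as np

def qint_peak(mag, k, power=0.7):
    """Quadratic interpolation around DFT bin k of a magnitude spectrum rescaled by
    |X|**power before the parabola is fitted.  Returns the fractional bin offset and
    the interpolated (rescaled) peak height.  power=0.7 is an illustrative choice."""
    a, b, c = mag[k - 1] ** power, mag[k] ** power, mag[k + 1] ** power
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)       # fractional offset, |delta| <= 0.5
    height = b - 0.25 * (a - c) * delta             # parabola value at its vertex
    return delta, height

# Windowed sinusoid whose frequency falls between DFT bins.
fs, n = 8000.0, 1024
f_true, amp_true, power = 1001.3, 1.0, 0.7
t = np.arange(n) / fs
w = np.hanning(n)
spec = np.abs(np.fft.rfft(amp_true * np.cos(2.0 * np.pi * f_true * t) * w))

k = int(np.argmax(spec))
delta, height = qint_peak(spec, k, power)
f_est = (k + delta) * fs / n
amp_est = 2.0 * height ** (1.0 / power) / np.sum(w)  # undo power scaling and window gain
print(f"frequency estimate: {f_est:.3f} Hz  (true {f_true} Hz)")
print(f"amplitude estimate: {amp_est:.4f}  (true {amp_true})")
```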

  2. Simultaneous estimation of model state variables and observation and forecast biases using a two-stage hybrid Kalman filter

    Directory of Open Access Journals (Sweden)

    V. R. N. Pauwels

    2013-04-01

    Full Text Available In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the Discrete Kalman Filter, and the state variables using the Ensemble Kalman Filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware Ensemble Kalman Filter. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.

  3. Simultaneous Estimation of Model State Variables and Observation and Forecast Biases Using a Two-Stage Hybrid Kalman Filter

    Science.gov (United States)

    Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.

    2013-01-01

    In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.

  4. Approaches to radar reflectivity bias correction to improve rainfall estimation in Korea

    Science.gov (United States)

    You, Cheol-Hwan; Kang, Mi-Young; Lee, Dong-In; Lee, Jung-Tae

    2016-05-01

    Three methods for determining the reflectivity bias of single polarization radar using dual polarization radar reflectivity and disdrometer data (i.e., the equidistance line, overlapping area, and disdrometer methods) are proposed and evaluated for two low-pressure rainfall events that occurred over the Korean Peninsula on 25 August 2014 and 8 September 2012. Single polarization radar reflectivity was underestimated by more than 12 and 7 dB in the two rain events, respectively. All methods improved the accuracy of rainfall estimation, except for one case where drop size distributions were not observed, as the precipitation system did not pass through the disdrometer location. The use of these bias correction methods reduced the RMSE by as much as 50 %. Overall, the most accurate rainfall estimates were obtained using the overlapping area method to correct radar reflectivity.
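
    The simplest flavor of such a correction can be sketched as follows: the bias is estimated as the mean dBZ difference against a reference (dual-polarization or disdrometer-derived) reflectivity over matched samples and then propagated through an assumed Z-R relation. The Z = 200 R^1.6 coefficients and all data below are illustrative, not values from the study.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Rain rate [mm/h] from reflectivity via an assumed Z = a * R**b relation
    (Marshall-Palmer coefficients used purely for illustration)."""
    z_lin = 10.0 ** (dbz / 10.0)
    return (z_lin / a) ** (1.0 / b)

# Matched reference and single-polarization reflectivities; the single-pol radar is
# simulated with a constant low bias plus random scatter.
rng = np.random.default_rng(7)
dbz_ref = rng.uniform(20.0, 45.0, 500)
true_bias_db = -7.0                                  # single-pol reads 7 dB low
dbz_single = dbz_ref + true_bias_db + rng.normal(0.0, 1.0, dbz_ref.size)

bias_db = np.mean(dbz_single - dbz_ref)              # estimated reflectivity bias
dbz_corrected = dbz_single - bias_db

print(f"estimated reflectivity bias: {bias_db:+.2f} dB (true {true_bias_db:+.1f} dB)")
for label, dbz in (("uncorrected", dbz_single), ("corrected", dbz_corrected)):
    rmse = np.sqrt(np.mean((rain_rate(dbz) - rain_rate(dbz_ref)) ** 2))
    print(f"{label:>12} rain-rate RMSE: {rmse:.2f} mm/h")
```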

  5. Evaluation of biases for inserted reactivity estimation of JCO criticality accident

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Toshihiro; Nakamura, Takemi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-02-01

    Biases in criticality calculation methods used in JCO criticality accident analyses were estimated to make accurate predictions of an inserted reactivity in the accident. MCNP 4B and pointwise cross section libraries based on JENDL-3.1, JENDL-3.2 and ENDF/B-VI were used for the criticality calculations. With these calculation methods, neutron effective multiplication factors were obtained for STACY critical experiments, which used 10 wt.% enriched aqueous uranium solutions, and for critical experiments performed at the Rocky Flats Plant, which used 93.2 wt.% enriched aqueous uranium solutions. As a result, biases in keff's for 18.8 wt.% enriched uranium solution of the JCO accident were estimated to be 0.0%, +1.2%, and 0.1% when using JENDL-3.1, JENDL-3.2 and ENDF/B-VI, respectively. (author)

  6. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    Science.gov (United States)

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    Simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two binding system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8 including commonly encountered factors during experimentation (residual error, between experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates were as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost effective experimentation by reducing the measurement times and separate ligand concentrations required and in some cases, the total number of samples. PMID:23943088

  7. A FAST PARAMETER ESTIMATION ALGORITHM FOR POLYPHASE CODED CW SIGNALS

    Institute of Scientific and Technical Information of China (English)

    Li Hong; Qin Yuliang; Wang Hongqiang; Li Yanpeng; Li Xiang

    2011-01-01

    A fast parameter estimation algorithm is discussed for a polyphase coded Continuous Waveform (CW) signal in Additive White Gaussian Noise (AWGN). The proposed estimator is based on the sum of the modulus square of the ambiguity function at the different Doppler shifts. An iterative refinement stage is proposed to avoid the effect of the spurious peaks that arise when the summation length of the estimator exceeds the subcode duration. The theoretical variance of the subcode rate estimate is derived. The Monte-Carlo simulation results show that the proposed estimator is highly accurate and effective at moderate Signal-to-Noise Ratio (SNR).

  8. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  9. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048

  11. Local linear density estimation for filtered survival data, with bias correction

    DEFF Research Database (Denmark)

    Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.

    2009-01-01

    A class of local linear kernel density estimators based on weighted least-squares kernel estimation is considered within the framework of Aalen's multiplicative intensity model. This model includes the filtered data model that, in turn, allows for truncation and/or censoring in addition to accommodating unusual patterns of exposure as well as occurrence. It is shown that the local linear estimators corresponding to all different weightings have the same pointwise asymptotic properties. However, the weighting previously used in the literature in the i.i.d. case is seen to be far from optimal when it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods.

  14. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...

  15. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method for estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.

  16. Reducing the bias of estimates of genotype by environment interactions in random regression sire models

    OpenAIRE

    Meuwissen Theo HE; Ødegård Jørgen; Lillehammer Marie

    2009-01-01

    Abstract The combination of a sire model and a random regression term describing genotype by environment interactions may lead to biased estimates of genetic variance components because of heterogeneous residual variance. In order to test different models, simulated data with genotype by environment interactions, and dairy cattle data assumed to contain such interactions, were analyzed. Two animal models were compared to four sire models. Models differed in their ability to handle heterogeneo...

  17. Variance gamma process simulation and its parameters estimation

    OpenAIRE

    Kuzmina, A. V.

    2010-01-01

    The variance gamma process is a three-parameter process. It is simulated both as a gamma time-changed Brownian motion and as a difference of two independent gamma processes. Estimates of the parameters of the simulated variance gamma process are presented in this paper.
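
    The following minimal Python sketch (not taken from the paper; the parameter values and the crude moment-matching check are purely illustrative assumptions) simulates unit-time variance gamma increments with both representations mentioned in the abstract and recovers rough parameter estimates by matching the mean, variance and excess kurtosis.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(0)

        # Illustrative (assumed) VG parameters: drift theta, volatility sigma and
        # variance rate nu of the gamma subordinator; unit-time increments.
        theta, sigma, nu = 0.1, 0.5, 0.4
        n, dt = 200_000, 1.0

        # Representation 1: Brownian motion evaluated at a gamma time change.
        dG = rng.gamma(shape=dt / nu, scale=nu, size=n)
        dX1 = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n)

        # Representation 2: difference of two independent gamma processes.
        s = np.sqrt(theta**2 + 2 * sigma**2 / nu)
        mu_p, mu_q = (s + theta) / 2, (s - theta) / 2
        dX2 = (rng.gamma(dt / nu, mu_p * nu, size=n)
               - rng.gamma(dt / nu, mu_q * nu, size=n))

        # Crude moment matching (the kurtosis relation is exact only for zero drift).
        for name, dX in (("time-changed BM ", dX1), ("gamma difference", dX2)):
            theta_hat = dX.mean()
            nu_hat = kurtosis(dX) / 3.0                     # excess kurtosis ~ 3*nu
            sigma_hat = np.sqrt(max(dX.var() - theta_hat**2 * nu_hat, 0.0))
            print(name, round(theta_hat, 3), round(sigma_hat, 3), round(nu_hat, 3))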

  18. Computational methods for estimation of parameters in hyperbolic systems

    Science.gov (United States)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.

  19. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    ... Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...

  20. Estimation of Parameters of the Beta-Extreme Value Distribution

    Directory of Open Access Journals (Sweden)

    Zafar Iqbal

    2008-09-01

    Full Text Available In this research paper the Beta-Extreme Value (Type III) distribution, developed by Zafar and Aleem (2007), is considered, and its parameters are estimated by using the moments of the Beta-Extreme Value (Type III) distribution when the parameters 'm' and 'n' are real and when they are integers; the rth moments about the origin are then compared between the real and the integer cases. Finally, a second method, the method of maximum likelihood, is used to estimate the unknown parameters of the Beta-Extreme Value (Type III) distribution.

  1. Performances of Different Algorithms for Tracer Kinetics Parameters Estimation in Breast DCE-MRI

    Directory of Open Access Journals (Sweden)

    Roberta Fusco

    2014-07-01

    Full Text Available The objective of this study was to evaluate the performances of different algorithms for tracer kinetics parameter estimation in breast Dynamic Contrast Enhanced-MRI. We considered four algorithms: two non-iterative algorithms based on impulsive and linear approximation of the Arterial Input Function, respectively; and two iterative algorithms widely used for non-linear regression (Levenberg-Marquardt, LM, and VARiable PROjection, VARPRO). For each value of the kinetic parameters within a physiological range, we simulated 100 noisy curves and estimated the parameters with all algorithms. Sampling time, total duration and noise level have been chosen as in a typical breast examination. We compared the performances with respect to the Cramer-Rao Lower Bound (CRLB). Moreover, in order to gain further insight we applied the algorithms to a real breast examination. Accuracy of all the methods depends on the specific value of the parameters. The methods are in general biased; however, VARPRO showed small bias in a region of the parameter space larger than the other methods. Moreover, VARPRO approached the CRLB and the number of iterations was smaller than for LM. In the specific conditions analyzed, VARPRO showed better performance with respect to LM and to the non-iterative algorithms.

  2. The impact of spurious shear on cosmological parameter estimates from weak lensing observables

    CERN Document Server

    Petri, Andrea; Haiman, Zoltan; Kratochvil, Jan M

    2014-01-01

    Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear with a power spectrum estimated for the LSST instrument. This allows us to quantify the errors and biases of the triplet $(\Omega_m, w, \sigma_8)$ derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MF), low-order moments (LM), and peak counts (PK). Our main results are: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of $\sigma^2_{sys}\approx 10^{-7}$, biases from the PS and LM would be ...

  3. Bayesian parameter estimation for nonlinear modelling of biological pathways

    Directory of Open Access Journals (Sweden)

    Ghasemi Omid

    2011-12-01

    Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
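
    As a concrete illustration of this kind of approach (a minimal sketch, not the authors' algorithm: the Hill-equation data, the flat priors and the proposal settings below are invented for the example), a random-walk Metropolis-Hastings sampler can recover Hill parameters from noisy dose-response data:

        import numpy as np

        rng = np.random.default_rng(1)

        def hill(x, vmax, k, h):
            """Hill-type reaction rate."""
            return vmax * x**h / (k**h + x**h)

        # Invented noisy dose-response data standing in for the pathway measurements.
        x = np.linspace(0.1, 10.0, 25)
        sigma = 0.05
        y = hill(x, 2.0, 3.0, 2.0) + sigma * rng.normal(size=x.size)

        def log_post(theta):
            """Gaussian log-likelihood with flat priors on the log-parameters."""
            vmax, k, h = np.exp(theta)          # sampling on the log scale keeps positivity
            resid = y - hill(x, vmax, k, h)
            return -0.5 * np.sum(resid**2) / sigma**2

        # Random-walk Metropolis-Hastings.
        theta = np.log([y.max(), np.median(x), 1.0])
        lp = log_post(theta)
        chain = []
        for _ in range(20_000):
            prop = theta + 0.05 * rng.normal(size=3)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        post = np.exp(np.array(chain[5_000:]))  # discard burn-in
        print("posterior means (vmax, k, h):", post.mean(axis=0))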

  4. A new relative efficiency in parameter estimation for linear model

    Institute of Scientific and Technical Information of China (English)

    YANG Hu; CHEN Zhu-liang

    2007-01-01

    A new relative efficiency of parameter estimation for the generalized Gauss-Markov linear model is proposed and its lower bound derived. Its properties are explored in comparison with three currently popular relative efficiencies. The new relative efficiency not only sensitively reflects the error and loss caused by substituting the least squares estimator for the best linear unbiased estimator, but also overcomes the disadvantage of weak dependence on the design matrix.

  5. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    ... to the MPEG stream. This may be used in systems and applications where the coded stream is not accessible. Detection of MPEG I-frames and DCT (discrete cosine transform) block size is presented. For the I-frames, the quantization parameters are estimated. Combining these with statistics of the reconstructed DCT coefficients, the PSNR is estimated from the decoded video without reference images. Tests on decoded fixed rate MPEG2 sequences demonstrate perfect detection rates and good performance of the PSNR estimation.

  6. Degeneracy in model parameter estimation for multi-compartmental diffusion in neuronal tissue.

    Science.gov (United States)

    Jelescu, Ileana O; Veraart, Jelle; Fieremans, Els; Novikov, Dmitry S

    2016-01-01

    The ultimate promise of diffusion MRI (dMRI) models is specificity to neuronal microstructure, which may lead to distinct clinical biomarkers using noninvasive imaging. While multi-compartment models are a common approach to interpret water diffusion in the brain in vivo, the estimation of their parameters from the dMRI signal remains an unresolved problem. Practically, even when q space is highly oversampled, nonlinear fit outputs suffer from heavy bias and poor precision. So far, this has been alleviated by fixing some of the model parameters to a priori values, for improved precision at the expense of accuracy. Here we use a representative two-compartment model to show that fitting fails to determine the five model parameters from over 60 measurement points. For the first time, we identify the reasons for this poor performance. The first reason is the existence of two local minima in the parameter space for the objective function of the fitting procedure. These minima correspond to qualitatively different sets of parameters, yet they both lie within biophysically plausible ranges. We show that, at realistic signal-to-noise ratio values, choosing between the two minima based on the associated objective function values is essentially impossible. Second, there is an ensemble of very low objective function values around each of these minima in the form of a pipe. The existence of such a direction in parameter space, along which the objective function profile is very flat, explains the bias and large uncertainty in parameter estimation, and the spurious parameter correlations: in the presence of noise, the minimum can be randomly displaced by a very large amount along each pipe. Our results suggest that the biophysical interpretation of dMRI model parameters crucially depends on establishing which of the minima is closer to the biophysical reality and the size of the uncertainty associated with each parameter. PMID:26615981

  7. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  8. Bias Estimations for Ill-posed Problem of Celestial Positioning Using the Sun and Precision Analysis

    Directory of Open Access Journals (Sweden)

    ZHAN Yinhu

    2016-08-01

    Full Text Available Lunar/Mars rovers carry sun sensors for navigation; however, long-duration tracking of the sun degrades the real-time performance of navigation. An absolute positioning method based on observing the sun over a very short tracking period, such as 1 or 2 minutes, is investigated in this paper. A linear least squares model of the altitude positioning method is derived, and the ill-posed nature of celestial positioning using the sun is pointed out for the first time. Singular value decomposition is used to diagnose the ill-posed problem, and different biased estimators are employed and compared in simulated calculations. The results indicate the superiority of biased estimators, which can effectively improve on the initial values. However, biased estimators are strongly affected by the initial values, because the initial values converge along a line that passes through the true value and is perpendicular to the direction of the sun. The research in this paper is of some value for applications.
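
    A minimal numerical sketch of the ingredients described here (the design matrix, noise level and ridge parameter below are invented stand-ins, not the positioning model of the paper): the singular values diagnose the ill-posedness, and a simple biased (ridge-type) estimator is compared with ordinary least squares.

        import numpy as np

        rng = np.random.default_rng(2)

        # Invented ill-conditioned design matrix (two nearly collinear columns),
        # a stand-in for the linearized altitude-positioning model.
        n = 40
        t = np.linspace(0.0, 1.0, n)
        A = np.column_stack([np.ones(n), t, t + 1e-4 * rng.normal(size=n)])
        x_true = np.array([1.0, 2.0, -1.5])
        b = A @ x_true + 0.01 * rng.normal(size=n)

        # Singular values diagnose the ill-posedness.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        print("singular values:", s, "  condition number:", s[0] / s[-1])

        # Ordinary least squares versus a simple biased (ridge) estimator.
        x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
        lam = 1e-3                                  # illustrative ridge parameter
        x_ridge = Vt.T @ (s / (s**2 + lam) * (U.T @ b))
        print("OLS  :", x_ols)
        print("ridge:", x_ridge)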

  9. Thermophysical Property Estimation by Transient Experiments: The Effect of a Biased Initial Temperature Distribution

    Directory of Open Access Journals (Sweden)

    Federico Scarpa

    2015-01-01

    Full Text Available The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands the knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, will propagate in the estimation process and the accuracy of the reconstructed thermophysical property values could deteriorate. In this work the effect on the estimated thermophysical properties due to errors in the initial temperature distribution is investigated along with a practical method to quantify this effect. Furthermore, a technique for compensating this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is highly improved.

  10. How cognitive biases can distort environmental statistics: introducing the rough estimation task.

    Science.gov (United States)

    Wilcockson, Thomas D W; Pothos, Emmanuel M

    2016-04-01

    The purpose of this study was to develop a novel behavioural method to explore cognitive biases. The task, called the Rough Estimation Task, simply involves presenting participants with a list of words that can be in one of three categories: appetitive words (e.g. alcohol, food, etc.), neutral related words (e.g. musical instruments) and neutral unrelated words. Participants read the words and are then asked to state estimates for the percentage of words in each category. Individual differences in the propensity to overestimate the proportion of appetitive stimuli (alcohol-related or food-related words) in a word list were associated with behavioural measures (i.e. alcohol consumption, hazardous drinking, BMI, external eating and restrained eating, respectively), thereby providing evidence for the validity of the task. The task was also found to be associated with an eye-tracking attentional bias measure. The Rough Estimation Task is motivated in relation to intuitions with regard to both the behaviour of interest and the theory of cognitive biases in substance use.

  11. Systematic errors in low latency gravitational wave parameter estimation impact electromagnetic follow-up observations

    CERN Document Server

    Littenberg, Tyson B; Coughlin, Scott; Kalogera, Vicky

    2016-01-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by $>5\sigma$ using simple-precession waveforms and in excess of $20\sigma$ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find searched areas are up to a ...

  12. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.

  13. Regressions by leaps and bounds and biased estimation techniques in yield modeling

    Science.gov (United States)

    Marquina, N. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. It was observed that OLS was not adequate as an estimation procedure when the independent or regressor variables were involved in multicollinearities. This was shown to cause the presence of small eigenvalues of the extended correlation matrix A'A. It was demonstrated that the biased estimation techniques and the all-possible subset regression could help in finding a suitable model for predicting yield. Latent root regression was an excellent tool that found how many predictive and nonpredictive multicollinearities there were.
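
    To make the two ideas in this summary concrete (eigenvalue diagnosis of multicollinearity and all-possible-subset regression), here is a small illustrative Python sketch; the synthetic data and the brute-force subset search, used here in place of a true leaps-and-bounds algorithm, are assumptions of the example rather than the author's actual procedure.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(3)

        # Synthetic regressors with a built-in multicollinearity: x3 is almost x1 + x2.
        n = 100
        x1, x2 = rng.normal(size=(2, n))
        x3 = x1 + x2 + 0.01 * rng.normal(size=n)
        X = np.column_stack([x1, x2, x3])
        y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * rng.normal(size=n)

        # Small eigenvalues of the correlation matrix flag the multicollinearity.
        print("eigenvalues:", np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))

        def adj_r2(cols):
            """Adjusted R^2 of the regression of y on the given subset of columns."""
            Z = np.column_stack([np.ones(n), X[:, list(cols)]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            tss = np.sum((y - y.mean()) ** 2)
            return 1.0 - (rss / (n - Z.shape[1])) / (tss / (n - 1))

        # Brute-force all-possible-subsets search (a stand-in for leaps and bounds).
        subsets = [c for k in range(1, 4) for c in combinations(range(3), k)]
        best = max(subsets, key=adj_r2)
        print("best subset (column indices):", best, "  adj R^2:", round(adj_r2(best), 3))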

  14. The Robustness Optimization of Parameter Estimation in Chaotic Control Systems

    Directory of Open Access Journals (Sweden)

    Zhen Xu

    2014-10-01

    Full Text Available The standard particle swarm optimization algorithm suffers from poor adaptability and weak robustness in parameter estimation models for chaotic control systems. In light of this situation, this paper puts forward a new estimation model based on an improved particle swarm optimization algorithm. It first constrains the search space of the population with a Tent and Logistic double mapping to regulate the initialized population, optimizes the fitness value with an evolutionary state identification strategy so as to avoid premature convergence, optimizes the inertia weight with a nonlinear decrease strategy to reach better global and local optimal solutions, and then optimizes the iteration of the particle swarm optimization algorithm with a hybridization concept from genetic algorithms. Finally, this paper applies the model to parameter estimation for the control of chaotic systems. Simulation results show that the proposed parameter estimation model achieves higher accuracy, better anti-noise ability and stronger robustness than the model based on the standard particle swarm optimization algorithm.
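
    For orientation, the sketch below estimates the single parameter of a chaotic logistic map by minimizing the one-step prediction error with a plain particle swarm optimizer; it is an illustrative baseline only and does not include the Tent/Logistic initialization, evolutionary state identification or hybridization improvements described in the abstract.

        import numpy as np

        rng = np.random.default_rng(4)

        # "Observed" trajectory of a chaotic logistic map with unknown parameter r.
        r_true, N = 3.9, 200
        x = np.empty(N)
        x[0] = 0.2
        for k in range(N - 1):
            x[k + 1] = r_true * x[k] * (1.0 - x[k])
        x_obs = x + 0.001 * rng.normal(size=N)       # light measurement noise

        def cost(r):
            """One-step-ahead prediction error of the logistic map for parameter r."""
            pred = r * x_obs[:-1] * (1.0 - x_obs[:-1])
            return np.mean((x_obs[1:] - pred) ** 2)

        # Plain particle swarm optimization in one dimension.
        n_part, iters = 20, 100
        w, c1, c2 = 0.7, 1.5, 1.5
        pos = rng.uniform(2.5, 4.0, n_part)
        vel = np.zeros(n_part)
        pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_val.argmin()]
        for _ in range(iters):
            r1, r2 = rng.uniform(size=(2, n_part))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 2.5, 4.0)
            val = np.array([cost(p) for p in pos])
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[pbest_val.argmin()]
        print("estimated r:", gbest, "  true r:", r_true)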

  15. Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems

    DEFF Research Database (Denmark)

    Knudsen, Morten

    Estimation of physical parameters is an important subclass of system identification. The specific objective is to obtain accurate estimates of the model parameters, while the objective of other aspects of system identification might be to determine a model where other properties, such as responses for certain input in the time or frequency domain, are emphasised. Consequently, some special techniques are required, in particular for input signal design and model validation. The model structure containing physical parameters is constructed from basic physical laws (mathematical modelling). It is possible... variance and confidence ellipsoid is demonstrated. The relation is based on a new theorem on maxima of an ellipsoid. The procedure for input signal design and physical parameter estimation is tested on a number of examples, linear as well as nonlinear and simulated as well as real processes, and it appears...

  16. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  17. Squares of different sizes: effect of geographical projection on model parameter estimates in species distribution modeling.

    Science.gov (United States)

    Budic, Lara; Didenko, Gregor; Dormann, Carsten F

    2016-01-01

    In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using land coverage of a cell should decrease bias in species with coastal distribution. As anticipated, model coefficients were different between long-lat and equal-area projections. Having progressively smaller, and more numerous, cells with increasing latitude influenced the importance of parameters in models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
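
    The area-weighting idea is easy to sketch: for a long-lat raster the true cell area scales with the cosine of latitude, so each cell can be weighted accordingly in the regression. The toy data and the simple weighted least-squares fit below are illustrative assumptions, not the species distribution models used in the study.

        import numpy as np

        rng = np.random.default_rng(5)

        # Invented long-lat grid cells: latitude and one environmental predictor,
        # plus a response generated from a simple linear relationship.
        lat = rng.uniform(40.0, 80.0, 2000)
        temp = 20.0 - 0.4 * lat + rng.normal(0.0, 1.0, lat.size)
        y = 0.5 + 0.1 * temp + rng.normal(0.0, 0.2, lat.size)

        # A degree cell shrinks towards the poles, so weight each cell by its true
        # area, proportional to cos(latitude) for a long-lat raster.
        w = np.cos(np.deg2rad(lat))
        X = np.column_stack([np.ones(lat.size), temp])

        def wls(X, y, w):
            """Closed-form weighted least squares."""
            WX = X * w[:, None]
            return np.linalg.solve(X.T @ WX, WX.T @ y)

        print("unweighted coefficients   :", np.linalg.lstsq(X, y, rcond=None)[0])
        print("area-weighted coefficients:", wls(X, y, w))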

  18. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  19. The Minimax Estimator of Stochastic Regression Coefficients and Parameters in the Class of All Estimators

    Institute of Scientific and Technical Information of China (English)

    Li Wen XU; Song Gui WANG

    2007-01-01

    In this paper, the authors address the problem of the minimax estimator of linear combinations of stochastic regression coefficients and parameters in the general normal linear model with random effects. Under a quadratic loss function, the minimax property of linear estimators is investigated. In the class of all estimators, the minimax estimator of estimable functions, which is unique with probability 1, is obtained under a multivariate normal distribution.

  20. Re-constructing historical Adelie penguin abundance estimates by retrospectively accounting for detection bias.

    Directory of Open Access Journals (Sweden)

    Colin Southwell

    Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.

  1. Sources of bias in peoples' social-comparative estimates of food consumption.

    Science.gov (United States)

    Scherer, Aaron M; Bruchmann, Kathryn; Windschitl, Paul D; Rose, Jason P; Smith, Andrew R; Koestner, Bryan; Snetselaar, Linda; Suls, Jerry

    2016-06-01

    Understanding how healthfully people think they eat compared to others has implications for their motivation to engage in dietary change and the adoption of health recommendations. Our goal was to investigate the scope, sources, and measurements of bias in comparative food consumption beliefs. Across 4 experiments, participants made direct comparisons of how their consumption compared to their peers' consumption and/or estimated their personal consumption of various foods/nutrients and the consumption by peers, allowing the measurement of indirect comparisons. Critically, the healthiness and commonness of the foods varied. When the commonness and healthiness of foods both varied, indirect comparative estimates were more affected by the healthiness of the food, suggesting a role for self-serving motivations, while direct comparisons were more affected by the commonness of the food, suggesting egocentrism as a nonmotivated source of comparative bias. When commonness did not vary, the healthiness of the foods impacted both direct and indirect comparisons, with a greater influence on indirect comparisons. These results suggest that both motivated and nonmotivated sources of bias should be taken into account when creating interventions aimed at improving eating habits and highlight the need for researchers to be sensitive to how they measure perceptions of comparative eating habits. (PsycINFO Database Record) PMID:27054551

  2. Parameter estimation using a complete signal and inspiral templates for nonspinning low mass binary black holes with Advanced LIGO sensitivity

    CERN Document Server

    Cho, Hee-Suk

    2015-01-01

    We study the validity of inspiral templates in gravitational wave data analysis for nonspinning binary black holes with Advanced LIGO sensitivity. We use the phenomenological waveform model, which contains the inspiral-merger-ringdown (IMR) phases defined in the Fourier domain. For parameter estimation purposes, we calculate the statistical errors assuming the IMR signals and IMR templates for binaries with total masses M $\leq$ 30Msun. In particular, we explore the systematic biases caused by a mismatch between the IMR signal model (IMR) and the inspiral template model (Imerg), and investigate the impact on the parameter estimation accuracy by comparing the biases with the statistical errors. For detection purposes, we calculate the fitting factors of the inspiral templates with respect to the IMR signals. We find that the valid criteria for Imerg templates are obtained by Mcrit ~ 24Msun (if M < Mcrit, the fitting factor is higher than 0.97) for detection and M < 26Msun (where the systematic bias is ...

  3. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic medium to convert the effective parameters to their interval values.

  4. Evaluating parasite densities and estimation of parameters in transmission systems

    Directory of Open Access Journals (Sweden)

    Heinzmann D.

    2008-09-01

    Full Text Available Mathematical modelling of parasite transmission systems can provide useful information about host parasite interactions and biology and parasite population dynamics. In addition, good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models then need to be confronted with data to obtain parameter estimates, and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by highly aggregated distributions of parasites within the hosts. This causes additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them, and the use of these estimates in a transmission model.

  5. Assessment of exploration bias in data-driven predictive models and the estimation of undiscovered resources

    Science.gov (United States)

    Coolbaugh, M.F.; Raines, G.L.; Zehner, R.E.

    2007-01-01

    The spatial distribution of discovered resources may not fully mimic the distribution of all such resources, discovered and undiscovered, because the process of discovery is biased by accessibility factors (e.g., outcrops, roads, and lakes) and by exploration criteria. In data-driven predictive models, the use of training sites (resource occurrences) biased by exploration criteria and accessibility does not necessarily translate to a biased predictive map. However, problems occur when evidence layers correlate with these same exploration factors. These biases then can produce a data-driven model that predicts known occurrences well, but poorly predicts undiscovered resources. Statistical assessment of correlation between evidence layers and map-based exploration factors is difficult because it is difficult to quantify the "degree of exploration." However, if such a degree-of-exploration map can be produced, the benefits can be enormous. Not only does it become possible to assess this correlation, but it becomes possible to predict undiscovered, instead of discovered, resources. Using geothermal systems in Nevada, USA, as an example, a degree-of-exploration model is created, which then is resolved into purely explored and unexplored equivalents, each occurring within coextensive study areas. A weights-of-evidence (WofE) model is built first without regard to the degree of exploration, and then a revised WofE model is calculated for the "explored fraction" only. Differences in the weights between the two models provide a correlation measure between the evidence and the degree of exploration. The data used to build the geothermal evidence layers are perceived to be independent of degree of exploration. Nevertheless, the evidence layers correlate with exploration because exploration has preferred the same favorable areas identified by the evidence patterns. In this circumstance, however, the weights for the "explored" WofE model minimize this bias. Using these revised

  6. Estimation of bias and variance of measurements made from tomography scans

    Science.gov (United States)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
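
    A sketch of the simulation-extrapolation idea as it might apply to a scan-derived measurement (the synthetic volume, the noise level, the threshold-based measurement and the quadratic extrapolant below are all illustrative assumptions, not the paper's implementation): extra noise is injected at increasing levels, the measurement is repeated, and the trend is extrapolated back to zero effective noise.

        import numpy as np

        rng = np.random.default_rng(6)

        # Invented ground-truth volume and a single noisy "scan" of it.
        truth = (rng.uniform(size=(64, 64, 64)) < 0.3).astype(float)
        sigma = 0.4                                   # scan noise level, assumed known
        scan = truth + sigma * rng.normal(size=truth.shape)

        def measure(vol):
            """Example measurement: volume fraction above a fixed grey-level threshold."""
            return np.mean(vol > 0.5)

        # SIMEX: add extra noise at levels lam, re-measure, fit a quadratic trend in lam
        # and extrapolate back to lam = -1, i.e. to zero effective noise.
        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        means = [np.mean([measure(scan + np.sqrt(lam) * sigma * rng.normal(size=scan.shape))
                          for _ in range(25)]) for lam in lambdas]
        coef = np.polyfit(lambdas, means, deg=2)
        simex = np.polyval(coef, -1.0)

        print("measurement on noisy scan  :", measure(scan))
        print("SIMEX-corrected estimate   :", simex)
        print("measurement on ground truth:", measure(truth))
        print("estimated bias of the scan :", measure(scan) - simex)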

  7. Estimation of dynamical model parameters taking into account undetectable marker values

    Directory of Open Access Journals (Sweden)

    Trimoulet Pascale

    2006-08-01

    Full Text Available Abstract Background Mathematical models are widely used for studying the dynamic of infectious agents such as hepatitis C virus (HCV). Most often, model parameters are estimated using standard least-square procedures for each individual. Hierarchical models have been proposed in such applications. However, another issue is the left-censoring (undetectable values) of plasma viral load due to the lack of sensitivity of assays used for quantification. A method is proposed to take into account left-censored values for estimating parameters of non linear mixed models and its impact is demonstrated through a simulation study and an actual clinical trial of anti-HCV drugs. Methods The method consists in a full likelihood approach distinguishing the contribution of observed and left-censored measurements assuming a lognormal distribution of the outcome. Parameters of the analytical solution of the system of differential equations taking into account left-censoring are estimated using standard software. Results A simulation study with only 14% of measurements being left-censored showed that model parameters were largely biased (from -55% to +133% according to the parameter), with the exception of the estimate of the initial outcome value, when left-censored viral load values are replaced by the value of the threshold. When left-censoring was taken into account, the relative bias on fixed effects was equal to or less than 2%. Then, parameters were estimated using the 100 measurements of HCV RNA available (with 12% of left-censored values) during the first 4 weeks following treatment initiation in the 17 patients included in the trial. Differences between estimates according to the method used were clinically significant, particularly on the death rate of infected cells. With the crude approach the estimate was 0.13 day-1 (95% confidence interval [CI]: 0.11; 0.17) compared to 0.19 day-1 (CI: 0.14; 0.26) when taking into account left-censoring. The relative differences between
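
    The core of the full-likelihood idea can be sketched in a few lines: quantified values contribute the density and left-censored values contribute the cumulative probability below the limit of quantification. The lognormal example below, including the crude limit-of-quantification substitution shown for contrast, uses invented numbers and is not the mixed-effects model of the trial.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(7)

        # Invented log10 viral loads with a lower limit of quantification (LOQ).
        mu_true, sd_true, loq = 2.0, 1.0, 1.5
        z = rng.normal(mu_true, sd_true, size=100)
        observed = z[z >= loq]                      # quantified measurements
        n_cens = int(np.sum(z < loq))               # left-censored measurements

        def neg_log_lik(theta):
            """Observed values contribute the density, censored ones the CDF at the LOQ."""
            mu, log_sd = theta
            sd = np.exp(log_sd)
            ll = stats.norm.logpdf(observed, mu, sd).sum()
            ll += n_cens * stats.norm.logcdf(loq, mu, sd)
            return -ll

        fit = optimize.minimize(neg_log_lik, x0=[observed.mean(), 0.0], method="Nelder-Mead")
        mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])

        # Crude alternative for contrast: replace censored values by the LOQ itself.
        crude = np.concatenate([observed, np.full(n_cens, loq)])
        print("censoring-aware MLE   :", mu_hat, sd_hat)
        print("crude LOQ substitution:", crude.mean(), crude.std(ddof=1))
        print("true values           :", mu_true, sd_true)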

  8. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    CERN Document Server

    Albers, DJ

    2011-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
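
    A rough illustration of the proposed bias estimate (using an AR(1) surrogate series and a plug-in histogram estimator, both assumptions of this sketch rather than the paper's data or estimator): the time-delayed mutual information at very large lags, where true dependence is negligible, is taken as an empirical bias and subtracted from the small-lag values.

        import numpy as np

        rng = np.random.default_rng(8)

        def mutual_information(x, y, bins=16):
            """Plug-in (histogram) estimate of the mutual information in nats."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

        def tdmi(series, lag, bins=16):
            """Time-delayed mutual information at the given lag."""
            return mutual_information(series[:-lag], series[lag:], bins)

        # Surrogate autocorrelated series (AR(1)); the paper uses Lorenz and glucose data.
        n, phi = 20_000, 0.95
        eps = rng.normal(size=n)
        x = np.empty(n)
        x[0] = 0.0
        for k in range(n - 1):
            x[k + 1] = phi * x[k] + eps[k]

        # Bias estimate: TDMI between points separated by very large lags, where the
        # true dependence is negligible, so whatever remains is estimator bias.
        bias = np.mean([tdmi(x, lag) for lag in (5000, 6000, 7000)])
        for lag in (1, 10, 50, 200):
            raw = tdmi(x, lag)
            print(f"lag {lag:4d}: raw {raw:.3f}  bias-corrected {raw - bias:.3f}")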

  9. Parameter estimation of general regression neural network using Bayesian approach

    Science.gov (United States)

    Choir, Achmad Syahrul; Prasetyo, Rindang Bangun; Ulama, Brodjol Sutijo Suprih; Iriawan, Nur; Fitriasari, Kartika; Dokhi, Mohammad

    2016-02-01

    General Regression Neural Network (GRNN) has been applied in a large number of forecasting/prediction problems. Generally, there are two types of GRNN: GRNN based on kernel density estimation, and Mixture Based GRNN (MBGRNN), which is based on an adaptive mixture model. The main problem in GRNN modeling lies in how its parameters are estimated. In this paper, we propose a Bayesian approach and its computation using Markov Chain Monte Carlo (MCMC) algorithms for estimating the MBGRNN parameters. This method is applied in a simulation study, in which its performance is measured by using MAPE, MAE and RMSE. The application of the Bayesian method to estimate MBGRNN parameters using MCMC is straightforward, but it needs many iterations to achieve convergence.
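
    For reference, the kernel-density flavour of GRNN reduces to Nadaraya-Watson kernel regression, as in the short sketch below; the toy data and the fixed bandwidth are assumptions of the example, whereas the abstract's MBGRNN/MCMC machinery is concerned with estimating such parameters rather than fixing them by hand.

        import numpy as np

        def grnn_predict(x_train, y_train, x_query, bandwidth):
            """Kernel-density-based GRNN prediction (Nadaraya-Watson, Gaussian kernel)."""
            d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
            w = np.exp(-d2 / (2.0 * bandwidth**2))
            return (w @ y_train) / w.sum(axis=1)

        # Invented 1-D regression problem; the bandwidth is the kind of parameter a
        # Bayesian/MCMC scheme would estimate instead of fixing it by hand.
        rng = np.random.default_rng(9)
        x = rng.uniform(0.0, 2 * np.pi, (200, 1))
        y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=200)
        xq = np.linspace(0.0, 2 * np.pi, 5)[:, None]
        print("GRNN predictions:", grnn_predict(x, y, xq, bandwidth=0.3))
        print("true sin values :", np.sin(xq[:, 0]))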

  10. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.

  11. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    Directory of Open Access Journals (Sweden)

    Baker Syed

    2011-01-01

    Full Text Available Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method, which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.

  12. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    Science.gov (United States)

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is called to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution.

  13. Cosmological Parameter Estimation and Window Function in Counts-in-Cell Analysis

    Science.gov (United States)

    Murata, Y.; Matsubara, T.

    2006-11-01

    We estimate the cosmological parameter bounds expected from the counts-in-cells analysis of the galaxy distributions of SDSS samples, which are the Main Galaxies (MGs) and the Luminous Red Galaxies (LRGs). We use the m-weight Epanechnikov kernel as window function, with the expectation of improving the bounds on the parameters. We apply the Fisher information matrix analysis, which can estimate the minimum expected parameter bounds without any data. In this analysis, we derive the covariance matrix that takes into account the overlapping of cells. As a result, we found that the signal-to-noise of the LRG sample is higher than that of the MG sample because only data on linear scales are used. Therefore, the LRG sample is more suitable for parameter estimation. For the LRG sample, about six hundred data points are sufficient to get the maximum effect on the parameter bounds. A large parameter set results in poor bounds because of degeneracy; the matter density, the baryon fraction, the neutrino density and σ8^2, including the amplitude of the power spectrum, the linear bias and the Kaiser effect, seems to be an appropriate set.
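
    The Fisher-matrix step itself is compact; the sketch below applies it to a toy power-law observable with a diagonal covariance (both invented for illustration, unlike the counts-in-cells covariance with overlapping cells derived in the paper) and reads the minimum expected parameter bounds off the inverse Fisher matrix.

        import numpy as np

        # Toy observable: a power law P(k) = A * k**n sampled at a few scales, with an
        # assumed diagonal data covariance (5% errors); numbers are purely illustrative.
        k = np.linspace(0.02, 0.2, 30)
        A_fid, n_fid = 1.0, -1.0
        cov = np.diag((0.05 * A_fid * k**n_fid) ** 2)

        def model(A, n):
            return A * k**n

        # Numerical derivatives of the model with respect to the two parameters.
        eps = 1e-5
        dA = (model(A_fid + eps, n_fid) - model(A_fid - eps, n_fid)) / (2 * eps)
        dn = (model(A_fid, n_fid + eps) - model(A_fid, n_fid - eps)) / (2 * eps)
        J = np.column_stack([dA, dn])

        # Fisher information matrix and the resulting minimum parameter bounds.
        F = J.T @ np.linalg.inv(cov) @ J
        bounds = np.sqrt(np.diag(np.linalg.inv(F)))
        print("expected 1-sigma bounds on (A, n):", bounds)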

  14. Parameter estimation with an iterative version of the adaptive Gaussian mixture filter

    Science.gov (United States)

    Stordal, A.; Lorentzen, R.

    2012-04-01

    The adaptive Gaussian mixture filter (AGM) was introduced in Stordal et al. (ECMOR 2010) as a robust filter technique for large scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). It consists of two analysis steps, one linear update and one weighting/resampling step. The bias of AGM is determined by two parameters, one adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and one pre-determined bandwidth parameter which decides the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases with increasing sample size, the filter can achieve asymptotic optimality. For large scale applications with a limited sample size the filter solution may be far from optimal as the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be selected significantly different from zero in order to make large enough linear updates to match the data, at the expense of bias in the estimates. In the iterative AGM we take advantage of the fact that the history matching problem is usually estimation of parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter that gets close to one. Hence the bias of the filter solution is small. In order to obtain this scenario we iteratively run the AGM throughout the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of

  15. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
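
    The pipeline is easy to mock up: an amplitude-invariant Clarke (alpha-beta) transformation turns the three phases into an orthogonal pair, which is then fitted by nonlinear least squares. The sketch below uses an invented unbalanced signal and a generic trust-region solver in place of the paper's Newton-Raphson implementation, and models the alpha-beta signal as positive plus negative sequence components.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(10)

        # Invented unbalanced three-phase voltages sampled over a few cycles.
        f_true, fs, N = 50.2, 5000.0, 1000
        t = np.arange(N) / fs
        amps, phase0, noise = (1.00, 0.85, 1.10), 0.3, 0.01
        va = amps[0] * np.cos(2 * np.pi * f_true * t + phase0)
        vb = amps[1] * np.cos(2 * np.pi * f_true * t + phase0 - 2 * np.pi / 3)
        vc = amps[2] * np.cos(2 * np.pi * f_true * t + phase0 + 2 * np.pi / 3)
        va, vb, vc = (v + noise * rng.normal(size=N) for v in (va, vb, vc))

        # Amplitude-invariant Clarke (alpha-beta) transformation.
        v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
        v_beta = (vb - vc) / np.sqrt(3.0)

        def residuals(p):
            """Positive- plus negative-sequence model of the alpha-beta space vector."""
            f, a_pos, ph_pos, a_neg, ph_neg = p
            v = (a_pos * np.exp(1j * (2 * np.pi * f * t + ph_pos))
                 + a_neg * np.exp(-1j * (2 * np.pi * f * t + ph_neg)))
            return np.concatenate([v_alpha - v.real, v_beta - v.imag])

        fit = least_squares(residuals, x0=[50.0, 1.0, 0.0, 0.1, 0.0])
        print("estimated frequency:", fit.x[0], "Hz (true", f_true, "Hz)")
        print("positive/negative sequence amplitudes:", fit.x[1], fit.x[3])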

  16. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Jieming Ma

    2013-01-01

    Full Text Available Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) draws its inspiration from the brood parasitic behavior of some cuckoo species, combined with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, reflected in a low Root-Mean-Squared-Error (RMSE) value. The proposed method outperforms other algorithms applied in this study.
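
    A compact illustration of the Cuckoo Search loop with Mantegna-style Lévy steps. To keep it self-contained it minimizes the RMSE of a toy two-parameter curve fit rather than a full single-diode PV model; the population size, step scale and toy model are assumptions, not the authors' settings.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Toy estimation problem: recover (a, b) of y = a * exp(-b * x) from noisy data.
x = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(-1.2 * x) + rng.normal(scale=0.02, size=x.size)
def rmse(p):
    return np.sqrt(np.mean((p[0] * np.exp(-p[1] * x) - y) ** 2))

def cuckoo_search(cost, bounds, n_nests=25, pa=0.25, iters=500, alpha=0.05):
    lo, hi = bounds
    dim = lo.size
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([cost(n) for n in nests])
    best = nests[fit.argmin()].copy()
    for _ in range(iters):
        # Generate new solutions by Levy flights around the current best nest.
        for i in range(n_nests):
            cand = np.clip(nests[i] + alpha * levy_step(dim) * (nests[i] - best), lo, hi)
            f = cost(cand)
            if f < fit[i]:
                nests[i], fit[i] = cand, f
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, (worst.size, dim))
        fit[worst] = [cost(n) for n in nests[worst]]
        best = nests[fit.argmin()].copy()
    return best, cost(best)

best, best_rmse = cuckoo_search(rmse, (np.array([0.0, 0.0]), np.array([10.0, 5.0])))
print("estimated (a, b):", best, "RMSE:", best_rmse)
```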

  17. Bias and robustness of uncertainty components estimates in transient climate projections

    Science.gov (United States)

    Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal

    2016-04-01

    A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually raises different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult if not impossible to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based only on the data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
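
    A schematic of the two-source variance partition discussed above: internal variability is estimated from the within-chain spread of members, and the naive between-chain variance of the chain means is debiased by subtracting the sampling noise of those means. The ensemble sizes and synthetic "climate responses" below are assumptions, not the STANOVA/QEANOVA estimators of Hingray and Saïd.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic projection ensemble: n_chains modelling chains, each with n_members members.
n_chains, n_members = 8, 5
true_model_sd, true_internal_sd = 0.6, 1.0
chain_response = rng.normal(2.0, true_model_sd, n_chains)        # climate response per chain
members = chain_response[:, None] + rng.normal(0.0, true_internal_sd,
                                               (n_chains, n_members))

# Internal variability: mean within-chain variance of the members.
var_internal = members.var(axis=1, ddof=1).mean()

# Naive model uncertainty: empirical variance of the chain means (biased upward,
# because each chain mean still carries internal variability / n_members).
chain_means = members.mean(axis=1)
var_model_naive = chain_means.var(ddof=1)

# Debiased estimate: subtract the sampling noise of the chain means.
var_model_unbiased = max(var_model_naive - var_internal / n_members, 0.0)

print(f"internal variability estimate : {var_internal:.3f} (true {true_internal_sd**2:.3f})")
print(f"model uncertainty, naive      : {var_model_naive:.3f}")
print(f"model uncertainty, debiased   : {var_model_unbiased:.3f} (true {true_model_sd**2:.3f})")
```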

  18. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    S C Raisinghani; A K Ghosh

    2000-04-01

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic variables or their time derivatives, is proposed. Furthermore, it is shown that such a neural model can be used to extract equivalent stability and control derivatives of a flexible aircraft. Results are presented for aircraft with different levels of flexibility to demonstrate the utility of the neural approach for both modelling and estimation of parameters.

  19. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are known as general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative processes that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms is investigated to estimate parameters in liquid chromatography.
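
    A bare-bones real-coded genetic algorithm of the kind described above, used here to estimate two parameters of a toy saturation-type model by minimizing a sum of squared residuals. The model, the population size and the operators are illustrative assumptions, not the chromatography setup of the record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inverse problem: recover (k1, k2) of y = k1 * x / (1 + k2 * x) from noisy data.
x = np.linspace(0.1, 5.0, 30)
y_obs = 2.5 * x / (1 + 0.8 * x) + rng.normal(scale=0.02, size=x.size)

def cost(p):
    k1, k2 = p
    return np.sum((k1 * x / (1 + k2 * x) - y_obs) ** 2)

def genetic_algorithm(cost, lo, hi, pop_size=40, gens=200,
                      crossover_rate=0.9, mutation_sd=0.1):
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        fitness = np.array([cost(ind) for ind in pop])
        def pick():
            # Binary tournament selection of a parent.
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fitness[i] < fitness[j] else pop[j]
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < crossover_rate:            # arithmetic crossover
                w = rng.random()
                child = w * p1 + (1 - w) * p2
            else:
                child = p1.copy()
            child += rng.normal(0.0, mutation_sd, dim)   # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        children = np.array(children)
        children[0] = pop[fitness.argmin()]              # elitism
        pop = children
    fitness = np.array([cost(ind) for ind in pop])
    return pop[fitness.argmin()]

best = genetic_algorithm(cost, np.array([0.0, 0.0]), np.array([10.0, 10.0]))
print("estimated (k1, k2):", best)
```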

  20. Estimation of octanol/water partition coefficients using LSER parameters

    Science.gov (United States)

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated by LSER parameters without elaborate software, but only moderate accuracy should be expected.
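
    The regression step described above reduces to ordinary multiple linear regression of logKow on the LSER descriptors. A minimal sketch on synthetic descriptor data follows; the coefficients and descriptor values are placeholders, not the fitted values of the record.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic LSER-style descriptors for a training set (placeholder values).
n = 200
X = rng.normal(size=(n, 4))                  # e.g. volume, polarity, H-bond acidity/basicity
true_beta = np.array([0.35, 2.8, -1.5, -3.4, -4.6])     # intercept + 4 slopes (assumed)
logKow = true_beta[0] + X @ true_beta[1:] + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit via lstsq on the design matrix with an intercept column.
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, logKow, rcond=None)
residual_sd = np.std(logKow - A @ beta_hat, ddof=A.shape[1])

print("fitted coefficients:", np.round(beta_hat, 3))
print("residual standard deviation:", round(residual_sd, 3))

# Prediction for a new compound's descriptor vector (illustrative values).
x_new = np.array([0.5, -0.2, 1.0, 0.3])
print("predicted logKow:", float(beta_hat[0] + x_new @ beta_hat[1:]))
```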

  1. Parameter Estimation in Stochastic Differential Equations; An Overview

    DEFF Research Database (Denmark)

    Nielsen, Jan Nygaard; Madsen, Henrik; Young, P. C.

    2000-01-01

    This paper presents an overview of the progress of research on parameter estimation methods for stochastic differential equations (mostly in the sense of Ito calculus) over the period 1981-1999. These are considered both without measurement noise and with measurement noise, where the discretely...... observed stochastic differential equations are embedded in a continuous-discrete time state space model. Every attempt has been made to include results from other scientific disciplines. Maximum likelihood estimation of parameters in nonlinear stochastic differential equations is in general not possible...

  2. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
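
    A sketch of fitting single-diode parameters to a set of I-V points: the implicit diode equation is solved per voltage with a bracketed root finder inside a least-squares loop. The synthetic module parameters, thermal voltage and starting guesses are assumptions, and the sketch ignores the irradiance/temperature dependence handled by the full method in the record.

```python
import numpy as np
from scipy.optimize import brentq, least_squares

VTH = 0.02565 * 60     # thermal voltage times an assumed 60 cells in series, in volts

def diode_current(v, iph, i0, n, rs, rsh):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh for I at one voltage."""
    def f(i):
        arg = np.clip((v + i * rs) / (n * VTH), None, 700.0)  # avoid overflow; keeps f monotone
        return iph - i0 * (np.exp(arg) - 1.0) - (v + i * rs) / rsh - i
    lo, hi = -1.0, iph + 1.0
    while f(lo) < 0.0:          # widen the bracket downward if needed (f decreases in i)
        lo *= 2.0
    return brentq(f, lo, hi)

# Synthesize one noisy I-V curve from assumed "true" parameters.
TRUE = (8.0, 1e-9, 1.2, 0.3, 300.0)          # Iph, I0, n, Rs, Rsh (assumed values)
v_meas = np.linspace(0.0, 40.0, 50)
rng = np.random.default_rng(5)
i_meas = np.array([diode_current(v, *TRUE) for v in v_meas]) + rng.normal(scale=0.01, size=50)

def residuals(theta):
    return np.array([diode_current(v, *theta) for v in v_meas]) - i_meas

x0 = [7.5, 1e-8, 1.4, 0.2, 200.0]            # rough starting guesses (assumed)
fit = least_squares(residuals, x0,
                    bounds=([0.0, 1e-12, 0.8, 0.0, 10.0], [20.0, 1e-6, 2.0, 2.0, 5000.0]))
print(dict(zip(["Iph", "I0", "n", "Rs", "Rsh"], fit.x.round(4))))
```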

  3. Estimation of regional pulmonary perfusion parameters from microfocal angiograms

    Science.gov (United States)

    Clough, Anne V.; Al-Tinawi, Amir; Linehan, John H.; Dawson, Christopher A.

    1995-05-01

    An important application of functional imaging is the estimation of regional blood flow and volume using residue detection of vascular indicators. An indicator-dilution model applicable to tissue regions distal from the inlet site was developed. Theoretical methods for determining regional blood flow, volume, and mean transit time parameters from time-absorbance curves arise from this model. The robustness of the parameter estimation methods was evaluated using a computer-simulated vessel network model. Flow through arterioles, networks of capillaries, and venules was simulated. Parameter identification and practical implementation issues were addressed. The shape of the inlet concentration curve and moderate amounts of random noise did not affect the ability of the method to recover accurate parameter estimates. The parameter estimates degraded in the presence of significant dispersion of the measured inlet concentration curve as it traveled through arteries upstream from the microvascular region. The methods were applied to image data obtained using microfocal x-ray angiography to study the pulmonary microcirculation. Time-absorbance curves were acquired from a small feeding artery, the surrounding microvasculature and a draining vein of an isolated dog lung as contrast material passed through the field-of-view. Changes in regional microvascular volume were determined from these curves.

  4. Estimation of diffusion parameters for discretely observed diffusion processes

    OpenAIRE

    Sørensen, Helle

    2002-01-01

    We study the estimation of diffusion parameters for one-dimensional, ergodic diffusion processes that are discretely observed. We discuss a method based on a functional relationship between the drift function, the diffusion function and the invariant density and use empirical process theory to show that the estimator is $\sqrt{n}$-consistent and in certain cases weakly convergent. The Chan-Karolyi-Longstaff-Sanders (CKLS) model is used as an example and a numerical example i...

  5. Optimum location of sensors used for mould parameters estimation

    OpenAIRE

    E. Majchrzak; J. Mendakiewicz

    2010-01-01

    Heat transfer processes proceeding in the system casting-mould-environment are considered. In particular, the inverse problem connected with the estimation of thermal conductivity and volumetric specific heat of mould material is presented. To estimate the parameters, the additional information concerning the temperature history at the points selected from domain considered is necessary. The essential problem is a proper choice of sensors localization. The application of sensitivity analysis ...

  6. Comparison of Jump-Diffusion Parameters Using Passage Times Estimation

    Directory of Open Access Journals (Sweden)

    K. Khaldi

    2014-01-01

    Full Text Available The main purposes of this paper are two contributions: (1) it presents a new method, the first passage time generalized to all passage times (PT) method, in order to estimate the parameters of a stochastic jump-diffusion process; (2) it compares, in a time series model of the share price of gold, the empirical results of the estimation and forecasts obtained with the PT method and those obtained by the method of moments applied to the MJD model.

  7. Human ECG signal parameters estimation during controlled physical activity

    Science.gov (United States)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.

  8. Misleading population estimates: biases and consistency of visual surveys and matrix modelling in the endangered bearded vulture.

    Directory of Open Access Journals (Sweden)

    Antoni Margalida

    Full Text Available Conservation strategies for long-lived vertebrates require accurate estimates of parameters relative to the populations' size, numbers of non-breeding individuals (the "cryptic" fraction of the population) and the age structure. Frequently, visual survey techniques are used to make these estimates but the accuracy of these approaches is questionable, mainly because of the existence of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (Gypaetus barbatus) population from visual surveys performed at supplementary feeding stations with data derived from population matrix-modelling approximations. Our results suggest that visual surveys overestimate the number of immature birds (<6 y.o.), whereas adults (>6 y.o.) were underestimated in comparison with the predictions of a population model using a stable-age distribution. In addition, we found that visual surveys did not provide conclusive information on true variations in the size of the focal population. Our results suggest that although long-term studies (i.e. population matrix modelling based on capture-recapture procedures) are a more time-consuming method, they provide more reliable and robust estimates of population parameters needed in designing and applying conservation strategies. The findings shown here are likely transferable to the management and conservation of other long-lived vertebrate populations that share similar life-history traits and ecological requirements.

  9. Ocean wave parameters and spectrum estimated from single and dual high-frequency radar systems

    Science.gov (United States)

    Hisaki, Yukiharu

    2016-09-01

    The high-frequency (HF) radar inversion algorithm for spectrum estimation (HIAS) can estimate ocean wave directional spectra from both dual and single radar. Wave data from a dual radar and two single radars are compared with in situ observations. The agreement of the wave parameters estimated from the dual radar with those from in situ observations is the best of the three. In contrast, the agreement of the wave parameters estimated from the single radar in which no Doppler spectra are observed in the cell closest to the in situ observation point is the worst among the three. Wave data from the dual radar and the two single radars are compared. The comparison of the wave heights estimated from the single and dual radars shows that the area sampled by the Doppler spectra for the single radar is more critical than the number of Doppler spectra in terms of agreement with the dual-radar-estimated wave heights. In contrast, the comparison of the wave periods demonstrates that the number of Doppler spectra observed by the single radar is more critical for agreement of the wave periods than the area of the Doppler spectra. There is a bias directed to the radar position in the single radar estimated wave direction.

  10. Error and bias in size estimates of whale sharks: implications for understanding demography.

    Science.gov (United States)

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.

  13. Optimal measurement locations for parameter estimation of non linear distributed parameter systems

    Directory of Open Access Journals (Sweden)

    J. E. Alaña

    2010-12-01

    Full Text Available A sensor placement approach for the purpose of accurately estimating unknown parameters of a distributed parameter system is discussed. The idea is to convert the sensor location problem to a classical experimental design. The technique consists of analysing the extrema values of the sensitivity coefficients derived from the system and their corresponding spatial positions. This information is used to formulate an efficient computational optimum experiment design on discrete domains. The scheme studied is verified by a numerical example regarding the chemical reaction in a tubular reactor for two possible scenarios: stable and unstable operating conditions. The resulting approach is easy to implement and good estimates for the parameters of the system are obtained. This study shows that the measurement location plays an essential role in the parameter estimation procedure.

  14. GLONASS fractional-cycle bias estimation across inhomogeneous receivers for PPP ambiguity resolution

    Science.gov (United States)

    Geng, Jianghui; Bock, Yehuda

    2016-04-01

    The key issue to enable precise point positioning with ambiguity resolution (PPP-AR) is to estimate fractional-cycle biases (FCBs), which mainly relate to receiver and satellite hardware biases, over a network of reference stations. While this has been well achieved for GPS, FCB estimation for GLONASS is difficult because (1) satellites do not share the same frequencies as a result of Frequency Division Multiple Access (FDMA) signals; (2) and even worse, pseudorange hardware biases of receivers vary in an irregular manner with manufacturers, antennas, domes, firmware, etc., which especially complicates GLONASS PPP-AR over inhomogeneous receivers. We propose a general approach where external ionosphere products are introduced into GLONASS PPP to estimate precise FCBs that are less impaired by pseudorange hardware biases of diverse receivers to enable PPP-AR. One month of GLONASS data at about 550 European stations were processed. From an exemplary network of 51 inhomogeneous receivers, including four receiver types with various antennas and spanning about 800 km in both longitudinal and latitudinal directions, we found that 92.4 % of all fractional parts of GLONASS wide-lane ambiguities agree well within ± 0.15 cycles with a standard deviation of 0.09 cycles if global ionosphere maps (GIMs) are introduced, compared to only 51.7 % within ± 0.15 cycles and a larger standard deviation of 0.22 cycles otherwise. Hourly static GLONASS PPP-AR at 40 test stations can reach position estimates of about 1 and 2 cm in RMS from ground truth for the horizontal and vertical components, respectively, which is comparable to hourly GPS PPP-AR. Integrated GLONASS and GPS PPP-AR can further achieve an RMS of about 0.5 cm in horizontal and 1-2 cm in vertical components. We stress that the performance of GLONASS PPP-AR across inhomogeneous receivers depends on the accuracy of ionosphere products. GIMs have a modest accuracy of only 2-8 TECU (Total Electron Content Unit) in vertical

  15. An own-age bias in age estimation of faces in children and adults.

    OpenAIRE

    Moyse, Evelyne; Brédart, Serge

    2010-01-01

    The aim of the present study was to assess the occurrence of an own-age bias on age estimation performance (better performance for faces from the same age range as that of the beholder) by using an experimental design inspired by research on the own-race effect. The age of participants (10 to 14 year old children and 20 to 30 year old adults) was an independent factor that was crossed with the age of the stimuli (faces of 10 to 14 year old children and faces of 20 to 30 year old adults), the...

  16. Low Complexity Parameter Estimation For Off-the-Grid Targets

    KAUST Repository

    Jardak, Seifallah

    2015-10-05

    In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced complexity super resolution algorithm is proposed. For off-the-grid targets, it uses a low order two dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation errors of the proposed estimators achieve the Cramér-Rao lower bound. © 2015 IEEE.

  17. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates the parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM provides a better data fit, more reasonable parametric estimates, and smaller prediction error compared with TSM.

  18. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
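
    Consistent with the Maximum Likelihood / Least Squares step mentioned above, a minimal least-squares fit of Weibull shape and scale from a probability plot with median-rank plotting positions. The synthetic fatigue-life sample and the plotting-position formula are assumptions, and the sketch applies no further corrections.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic complete sample of "fatigue lives" from an assumed Weibull distribution.
true_shape, true_scale = 2.0, 1.0e5
t = true_scale * rng.weibull(true_shape, size=30)

# Weibull probability plot: ln(-ln(1 - F_i)) is linear in ln(t_i) with slope = shape.
t_sorted = np.sort(t)
i = np.arange(1, t_sorted.size + 1)
F = (i - 0.3) / (t_sorted.size + 0.4)          # Bernard's median-rank approximation
x = np.log(t_sorted)
y = np.log(-np.log(1.0 - F))

slope, intercept = np.polyfit(x, y, 1)
shape_hat = slope
scale_hat = np.exp(-intercept / slope)         # since intercept = -shape * ln(scale)

print(f"LSE shape (beta): {shape_hat:.3f}  (true {true_shape})")
print(f"LSE scale (eta) : {scale_hat:.1f}  (true {true_scale})")
```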

  19. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as a mean square error for deviations between the measurement waveforms and the waveforms calculated based on the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  20. On Modal Parameter Estimates from Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Agneni, A.; Brincker, Rune; Coppotelli, B.

    2004-01-01

    Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...

  1. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  2. A parameter estimation framework for patient-specific hemodynamic computations

    Science.gov (United States)

    Itu, Lucian; Sharma, Puneet; Passerini, Tiziano; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2015-01-01

    We propose a fully automated parameter estimation framework for performing patient-specific hemodynamic computations in arterial models. To determine the personalized values of the windkessel models, which are used as part of the geometrical multiscale circulation model, a parameter estimation problem is formulated. Clinical measurements of pressure and/or flow-rate are imposed as constraints to formulate a nonlinear system of equations, whose fixed point solution is sought. A key feature of the proposed method is a warm-start to the optimization procedure, with better initial solution for the nonlinear system of equations, to reduce the number of iterations needed for the calibration of the geometrical multiscale models. To achieve these goals, the initial solution, computed with a lumped parameter model, is adapted before solving the parameter estimation problem for the geometrical multiscale circulation model: the resistance and the compliance of the circulation model are estimated and compensated. The proposed framework is evaluated on a patient-specific aortic model, a full body arterial model, and multiple idealized anatomical models representing different arterial segments. For each case it leads to the best performance in terms of number of iterations required for the computational model to be in close agreement with the clinical measurements.

  3. Online vegetation parameter estimation using passive microwave remote sensing observations

    Science.gov (United States)

    In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...

  4. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
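
    As a companion to the gradient-search exercise described above (the record offers Mathematica code), a Python sketch fitting the closed-form logistic solution with curve_fit. The synthetic "population" data and the starting guesses are assumptions, not the historical microbial data of the record.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, P0):
    """Closed-form solution of dP/dt = r*P*(1 - P/K) with P(0) = P0."""
    return K / (1.0 + (K - P0) / P0 * np.exp(-r * t))

# Synthetic growth data (placeholder for e.g. microbial counts over time).
rng = np.random.default_rng(7)
t = np.linspace(0.0, 24.0, 25)
P = logistic(t, K=660.0, r=0.55, P0=10.0) * (1.0 + rng.normal(scale=0.03, size=t.size))

p0 = [500.0, 0.3, 5.0]                         # rough initial guesses (assumed)
params, cov = curve_fit(logistic, t, P, p0=p0)
K_hat, r_hat, P0_hat = params
perr = np.sqrt(np.diag(cov))                   # one-sigma parameter uncertainties

print(f"K  = {K_hat:.1f} ± {perr[0]:.1f}")
print(f"r  = {r_hat:.3f} ± {perr[1]:.3f}")
print(f"P0 = {P0_hat:.2f} ± {perr[2]:.2f}")
```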

  5. A parameter identifiability and estimation study in Yesilirmak River.

    Science.gov (United States)

    Berber, R; Yuceer, M; Karadurmus, E

    2009-01-01

    Water quality models have relatively large number of parameters, which need to be estimated against observed data through a non-trivial task that is associated with substantial difficulties. This work involves a systematic model calibration and validation study for river water quality. The model considered was composed of dynamic mass balances for eleven pollution constituents, stemming from QUAL2E water quality model by considering a river segment as a series of continuous stirred-tank reactors (CSTRs). Parameter identifiability was analyzed from the perspective of sensitivity measure and collinearity index, which indicated that 8 parameters would fall within the identifiability range. The model parameters were then estimated by an integration based optimization algorithm coupled with sequential quadratic programming. Dynamic field data consisting of major pollutant concentrations were collected from sampling stations along Yesilirmak River around the city of Amasya in Turkey, and compared with model predictions. The calibrated model responses were in good agreement with the observed river water quality data, and this indicated that the suggested procedure provided an effective means for reliable estimation of model parameters and dynamic simulation for river streams. PMID:19214006

  6. Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images

    Science.gov (United States)

    Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.

    2016-04-01

    Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all-weather conditions. Thus, SAR images are suitable for use in rice growth regions that are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of estimating rice biophysical parameters using multitemporal RADARSAT-2 images, and to develop the estimation models. Three RADARSAT-2 images were acquired during the critical rice growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ0) or their ratios and the rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the VV and VH σ0 ratio (σ0VV/σ0VH) throughout all growth stages. Regression models were developed between the biophysical parameters and σ0VV/σ0VH. The results suggest that RADARSAT-2 data have great potential for rice biophysical parameter estimation and timely rice growth monitoring.

  7. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainties in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother is used to estimate the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

  8. PARAMETER ESTIMATION METHODOLOGY FOR NONLINEAR SYSTEMS: APPLICATION TO INDUCTION MOTOR

    Institute of Scientific and Technical Information of China (English)

    G.KENNE; F.FLORET; H.NKWAWO; F.LAMNABHI-LAGARRIGUE

    2005-01-01

    This paper deals with on-line state and parameter estimation of a reasonably large class of nonlinear continuous-time systems using a step-by-step sliding mode observer approach. The method proposed can also be used for adaptation to parameters that vary with time. The other interesting feature of the method is that it is easily implementable in real-time. The efficiency of this technique is demonstrated via the on-line estimation of the electrical parameters and rotor flux of an induction motor. This application is based on the standard model of the induction motor expressed in rotor coordinates with the stator current and voltage as well as the rotor speed assumed to be measurable. Real-time implementation results are then reported and the ability of the algorithm to rapidly estimate the motor parameters is demonstrated. These results show the robustness of this approach with respect to measurement noise, discretization effects, parameter uncertainties and modeling inaccuracies. Comparisons between the results obtained and those of the classical recursive least square algorithm are also presented. The real-time implementation results show that the proposed algorithm gives better performance than the recursive least square method in terms of the convergence rate and the robustness with respect to measurement noise.

  9. Modal parameters estimation using ant colony optimisation algorithm

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.

  10. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Science.gov (United States)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
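
    The estimation step described above amounts to a maximum-likelihood rate estimate from exponentially distributed waiting times at each sampled temperature, followed by a linear fit of ln k against 1/T. The sketch below uses synthetic waiting times and an assumed barrier, not temperature programmed molecular dynamics output.

```python
import numpy as np

rng = np.random.default_rng(8)
KB = 8.617e-5                                   # Boltzmann constant in eV/K

# Assumed Arrhenius parameters used to synthesize waiting times at a few temperatures.
prefactor, barrier = 1.0e12, 0.45               # 1/s and eV
temps = np.array([500.0, 600.0, 700.0, 800.0])  # K
rates_hat = []
for T in temps:
    k_true = prefactor * np.exp(-barrier / (KB * T))
    waits = rng.exponential(1.0 / k_true, size=800)   # waiting times for one pathway
    rates_hat.append(1.0 / waits.mean())              # MLE of an exponential rate
rates_hat = np.array(rates_hat)

# Arrhenius fit: ln k = ln nu - Ea / (kB * T) is linear in 1/T.
slope, intercept = np.polyfit(1.0 / temps, np.log(rates_hat), 1)
Ea_hat = -slope * KB
nu_hat = np.exp(intercept)

print(f"estimated barrier  : {Ea_hat:.3f} eV   (true {barrier})")
print(f"estimated prefactor: {nu_hat:.2e} 1/s (true {prefactor:.2e})")
```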

  11. Power Network Parameter Estimation Method Based on Data Mining Technology

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian

    2008-01-01

    The parameter values, which actually change with the circumstances, weather, load level, etc., greatly affect the result of state estimation. A new parameter estimation method based on data mining technology was proposed. A clustering method was used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing technology was applied to treat the isolated points, missing data and noisy data in the samples for the classified groups. The measurement data belonging to each classification were introduced into the linear regression equation in order to obtain the regression coefficients and actual parameters by the least squares method. A practical system demonstrates the high correctness, reliability and strong practicability of the proposed method.
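
    A schematic of the pipeline summarized above: cluster historical operating snapshots, then fit a linear regression within each cluster by least squares to obtain condition-dependent parameters. The synthetic SCADA-like data, the two-regime structure and the single-feature regression are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)

# Synthetic "SCADA" records: load level (feature) and a measured quantity whose
# linear dependence on load differs between two operating regimes (assumption).
load_a = rng.uniform(0.2, 0.5, 300)            # light-load regime
load_b = rng.uniform(0.7, 1.0, 300)            # heavy-load regime
meas_a = 1.5 * load_a + 0.10 + rng.normal(scale=0.01, size=300)
meas_b = 2.3 * load_b - 0.35 + rng.normal(scale=0.01, size=300)
load = np.concatenate([load_a, load_b])
meas = np.concatenate([meas_a, meas_b])

# Step 1: classify the historical data into operating-condition clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(load.reshape(-1, 1))

# Step 2: least-squares regression within each cluster gives regime-specific parameters.
for c in np.unique(labels):
    mask = labels == c
    A = np.column_stack([load[mask], np.ones(mask.sum())])
    coef, offset = np.linalg.lstsq(A, meas[mask], rcond=None)[0]
    print(f"cluster {c}: measured = {coef:.3f} * load + {offset:.3f}")
```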

  12. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  13. Being surveyed can change later behavior and related parameter estimates.

    Science.gov (United States)

    Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Karlan, Dean S; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit

    2011-02-01

    Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314

  14. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    Science.gov (United States)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

    Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration or spatially constant parameters resulting from the adoption of default or a-priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States(CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scale to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then

  15. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  16. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  17. Bias in regression coefficient estimates when assumptions for handling missing data are violated: a simulation study

    Directory of Open Access Journals (Sweden)

    Sander MJ van Kuijk

    2016-03-01

    Full Text Available Background: The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions of missing data mechanisms are violated. Methods: The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) with different missing data mechanisms (missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR)). The study focused on the point estimation of regression coefficients and standard errors. Results: When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in the case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion: The authors demonstrated that MI was only superior to CC analysis in the case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions that were made about the mechanism that caused missing data.
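
    A small simulation in the spirit of the scenario above: when missingness in a covariate depends on the outcome Y (MAR conditional on Y), the complete-case regression slope is typically attenuated, matching the underestimation reported. The data-generating values and the logistic missingness rule are assumptions, and proper multiple imputation with pooling across imputations is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(10)
n, true_slope, true_intercept = 5000, 1.0, 0.0

# Full data: Y depends linearly on X.
x = rng.normal(size=n)
y = true_intercept + true_slope * x + rng.normal(scale=1.0, size=n)

# MAR conditional on Y: X is more likely to be missing when Y is large (assumed rule).
p_missing = 1.0 / (1.0 + np.exp(-(y - 0.5)))
x_obs = np.where(rng.random(n) < p_missing, np.nan, x)

def ols_slope(xv, yv):
    """Slope of an ordinary least-squares fit of yv on xv with an intercept."""
    A = np.column_stack([np.ones(xv.size), xv])
    return np.linalg.lstsq(A, yv, rcond=None)[0][1]

complete = ~np.isnan(x_obs)
print(f"slope, full data         : {ols_slope(x, y):.3f}")
print(f"slope, complete cases    : {ols_slope(x_obs[complete], y[complete]):.3f}")
print(f"fraction of rows dropped : {1 - complete.mean():.2f}")
```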

  18. Global parameter estimation of the Cochlodinium polykrikoides model using bioassay data

    Institute of Scientific and Technical Information of China (English)

    CHO Hong-Yeon; PARK Kwang-Soon; KIM Sung

    2016-01-01

    Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on the aquacultures of the coastal seas of Korea and Japan. Information on their expected movement tracks and boundaries of influence is very useful and important for the effective establishment of a reduction plan. In general, the information is supported by a red-tide (a.k.a. algal bloom) model. The performance of the model is highly dependent on the accuracy of the parameters, which are the coefficients of functions approximating the biological growth and loss patterns of C. polykrikoides. These parameters have been estimated using bioassay data composed of growth-limiting factor and net growth rate value pairs. In the case of C. polykrikoides, the parameters differ from each other according to the data used, because the bioassay data are sufficient compared to those for other algal species. The parameters estimated by one specific dataset can be viewed as locally optimized because they are adjusted only by that dataset. In cases where another data set is used, the estimation error might be considerable. In this study, the parameters are estimated by all available data sets rather than only one specific data set and thus can be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the values of the experimental and estimated rates. Based on quantitative error analysis, the root-mean squared errors of the global parameters show smaller values, approximately 25%–50%, than the values of the local parameters. In addition, bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using the in-situ monitoring data is highly required. As opposed to the bioassay

  19. Concurrent learning for parameter estimation using dynamic state-derivative estimators

    OpenAIRE

    Kamalapurkar, Rushikesh; Reish, Ben; Chowdhary, Girish; Dixon, Warren E.

    2015-01-01

    A concurrent learning (CL)-based parameter estimator is developed to identify the unknown parameters in a linearly parameterized uncertain control-affine nonlinear system. Unlike state-of-the-art CL techniques that assume knowledge of the state-derivative or rely on numerical smoothing, CL is implemented using a dynamic state-derivative estimator. A novel purging algorithm is introduced to discard possibly erroneous data recorded during the transient phase for concurrent learning. Since purgi...

  20. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    Science.gov (United States)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.

  1. Multipath Parameter Estimation from OFDM Signals in Mobile Channels

    CERN Document Server

    Letzepis, Nick; Haley, David

    2010-01-01

    We study multipath parameter estimation from orthogonal frequency division multiplex signals transmitted over doubly dispersive mobile radio channels. We are interested in cases where the transmission is long enough to suffer time selectivity, but short enough such that the time variation can be accurately modeled as depending only on per-tap linear phase variations due to Doppler effects. We therefore concentrate on the estimation of the complex gain, delay and Doppler offset of each tap of the multipath channel impulse response. We show that the frequency domain channel coefficients for an entire packet can be expressed as the superimposition of two-dimensional complex sinusoids. The maximum likelihood estimate requires solution of a multidimensional non-linear least squares problem, which is computationally infeasible in practice. We therefore propose a low complexity suboptimal solution based on iterative successive and parallel cancellation. First, initial delay/Doppler estimates are obtained via success...

  2. Non-sedating antihistamine drugs and cardiac arrhythmias -- biased risk estimates from spontaneous reporting systems?

    DEFF Research Database (Denmark)

    De Bruin, M L; van Puijenbroek, E P; Egberts, A C G;

    2002-01-01

    AIMS: This study used spontaneous reports of adverse events to estimate the risk for developing cardiac arrhythmias due to the systemic use of non-sedating antihistamine drugs, and compared the risk estimate before and after the regulatory action to recall the over-the-counter status of some of these drugs. Before the regulatory decision, the risk estimate was not significantly higher than 1 (OR 1.37 [95% CI: 0.85, 2.23]), whereas the risk estimate calculated after the governmental decision did significantly differ from 1 (OR 4.19 [95% CI: 2.49, 7.05]). CONCLUSIONS: Our data suggest that non-sedating antihistamines might have an increased risk for inducing arrhythmias. Our findings, however, strongly suggest that the increased risk identified can at least partly be explained by reporting bias as a result of publications about and mass media attention for antihistamine-induced arrhythmias.
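
    For reference, odds ratios of the kind quoted above can be computed from a 2x2 table of spontaneous-report counts; the sketch below uses Woolf's approximation for the 95% confidence interval and entirely made-up counts, not the study's data.

    ```python
    # Sketch: reporting odds ratio (ROR) with a 95% CI from a 2x2 table of
    # spontaneous reports. The counts are fabricated for illustration.
    import math

    def reporting_odds_ratio(a, b, c, d):
        """a: drug & arrhythmia, b: drug & other event,
           c: other drug & arrhythmia, d: other drug & other event."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's method
        lo = math.exp(math.log(or_) - 1.96 * se_log)
        hi = math.exp(math.log(or_) + 1.96 * se_log)
        return or_, lo, hi

    or_, lo, hi = reporting_odds_ratio(a=12, b=340, c=90, d=10200)
    print(f"OR {or_:.2f} [95% CI: {lo:.2f}, {hi:.2f}]")
    ```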

  3. Cosmological parameter estimation using Particle Swarm Optimization (PSO)

    CERN Document Server

    Prasad, Jayanti

    2011-01-01

    Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows, at most, linearly instead of exponentially (as in grid-based approaches) with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior) from which one- and two-dimensional probability distributions, best-fit (average) values of parameters and then error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the field of engineering and artificial intelligence, for cosmo...
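
    A minimal particle swarm optimizer of the type described, applied to a toy two-parameter log-likelihood; the swarm settings are generic textbook values rather than the paper's, and the Gaussian target stands in for a real cosmological likelihood.

    ```python
    # Sketch: a minimal particle swarm optimizer maximizing a log-likelihood.
    # The 2-D Gaussian "likelihood" and swarm settings are illustrative.
    import numpy as np

    def log_like(theta):                      # toy target: peak at (0.3, 0.7)
        return -0.5 * np.sum(((theta - np.array([0.3, 0.7])) / 0.05) ** 2)

    rng = np.random.default_rng(2)
    n_particles, n_dim, n_iter = 30, 2, 200
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights

    x = rng.uniform(0.0, 1.0, (n_particles, n_dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([log_like(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([log_like(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()

    print("best-fit parameters:", np.round(gbest, 4))
    ```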

  4. Anisotropic parameter estimation using velocity variation with offset analysis

    Energy Technology Data Exchange (ETDEWEB)

    Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A. [Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesa 10, Bandung, 40132 (Indonesia)

    2013-09-09

    Seismic anisotropy is defined as velocity dependent upon angle or offset. Knowledge about the anisotropy effect on seismic data is important in amplitude analysis, the stacking process and time-to-depth conversion. Due to this anisotropic effect, reflectors cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters, ε and δ, using two proposed methods. The first method is called velocity variation with offset (VVO) and is based on a simplification of the weak anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods. Meanwhile, the inversion-based method gives a better estimate of ε. This study shows that the estimation of anisotropic parameters relies on the accuracy of the normal moveout velocity, residual moveout and offset-to-angle transformation.

  5. Estimation of common cause failure parameters with periodic tests

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)

    2009-04-15

    In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test schemes. Classically, the testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived in a way consistent with US Technical Specifications constraints that do not apply to the French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF implies the development of methodologies that integrate the specificities of the testing schemes. This paper aims to formally propose a solution for the estimation of CCF parameters given two distinct difficulties, respectively related to a mixed testing scheme and to consistency with EDF's specific practices inducing systematic non-simultaneity of the observed failures in a staggered testing scheme.

  6. Informed spectral analysis: audio signal parameter estimation using side information

    Science.gov (United States)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromised approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  7. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.
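
    A simplified sketch in the spirit of this approach: treat the occurrence counts of observed species as a zero-truncated Poisson sample, estimate the sampling rate by maximum likelihood, and correct the observed richness for species that were never sampled. The counts are fabricated for illustration, and the sketch omits TRiPS's handling of stage durations and per-clade rates.

    ```python
    # Sketch in the spirit of TRiPS: estimate a per-species Poisson sampling
    # rate from occurrence counts of *observed* species (a zero-truncated
    # Poisson), then correct observed richness for species never sampled.
    # The occurrence counts below are fabricated for illustration.
    import numpy as np
    from scipy.optimize import brentq

    counts = np.array([1, 1, 1, 2, 1, 3, 1, 2, 4, 1, 1, 2, 6, 1, 2])  # hypothetical
    m = counts.mean()

    # MLE of lambda for a zero-truncated Poisson: lambda / (1 - exp(-lambda)) = mean
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - m, 1e-9, 50.0)
    p_detect = 1.0 - np.exp(-lam)         # probability a species is sampled at least once
    n_obs = counts.size
    richness = n_obs / p_detect           # binomial correction of observed richness

    print(f"sampling rate lambda ~ {lam:.2f}")
    print(f"detection probability ~ {p_detect:.2f}")
    print(f"estimated true richness ~ {richness:.1f} (observed {n_obs})")
    ```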

  8. Estimation of atmospheric parameters from time-lapse imagery

    Science.gov (United States)

    McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.

    2016-05-01

    A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure function constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter.) The weighting functions derived for pixel sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed that this approach is somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
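
    A schematic version of the pseudo-inverse combination described above: given weighting functions for several patch sizes, solve for the linear combination that best matches the target r0 weighting in the least-squares sense. The power-law shapes used here are assumptions standing in for the derived kernels.

    ```python
    # Sketch: combining tilt-variance measurements from several patch sizes via
    # the Moore-Penrose pseudo-inverse, so that the combined path weighting
    # approximates the r0 weighting in a least-squares sense. The power-law
    # shapes below are schematic assumptions, not the paper's derived kernels.
    import numpy as np

    z = np.linspace(0.0, 1.0, 200)             # normalized path position: 0 = camera, 1 = object
    patch_sizes = np.array([2, 4, 8, 16, 32])  # tracked patch sizes in pixels (hypothetical)

    # Each patch size has a weighting that peaks at the camera and decays toward
    # the object; larger patches decay faster (schematic shapes).
    W = np.array([(1.0 - z) ** (1.0 + 0.3 * s) for s in patch_sizes])
    w_target = (1.0 - z) ** (5.0 / 3.0)        # schematic spherical-wave r0 weighting

    # Least-squares optimal coefficients: coeffs @ W approximates w_target.
    coeffs = np.linalg.pinv(W.T) @ w_target
    combined = coeffs @ W
    rms_mismatch = np.sqrt(np.mean((combined - w_target) ** 2))

    print("combination weights:", np.round(coeffs, 3))
    print(f"RMS mismatch to target weighting: {rms_mismatch:.3e}")
    # The same coefficients applied to the per-patch measurements (each an
    # integral of Cn2 against its weighting) give the r0-related estimate.
    ```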

  9. Terrain mechanical parameters online estimation for lunar rovers

    Science.gov (United States)

    Liu, Bing; Cui, Pingyuan; Ju, Hehua

    2007-11-01

    This paper presents a new method for terrain mechanical parameter estimation for a wheeled lunar rover. First, after deducing the detailed distribution expressions of normal stress and shear stress at the wheel-terrain interface, the force/torque balance equations of the drive wheel for computing terrain mechanical parameters are derived by analyzing the rigid drive wheel of a lunar rover moving at uniform speed over deformable terrain. Then a two-point Gauss-Legendre numerical integration method is used to simplify the balance equations; after simplification and rearrangement, the resulting model is composed of three non-linear equations. Finally, Newton's iterative method and the steepest descent method are combined to solve the non-linear equations, and the outputs of on-board virtual sensors are used to compute the key terrain mechanical parameters, i.e., the internal friction angle and the pressure-sinkage parameters. Simulation results show correctness under high noise disturbance and effectiveness with low computational complexity, which allows a lunar rover to estimate terrain mechanical parameters online.

  10. Parameter estimation for stiff equations of biosystems using radial basis function networks

    Directory of Open Access Journals (Sweden)

    Sugimoto Masahiro

    2006-04-01

    Full Text Available Abstract Background The modeling of dynamic systems requires estimating kinetic parameters from experimentally measured time-courses. Conventional global optimization methods used for parameter estimation, e.g. genetic algorithms (GA), consume enormous computational time because they require iterative numerical integrations for differential equations. When the target model is stiff, the computational time for reaching a solution increases further. Results In an attempt to solve this problem, we explored a learning technique that uses radial basis function networks (RBFN) to achieve a parameter estimation for biochemical models. RBFN reduce the number of numerical integrations by replacing derivatives with slopes derived from the distribution of searching points. To introduce a slight search bias, we implemented additional data selection using a GA that searches data-sparse areas at low computational cost. In addition, we adopted logarithmic transformation that smoothes the fitness surface to obtain a solution simply. We conducted numerical experiments to validate our methods and compared the results with those obtained by GA. We found that the calculation time decreased by more than 50% and the convergence rate increased from 60% to 90%. Conclusion In this work, our RBFN technique was effective for parameter optimization of stiff biochemical models.
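
    A rough sketch of the surrogate idea: evaluate the expensive objective at scattered searching points, fit a radial basis function interpolant, and search the cheap surrogate instead. The two-parameter objective below stands in for a stiff biochemical model, and the simple local search replaces the paper's GA-based data selection.

    ```python
    # Sketch: replacing an expensive objective with a radial-basis-function
    # surrogate during parameter search. The objective and sampling scheme are
    # illustrative assumptions, not the paper's RBFN/GA algorithm.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_objective(theta):           # pretend this needs a stiff ODE solve
        k1, k2 = theta
        return (k1 - 0.8) ** 2 + 10.0 * (k2 - 0.2) ** 2

    rng = np.random.default_rng(3)
    samples = rng.uniform(0.0, 1.0, size=(60, 2))      # searching points
    values = np.array([expensive_objective(s) for s in samples])

    surrogate = RBFInterpolator(samples, values, kernel="thin_plate_spline")

    res = minimize(lambda t: surrogate(t.reshape(1, -1))[0],
                   x0=np.array([0.5, 0.5]), method="Nelder-Mead")
    print("surrogate minimum:", np.round(res.x, 3))
    print("true objective there:", expensive_objective(res.x))
    ```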

  11. J-A Hysteresis Model Parameters Estimation using GA

    Directory of Open Access Journals (Sweden)

    Bogomir Zidaric

    2005-01-01

    Full Text Available This paper presents the Jiles and Atherton (J-A) hysteresis model parameter estimation for soft magnetic composite (SMC) material. The calculation of Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution of a problem in a wide area of possible solutions is uncertain. A new approach in the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm built into another genetic algorithm.

  12. On an algebraic method for derivatives estimation and parameter estimation for partial derivatives systems

    OpenAIRE

    Ushirobira, Rosane; Korporal, Anja; PERRUQUETTI, Wilfrid

    2014-01-01

    International audience — In this communication, we discuss two estimation problems dealing with partial derivatives systems. Namely, estimating partial derivatives of a multivariate noisy signal and identifying parameters of partial differential equations. The multivariate noisy signal is expressed as a truncated Taylor expression in a small time interval. An algebraic method can then be used to estimate its partial derivatives in the operational domain. The same approach applies for the ...

  13. Probabilistic estimation of the constitutive parameters of polymers

    Directory of Open Access Journals (Sweden)

    Siviour C.R.

    2012-08-01

    Full Text Available The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.

  14. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
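
    As a minimal stand-in for such a calibration, the sketch below fits the transmission and recovery rates of an SIR model to synthetic weekly incidence by least squares; the full framework described above would replace this point estimate with posterior sampling and explicit uncertainty handling.

    ```python
    # Sketch: fitting an SIR-like transmission model to incidence data by least
    # squares. The population size, rates and "observed" incidence are synthetic
    # assumptions, not the paper's data or its Bayesian machinery.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    N = 1e6                                    # population size (assumed)

    def sir(y, t, beta, gamma):
        s, i, r = y
        return [-beta * s * i / N, beta * s * i / N - gamma * i, gamma * i]

    def weekly_incidence(params, weeks):
        beta, gamma = params
        y0 = [N - 10.0, 10.0, 0.0]
        sol = odeint(sir, y0, weeks, args=(beta, gamma))
        return -np.diff(sol[:, 0])             # new infections per week = drop in S

    weeks = np.arange(0, 31, 1.0)
    rng = np.random.default_rng(4)
    truth = (1.6, 0.9)                          # per-week rates (assumed)
    data = weekly_incidence(truth, weeks) * rng.lognormal(0.0, 0.1, size=weeks.size - 1)

    fit = least_squares(lambda p: weekly_incidence(p, weeks) - data,
                        x0=np.array([1.0, 0.5]), bounds=([0.1, 0.1], [5.0, 5.0]))
    print("estimated (beta, gamma):", np.round(fit.x, 3))
    ```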

  15. CosmoSIS: A System for MC Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zuntz, Joe [Manchester U.]; Paterno, Marc [Fermilab]; Jennings, Elise [Chicago U., EFI]; Rudd, Douglas [U. Chicago]; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.]; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.]; Bridle, Sarah [Manchester U.]; Sehrish, Saba [Fermilab]; Kowalkowski, James [Fermilab]

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  16. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time to event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and it can be used for constructing asymptotic confidence intervals.
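
    A hedged sketch of the likelihood machinery: maximum likelihood for the Lomax distribution when some lifetimes are censored. For simplicity it uses plain Type-I censoring at a fixed time rather than the hybrid scheme, and simulated data.

    ```python
    # Sketch: maximum likelihood for the Lomax distribution with censored
    # lifetimes. Plain Type-I censoring at time T is used instead of the
    # paper's hybrid scheme, and the data are simulated.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lomax

    alpha_true, lam_true, T = 2.0, 10.0, 15.0
    x = lomax.rvs(c=alpha_true, scale=lam_true, size=200, random_state=42)
    observed = x <= T                           # uncensored failures
    t = np.where(observed, x, T)                # censored units contribute T

    def neg_log_like(log_params):
        a, lam = np.exp(log_params)             # keep both parameters positive
        log_f = np.log(a / lam) - (a + 1.0) * np.log1p(t / lam)   # density
        log_s = -a * np.log1p(t / lam)                            # survival
        return -(np.sum(log_f[observed]) + np.sum(log_s[~observed]))

    res = minimize(neg_log_like, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
    print("MLE (alpha, lambda):", np.round(np.exp(res.x), 3))
    ```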

  17. Inter-system biases estimation in multi-GNSS relative positioning with GPS and Galileo

    Science.gov (United States)

    Deprez, Cecile; Warnant, Rene

    2016-04-01

    The recent increase in the number of Global Navigation Satellite Systems (GNSS) opens new perspectives in the field of high precision positioning. Particularly, the European Galileo program has experienced major progress in 2015 with the launch of 6 satellites belonging to the new Full Operational Capability (FOC) generation. Associated with the ongoing GPS modernization, many more frequencies and satellites are now available. Therefore, multi-GNSS relative positioning based on GPS and Galileo overlapping frequencies should entail better accuracy and reliability in position estimations. However, the differences between satellite systems induce inter-system biases (ISBs) inside the multi-GNSS equations of observation. Once these biases are estimated and removed from the model, a solution involving a unique pivot satellite for the two considered constellations can be obtained. Such an approach implies that the addition of even one single Galileo satellite to the GPS-only model will strengthen it. The combined use of L1 and L5 from GPS with E1 and E5a from Galileo in zero baseline double differences (ZB DD) based on a unique pivot satellite is employed to resolve ISBs. This model removes all the satellite- and receiver-dependent error sources by differencing, and the zero baseline configuration allows the elimination of atmospheric and multipath effects. An analysis of the long-term stability of ISBs is conducted on various pairs of receivers over large time spans. The possible influence of temperature variations inside the receivers on ISB values is also investigated. Our study is based on the 5 multi-GNSS receivers (2 Septentrio PolaRx4, 1 Septentrio PolaRxS and 2 Trimble NetR9) installed on the roof of our building in Liege. The estimated ISBs are then used as corrections in the multi-GNSS observation model and the resulting accuracy of multi-GNSS positioning is compared to GPS and Galileo standalone solutions.

  18. On optimal detection and estimation of the FCN parameters

    Science.gov (United States)

    Yatskiv, Y.

    2009-09-01

    A statistical approach for detection and estimation of parameters of short-term quasi-periodic processes was used in order to investigate the Free Core Nutation (FCN) signal in the Celestial Pole Offset (CPO). The results show that this signal is very unstable and that it disappeared in the year 2000. The amplitude of the oscillation with a period of about 435 days is larger for dX as compared with that for dY.

  19. Estimation of Secondary Meteorological Parameters Using Mining Data Techniques

    OpenAIRE

    Rosabel Zerquera Díaz; Ayleen Morales Montejo; Gil Cruz Lemus; Alejandro Rosete Suárez

    2010-01-01

    This work develops a process of Knowledge Discovery in Databases (KDD) at the Higher Polytechnic Institute José Antonio Echeverría for the group of Environmental Research in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA) in order to obtain a data model to estimate the behavior of secondary weather parameters from surface data. It describes some aspects of Data Mining and its application in the meteorological environment, also selects and describes...

  20. Iterative importance sampling algorithms for parameter estimation problems

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one approximates a posterior distribution over uncertain parameters defined jointly by a prior distribution, a numerical model, and noisy data. Typically, Markov Chain Monte Carlo (MCMC) is used for the numerical solution of such problems. An alternative to MCMC is importance sampling, where one draws samples from a proposal distribution, and attaches weights to each sample to account for the fact that the proposal distribution is not the posterior distribut...
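
    A minimal self-normalized importance sampler for a one-parameter posterior; the Gaussian prior, likelihood and proposal are illustrative assumptions standing in for a numerical model with noisy data.

    ```python
    # Sketch: self-normalized importance sampling for a posterior expectation.
    # Prior, likelihood, proposal and data are all toy assumptions.
    import numpy as np

    rng = np.random.default_rng(6)
    data = rng.normal(1.5, 0.3, size=20)        # synthetic observations

    def log_post(theta):
        # prior: theta ~ N(0, 2); likelihood: data ~ N(theta, 0.3); unnormalized
        lp = -0.5 * (theta / 2.0) ** 2
        ll = -0.5 * np.sum((data[None, :] - theta[:, None]) ** 2, axis=1) / 0.3 ** 2
        return lp + ll

    # Proposal: broad Gaussian around the sample mean of the data.
    proposal_mean, proposal_sd = data.mean(), 0.5
    theta = rng.normal(proposal_mean, proposal_sd, size=20000)
    log_q = -0.5 * ((theta - proposal_mean) / proposal_sd) ** 2 - np.log(proposal_sd)

    log_w = log_post(theta) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                # self-normalized weights

    post_mean = np.sum(w * theta)
    ess = 1.0 / np.sum(w ** 2)                  # effective sample size
    print(f"posterior mean ~ {post_mean:.3f}, effective sample size ~ {ess:.0f}")
    ```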

  1. Estimation of Parameters in Mean-Reverting Stochastic Systems

    OpenAIRE

    2014-01-01

    A stochastic differential equation (SDE) is a very important mathematical tool to describe complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as physical sciences. Since an SDE can generate unlimited numbers of trajectories, it is difficult to estimate model parameters based on experimental observations which may represent only one trajectory...
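
    For the mean-reverting (Ornstein-Uhlenbeck) case, parameters can be recovered from a single discretely observed trajectory through the exact AR(1) form of the transition density; the sketch below simulates such a trajectory and inverts the regression. All parameter values are illustrative assumptions.

    ```python
    # Sketch: estimating the parameters of a mean-reverting (Ornstein-Uhlenbeck)
    # SDE, dX = theta*(mu - X) dt + sigma dW, from one discretely observed
    # trajectory, using its exact AR(1) discretization.
    import numpy as np

    rng = np.random.default_rng(7)
    theta, mu, sigma, dt, n = 1.5, 2.0, 0.4, 0.01, 20000

    # Simulate one trajectory with the exact discretization.
    x = np.empty(n)
    x[0] = 0.0
    b = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1.0 - b * b) / (2.0 * theta))
    for k in range(n - 1):
        x[k + 1] = mu + (x[k] - mu) * b + sd * rng.normal()

    # OLS regression of x[k+1] on x[k]: x_{k+1} = a + b*x_k + eps.
    X, Y = x[:-1], x[1:]
    b_hat = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
    a_hat = Y.mean() - b_hat * X.mean()
    resid_var = np.var(Y - (a_hat + b_hat * X))

    theta_hat = -np.log(b_hat) / dt
    mu_hat = a_hat / (1.0 - b_hat)
    sigma_hat = np.sqrt(resid_var * 2.0 * theta_hat / (1.0 - b_hat ** 2))
    print(f"theta ~ {theta_hat:.2f}, mu ~ {mu_hat:.2f}, sigma ~ {sigma_hat:.2f}")
    ```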

  2. Parameter estimation for fractional birth and fractional death processes

    OpenAIRE

    Cahoy, Dexter O.; Polito, Federico

    2013-01-01

    The fractional birth and the fractional death processes are more desirable in practice than their classical counterparts as they naturally provide greater flexibility in modeling growing and decreasing systems. In this paper, we propose formal parameter estimation procedures for the fractional Yule, the fractional linear death, and the fractional sublinear death processes. The methods use all available data possible, are computationally simple and asymptotically unbiased. The procedures explo...

  3. Estimation of water diffusivity parameters on grape dynamic drying

    OpenAIRE

    Ramos, Inês N.; Miranda, João M.R.; Brandão, Teresa R. S.; Cristina L.M. Silva

    2010-01-01

    A computer program was developed, aiming at estimating water diffusivity parameters in a dynamic drying process with grapes, assessing the predictability of corresponding non-isothermal drying curves. It numerically solves Fick’s second law for a sphere, by explicit finite differences, in a shrinking system, with anisotropic properties and changing boundary conditions. Experiments were performed in a pilot convective dryer, with simulated air conditions observed in a solar dryer, for modellin...
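
    A stripped-down sketch of the numerical core: an explicit finite-difference solution of Fick's second law in a sphere with a fixed surface moisture, using the substitution u = r*C. The shrinking geometry, anisotropy and time-varying boundary conditions of the grape model are not included, and all values are assumed for illustration.

    ```python
    # Sketch: explicit finite-difference solution of Fick's second law in a
    # sphere (fixed radius, constant diffusivity, surface held at equilibrium
    # moisture). The substitution u = r*C removes the 1/r terms. Values are
    # illustrative assumptions, not the paper's grape data.
    import numpy as np

    D = 3e-10            # water diffusivity [m^2/s] (assumed)
    R = 5e-3             # sphere radius [m] (assumed)
    C0, Ce = 0.80, 0.15  # initial and surface-equilibrium moisture (assumed)

    nr = 101
    r = np.linspace(0.0, R, nr)
    dr = r[1] - r[0]
    dt = 0.4 * dr**2 / D                     # within the explicit stability limit of 0.5
    u = r * C0                               # u = r*C; u(0) = 0 automatically
    u[-1] = R * Ce                           # Dirichlet surface condition

    hours = 24.0
    steps = int(hours * 3600.0 / dt)
    for _ in range(steps):
        u[1:-1] += D * dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    C = np.empty_like(u)
    C[1:] = u[1:] / r[1:]
    C[0] = C[1]                              # symmetry at the centre
    f = C * r**2                             # volume-average moisture via trapezoids
    mean_moisture = 3.0 * np.sum(0.5 * (f[1:] + f[:-1]) * dr) / R**3
    print(f"average moisture after {hours:.0f} h: {mean_moisture:.3f}")
    ```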

  4. Multi-criteria parameter estimation for the unified land model

    Directory of Open Access Journals (Sweden)

    B. Livneh

    2012-04-01

    Full Text Available We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the Continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10^5 km^2) river basins and 250 smaller-scale (<10^4 km^2) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting model, is the basis for these experiments. Calibrations were made using each of the criteria individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations lead to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  5. Multi-criteria parameter estimation for the Unified Land Model

    Directory of Open Access Journals (Sweden)

    B. Livneh

    2012-08-01

    Full Text Available We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10^5 km^2) river basins and 250 smaller-scale (<10^4 km^2) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the data sets individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations lead to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  6. Estimation of stellar atmospheric parameters from SDSS/SEGUE spectra

    Science.gov (United States)

    Re Fiorentin, P.; Bailer-Jones, C. A. L.; Lee, Y. S.; Beers, T. C.; Sivarani, T.; Wilhelm, R.; Allende Prieto, C.; Norris, J. E.

    2007-06-01

    We present techniques for the estimation of stellar atmospheric parameters (T_eff, log g, [Fe/H]) for stars from the SDSS/SEGUE survey. The atmospheric parameters are derived from the observed medium-resolution (R = 2000) stellar spectra using non-linear regression models trained either on (1) pre-classified observed data or (2) synthetic stellar spectra. In the first case we use our models to automate and generalize parametrization produced by a preliminary version of the SDSS/SEGUE Spectroscopic Parameter Pipeline (SSPP). In the second case we directly model the mapping between synthetic spectra (derived from Kurucz model atmospheres) and the atmospheric parameters, independently of any intermediate estimates. After training, we apply our models to various samples of SDSS spectra to derive atmospheric parameters, and compare our results with those obtained previously by the SSPP for the same samples. We obtain consistency between the two approaches, with RMS deviations on the order of 150 K in T_eff, 0.35 dex in log g, and 0.22 dex in [Fe/H]. The models are applied to pre-processed spectra, either via Principal Component Analysis (PCA) or a Wavelength Range Selection (WRS) method, which employs a subset of the full 3850-9000Å spectral range. This is both for computational reasons (robustness and speed), and because it delivers higher accuracy (better generalization of what the models have learned). Broadly speaking, the PCA is demonstrated to deliver more accurate atmospheric parameters when the training data are the actual SDSS spectra with previously estimated parameters, whereas WRS appears superior for the estimation of log g via synthetic templates, especially for lower signal-to-noise spectra. From a subsample of some 19 000 stars with previous determinations of the atmospheric parameters, the accuracies of our predictions (mean absolute errors) for each parameter are T_eff to 170/170 K, log g to 0.36/0.45 dex, and [Fe/H] to 0.19/0.26 dex, for methods (1

  7. Estimating hydraulic parameters when poroelastic effects are significant.

    Science.gov (United States)

    Berg, Steven J; Hsieh, Paul A; Illman, Walter A

    2011-01-01

    For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. PMID:21204832

  8. Estimating Hydraulic Parameters When Poroelastic Effects Are Significant

    Science.gov (United States)

    Berg, S.J.; Hsieh, P.A.; Illman, W.A.

    2011-01-01

    For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.

  9. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki Kimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
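
    A small sketch of combining a gradient-based local search with a stochastic process to escape local minima, here using SciPy's basin-hopping wrapper around L-BFGS-B on an SSE objective with several local minima; the two-parameter model and the data are illustrative assumptions.

    ```python
    # Sketch: minimizing the sum of squared errors (SSE) with a local gradient
    # search wrapped in a stochastic restart scheme (basin hopping), so the
    # search can escape local minima. Model and data are illustrative.
    import numpy as np
    from scipy.optimize import basinhopping

    rng = np.random.default_rng(8)
    t = np.linspace(0.0, 10.0, 50)
    y_obs = np.sin(1.3 * t) * np.exp(-0.2 * t) + rng.normal(0, 0.05, t.size)

    def sse(params):
        freq, decay = params
        y_model = np.sin(freq * t) * np.exp(-decay * t)
        return np.sum((y_model - y_obs) ** 2)

    # Oscillatory models like this one have many local SSE minima in `freq`,
    # so a purely local gradient search from a poor start can get trapped.
    result = basinhopping(sse, x0=[0.3, 1.0], niter=200,
                          minimizer_kwargs={"method": "L-BFGS-B"})
    print("estimated (freq, decay):", np.round(result.x, 3))
    print("SSE at optimum:", round(result.fun, 4))
    ```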

  10. Estimation of multiexponential fluorescence decay parameters using compressive sensing.

    Science.gov (United States)

    Yang, Sejung; Lee, Joohyun; Lee, Youmin; Lee, Minyung; Lee, Byung-Uk

    2015-09-01

    Fluorescence lifetime imaging microscopy (FLIM) is a microscopic imaging technique to present an image of fluorophore lifetimes. It circumvents the problems of typical imaging methods such as intensity attenuation from depth since a lifetime is independent of the excitation intensity or fluorophore concentration. The lifetime is estimated from the time sequence of photon counts observed with signal-dependent noise, which has a Poisson distribution. Conventional methods usually estimate single or biexponential decay parameters. However, a lifetime component has a distribution or width, because the lifetime depends on macromolecular conformation or inhomogeneity. We present a novel algorithm based on a sparse representation which can estimate the distribution of lifetime. We verify the enhanced performance through simulations and experiments.
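
    A simple stand-in for the sparse-representation idea: express the decay as a non-negative combination of exponentials on a dense lifetime grid and solve by non-negative least squares. This ignores the Poisson character of photon counting and is not the paper's compressive-sensing algorithm.

    ```python
    # Sketch: recovering a distribution of fluorescence lifetimes by expressing
    # the decay as a non-negative combination of exponentials on a lifetime
    # grid (non-negative least squares as a sparsity-friendly stand-in).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(9)
    t = np.linspace(0.0, 20.0, 400)               # ns
    true = [(0.7, 1.5), (0.3, 6.0)]               # (amplitude, lifetime ns), assumed
    decay = sum(a * np.exp(-t / tau) for a, tau in true)
    counts = rng.poisson(2000.0 * decay)          # photon counts

    taus = np.geomspace(0.2, 15.0, 120)           # lifetime grid
    A = np.exp(-t[:, None] / taus[None, :])       # dictionary of exponentials
    x, _ = nnls(A, counts / 2000.0)

    top = np.argsort(x)[-3:][::-1]
    for i in top:
        print(f"tau ~ {taus[i]:5.2f} ns, amplitude ~ {x[i]:.3f}")
    ```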

  11. Learn-As-You-Go Acceleration of Cosmological Parameter Estimates

    CERN Document Server

    Aslanyan, Grigor; Price, Layne C

    2015-01-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of $\\Lambda$CDM posterior probabilities. The computation is significantly accelerated wit...

  12. Estimating demographic parameters using hidden process dynamic models.

    Science.gov (United States)

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775

  13. Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition

    Directory of Open Access Journals (Sweden)

    Yuxing Mao

    2014-06-01

    Full Text Available Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA). The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs) of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, the different numbers of CDRs of two candidate shapes are computed as representations of the shapes, respectively. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.

  14. Estimation of Medium Voltage Cable Parameters for PD Detection

    DEFF Research Database (Denmark)

    Villefrance, Rasmus; Holbøll, Joachim T.; Henriksen, Mogens

    1998-01-01

    Medium voltage cable characteristics have been determined with respect to the parameters having influence on the evaluation of results from PD-measurements on paper/oil and XLPE-cables. In particular, parameters essential for discharge quantification and location were measured. In order to relate a measured signal at the cable terminations to a specific PD-amplitude and location on the cable, the attenuation and the transmission speed of PD-pulses on the cable have to be known. Consequently, the main parameter to be determined is the complex propagation constant which consists of the attenuation and phase constants. A method to estimate this propagation constant, based on high frequency measurements, will be presented and will be applied to different cable types under different conditions. The influence of temperature and test voltage was investigated. The relevance of the results for cable...

  15. Parameter estimation in a spatial unit root autoregressive model

    CERN Document Server

    Baran, Sándor

    2011-01-01

    Spatial autoregressive model $X_{k,\ell}=\alpha X_{k-1,\ell}+\beta X_{k,\ell-1}+\gamma X_{k-1,\ell-1}+\epsilon_{k,\ell}$ is investigated in the unit root case, that is when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1)$, $(1,-1,1)$, $(-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
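
    A quick illustration of the least squares estimator for this model on a simulated lattice, using stable interior parameter values rather than the unit-root boundary case analysed in the paper.

    ```python
    # Sketch: least-squares estimation of the spatial autoregression
    # X[k,l] = alpha*X[k-1,l] + beta*X[k,l-1] + gamma*X[k-1,l-1] + noise
    # on a simulated lattice. The parameter values are illustrative and lie
    # inside the stability region, not on its boundary.
    import numpy as np

    rng = np.random.default_rng(10)
    alpha, beta, gamma = 0.4, 0.3, -0.2
    n = 200
    X = np.zeros((n, n))
    for k in range(1, n):
        for l in range(1, n):
            X[k, l] = (alpha * X[k - 1, l] + beta * X[k, l - 1]
                       + gamma * X[k - 1, l - 1] + rng.normal())

    # Stack the three lagged neighbours as regressors and solve by least squares.
    y = X[1:, 1:].ravel()
    Z = np.column_stack([X[:-1, 1:].ravel(),    # X[k-1, l]
                         X[1:, :-1].ravel(),    # X[k, l-1]
                         X[:-1, :-1].ravel()])  # X[k-1, l-1]
    est, *_ = np.linalg.lstsq(Z, y, rcond=None)
    print("estimated (alpha, beta, gamma):", np.round(est, 3))
    ```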

  16. Likelihood transform: making optimization and parameter estimation easier

    CERN Document Server

    Wang, Yan

    2014-01-01

    Parameterized optimization and parameter estimation is of great importance in almost every branch of modern science, technology and engineering. A practical issue in the problem is that when the parameter space is large and the available data is noisy, the geometry of the likelihood surface in the parameter space will be complicated. This makes searching and optimization algorithms computationally expensive, sometimes even beyond reach. In this paper, we define a likelihood transform which can make the structure of the likelihood surface much simpler, hence reducing the intrinsic complexity and easing optimization significantly. We demonstrate the properties of the likelihood transform by applying it to a simplified gravitational wave chirp signal search. For a signal with a signal-to-noise ratio of 20, the likelihood transform has made a deterministic template-based search possible for the first time, which turns out to be 1000 times more efficient than an exhaustive grid-based search. The method in principle can be a...

  17. Genes with minimal phylogenetic information are problematic for coalescent analyses when gene tree estimation is biased.

    Science.gov (United States)

    Xi, Zhenxiang; Liu, Liang; Davis, Charles C

    2015-11-01

    The development and application of coalescent methods are undergoing rapid changes. One little explored area that bears on the application of gene-tree-based coalescent methods to species tree estimation is gene informativeness. Here, we investigate the accuracy of these coalescent methods when genes have minimal phylogenetic information, including the implementation of the multilocus bootstrap approach. Using simulated DNA sequences, we demonstrate that genes with minimal phylogenetic information can produce unreliable gene trees (i.e., high error in gene tree estimation), which may in turn reduce the accuracy of species tree estimation using gene-tree-based coalescent methods. We demonstrate that this problem can be alleviated by sampling more genes, as is commonly done in large-scale phylogenomic analyses. This applies even when these genes are minimally informative. If gene tree estimation is biased, however, gene-tree-based coalescent analyses will produce inconsistent results, which cannot be remedied by increasing the number of genes. In this case, it is not the gene-tree-based coalescent methods that are flawed, but rather the input data (i.e., estimated gene trees). Along these lines, the commonly used program PhyML has a tendency to infer one particular bifurcating topology even though it is best represented as a polytomy. We additionally corroborate these findings by analyzing the 183-locus mammal data set assembled by McCormack et al. (2012) using ultra-conserved elements (UCEs) and flanking DNA. Lastly, we demonstrate that when employing the multilocus bootstrap approach on this 183-locus data set, there is no strong conflict between species trees estimated from concatenation and gene-tree-based coalescent analyses, as has been previously suggested by Gatesy and Springer (2014).

  18. Estimates of genetic parameters for fat yield in Murrah buffaloes

    Directory of Open Access Journals (Sweden)

    Manoj Kumar

    2016-03-01

    Full Text Available Aim: The present study was performed to investigate the effect of genetic and non-genetic factors affecting milk fat yield and to estimate genetic parameters of monthly test day fat yields (MTDFY) and lactation 305-day fat yield (L305FY) in Murrah buffaloes. Materials and Methods: Data on a total of 10381 MTDFY records comprising the first four lactations of 470 Murrah buffaloes calved from 1993 to 2014 were assessed. These buffaloes were sired by 75 bulls maintained in an organized farm at ICAR-National Dairy Research Institute, Karnal. A least squares maximum likelihood program was used to estimate genetic and non-genetic parameters. Heritability estimates were obtained using the paternal half-sib correlation method. Genetic and phenotypic correlations among MTDFY and 305-day fat yield were calculated from the analysis of variance and covariance matrix among sire groups. Results: The overall least squares mean of L305FY was found to be 175.74±4.12 kg. The least squares mean of overall MTDFY ranged from 3.33±0.14 kg (TD-11) to 7.06±0.17 kg (TD-3). The h2 estimate of L305FY was found to be 0.33±0.16 in this study. The estimates of phenotypic and genetic correlations between 305-day fat yield and different MTDFY ranged from 0.32 to 0.48 and 0.51 to 0.99, respectively. Conclusions: In this study, all the genetic and non-genetic factors except the age at first calving group significantly affected the traits under study. The estimates of phenotypic and genetic correlations of MTDFY with 305-day fat yield were generally higher for MTDFY-5 of lactation, suggesting that this TD yield could be used as the selection criterion for early evaluation and selection of Murrah buffaloes.

  19. Estimation and Bias Correction of Aerosol Abundance using Data-driven Machine Learning and Remote Sensing

    Science.gov (United States)

    Malakar, Nabin K.; Lary, D. L.; Moore, A.; Gencaga, D.; Roscoe, B.; Albayrak, Arif; Petrenko, Maksym; Wei, Jennifer

    2012-01-01

    Air quality information is increasingly becoming a public health concern, since some of the aerosol particles pose harmful effects to people's health. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. The comparison between the AOD measured from the ground-based Aerosol Robotic Network (AERONET) system and the satellite MODIS instruments at 550 nm shows that there is a bias between the two data products. We performed a comprehensive analysis exploring possible factors which may be contributing to the inter-instrumental bias between MODIS and AERONET. The analysis used several measured variables, including the MODIS AOD, as input in order to train a neural network in regression mode to predict the AERONET AOD values. This not only allowed us to obtain an estimate, but also allowed us to infer the optimal sets of variables that played an important role in the prediction. In addition, we applied machine learning to infer the global abundance of ground level PM2.5 from the AOD data and other ancillary satellite and meteorology products. This research is part of our goal to provide air quality information, which can also be useful for global epidemiology studies.
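
    A rough sketch of the regression-mode neural network idea: predict an AERONET-like AOD from a MODIS-like AOD plus ancillary variables, yielding a bias-corrected estimate. The predictors, their relationship and the data are synthetic assumptions, not the study's inputs.

    ```python
    # Sketch: a neural network trained in regression mode to map satellite AOD
    # plus ancillary variables onto ground-truth AOD. All data are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(11)
    n = 5000
    modis_aod = rng.gamma(2.0, 0.1, n)
    solar_zenith = rng.uniform(10, 70, n)
    surface_reflectance = rng.uniform(0.02, 0.3, n)
    # Hypothetical "truth": satellite AOD with an angle- and surface-dependent bias.
    aeronet_aod = modis_aod * (1.0 + 0.3 * surface_reflectance) \
                  - 0.02 * np.cos(np.radians(solar_zenith)) + rng.normal(0, 0.01, n)

    X = np.column_stack([modis_aod, solar_zenith, surface_reflectance])
    X_tr, X_te, y_tr, y_te = train_test_split(X, aeronet_aod, random_state=0)

    scaler = StandardScaler().fit(X_tr)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(scaler.transform(X_tr), y_tr)
    print("R^2 on held-out data:", round(model.score(scaler.transform(X_te), y_te), 3))
    ```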

  20. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  1. Are risk estimates biased in follow-up studies of psychosocial factors with low base-line participation?

    DEFF Research Database (Denmark)

    Kaerlev, Linda; Kolstad, Henrik A; Hansen, Ase Marie;

    2011-01-01

    Low participation in population-based follow-up studies addressing psychosocial risk factors may cause biased estimation of health risk but the issue has seldom been examined. We compared risk estimates for selected health outcomes among respondents and the entire source population....

  2. Quantifying lost information due to covariance matrix estimation in parameter inference

    CERN Document Server

    Sellentin, Elena

    2016-01-01

    Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing a Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit. We apply our results to DES Science Verification weak lensing data, detecting a 10% loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey, with about 10 nuisance parameters we find that 2900 simulations are sufficient to limit the systematically lost information to 1%, with an additional unc...

  3. Tracking Biases: An Update to the Validity and Reliability of Alcohol Retail Sales Data for Estimating Population Consumption in Scotland

    OpenAIRE

    Henderson, Audrey; Robinson, Mark; McAdams, Rachel; McCartney, Gerry; Beeston, Clare

    2015-01-01

    Purchase of the sales data was funded by the Scottish Government as part of the wider Monitoring and Evaluating Scotland's Alcohol Strategy portfolio of studies. Funding to pay the Open Access publication charges for this article was provided by NHS Health Scotland. Aims: To highlight the importance of monitoring biases when using retail sales data to estimate population alcohol consumption. Methods: Previously, we identified and where possible quantified sources of bias that may lead to u...

  4. Periodic orbits of hybrid systems and parameter estimation via AD

    International Nuclear Information System (INIS)

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impedes the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method (GM00, Phi03). Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance

  5. Periodic orbits of hybrid systems and parameter estimation via AD.

    Energy Technology Data Exchange (ETDEWEB)

    Guckenheimer, John. (Cornell University); Phipps, Eric Todd; Casey, Richard (INRIA Sophia-Antipolis)

    2004-07-01

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impedes the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
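
    As a rough illustration of the shooting idea described above (and not the authors' Taylor-series/automatic-differentiation implementation), the sketch below computes a periodic orbit of a smooth oscillator by driving the mismatch between the start and end of one period to zero with a generic root finder; the example system, initial guess, and tolerances are assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def van_der_pol(t, x, mu=1.0):
            return [x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]]

        def shooting_residual(z):
            """Periodic boundary-value residual for unknowns z = (x0, y0, T)."""
            x0, y0, T = z
            sol = solve_ivp(van_der_pol, (0.0, T), [x0, y0], rtol=1e-10, atol=1e-12)
            xT, yT = sol.y[:, -1]
            # Two periodicity conditions plus a phase condition (pin y0 to a section)
            return [xT - x0, yT - y0, y0]

        # Initial guess for a point on the orbit and for its period
        x_star, y_star, period = fsolve(shooting_residual, [2.0, 0.0, 6.6])
        print("periodic point:", (round(x_star, 4), round(y_star, 4)), "period:", round(period, 4))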

  6. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    Science.gov (United States)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication systems. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20 % of surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori to reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections to reduce the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding in the period from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective because it reduced ...
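
    To make the correction idea concrete, here is a toy sketch (not the Schleiss et al. 2013 model): a constant wet-antenna offset is subtracted from the measured path attenuation before a power-law attenuation-to-rain-rate relation is inverted; the offset and the coefficients a and b are placeholder assumptions, not values from this study.

        def rain_rate_from_attenuation(path_attenuation_db, path_length_km,
                                       wet_antenna_db=0.7, a=0.26, b=1.1):
            """Toy rain-rate retrieval for a microwave link.

            path_attenuation_db : measured rain-induced attenuation over the path [dB]
            wet_antenna_db      : assumed constant wet-antenna bias, subtracted first
            a, b                : placeholder power-law coefficients in k = a * R**b,
                                  with k the specific attenuation in dB/km
            """
            corrected = max(path_attenuation_db - wet_antenna_db, 0.0)
            k = corrected / path_length_km       # specific attenuation [dB/km]
            return (k / a) ** (1.0 / b)          # rain rate R [mm/h]

        # Example: 3.5 dB of rain-induced attenuation measured on a 1.85 km link
        print(round(rain_rate_from_attenuation(3.5, 1.85), 1), "mm/h")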

  7. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions
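
    The general fitting strategy described (a derivative-free simplex search with the diffusion coefficient constrained to stay positive) can be sketched as below. This is not the NEWBOX code: the plane-sheet release model, the synthetic data, and the use of SciPy's Nelder-Mead instead of Box's constrained simplex are all illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def fractional_release(t, D, thickness, n_terms=50):
            """Series solution for release from a plane sheet with both faces held at
            zero concentration (a standard textbook model, used here for illustration)."""
            n = np.arange(n_terms)
            coef = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2)
            rates = D * (2 * n + 1) ** 2 * np.pi ** 2 / thickness ** 2
            return 1.0 - np.sum(coef[None, :] * np.exp(-np.outer(t, rates)), axis=1)

        def sum_of_squares(log_D, t, f_obs, thickness):
            # Optimizing log(D) keeps the diffusion coefficient strictly positive
            return np.sum((fractional_release(t, np.exp(log_D[0]), thickness) - f_obs) ** 2)

        # Synthetic "measurements" from an assumed true D (cm^2/s) for a 0.1 cm sheet
        t = np.linspace(100.0, 5000.0, 20)
        f_obs = fractional_release(t, 1e-6, 0.1) + np.random.default_rng(1).normal(0.0, 0.005, t.size)

        fit = minimize(sum_of_squares, x0=[np.log(1e-7)], args=(t, f_obs, 0.1), method="Nelder-Mead")
        print("estimated D:", np.exp(fit.x[0]))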

  8. Trapping phenomenon of the parameter estimation in asymptotic quantum states

    Science.gov (United States)

    Berrada, K.

    2016-09-01

    In this paper, we study in detail the behavior of the precision of the parameter estimation in open quantum systems using the quantum Fisher information (QFI). In particular, we study the sensitivity of the estimation on a two-qubit system evolving under Kossakowski-type quantum dynamical semigroups of completely positive maps. In such an environment, the precision of the estimation can even persist asymptotically for different effects of the initial parameters. We find that the QFI can be resistant to the action of the environment with respect to the initial asymptotic states, and it can persist even in the asymptotic long-time regime. In addition, our results provide further evidence that the initial pure and separable mixed states of the input state may enhance quantum metrology. These features make quantum states in this kind of environment a good candidate for the implementation of different schemes of quantum optics and information with high precision. Finally, we show that this quantity may be proposed to detect the amount of the total quantum information that the whole state contains with respect to projective measurements.

  9. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization has been proposed. Pneumatic control valves are widely used in the process industry. The control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and it is the most severe problem in the control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to doing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, proves effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model with a known nonlinear structure and unknown parameters can be estimated.

  10. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Full Text Available Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxis accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentage of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies especially using the accelerometer placed on the upper arm where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometer showed lower accuracy with a MAPE for the stroke number of 10.8% and 13.4% and ICC of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
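
    For readers unfamiliar with the error metrics quoted, a tiny sketch of how MAE and MAPE are computed from estimated and criterion values; the stroke counts below are made-up numbers, not the study's data.

        import numpy as np

        def mae_mape(estimated, criterion):
            """Mean absolute error and mean absolute percentage error."""
            est = np.asarray(estimated, dtype=float)
            ref = np.asarray(criterion, dtype=float)
            abs_err = np.abs(est - ref)
            return abs_err.mean(), 100.0 * (abs_err / np.abs(ref)).mean()

        # Made-up stroke counts: accelerometer estimates vs. video-observation criterion
        mae, mape = mae_mape([98, 105, 87, 110, 95], [100, 102, 90, 118, 97])
        print(f"MAE = {mae:.1f} strokes, MAPE = {mape:.1f} %")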

  11. Matched-filtering and parameter estimation of ringdown waveforms

    CERN Document Server

    Berti, Emanuele; Cardoso, Vitor; Cavaglia, Marco

    2007-01-01

    Using recent results from numerical relativity simulations of non-spinning binary black hole mergers we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to about 1000 solar masses out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (> 10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than 10^6 templates would be needed for a single-stage multi-mode search. Therefore, we recommend a "two stage" search to save on computational costs: single-mode templates can be used for detection, but multi-mode templates or Prony methods should be use...

  12. Parameter Estimation of Induction Motors Using Water Cycle Optimization

    Directory of Open Access Journals (Sweden)

    M. Yazdani-Asrami

    2013-12-01

    Full Text Available This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of the exact and approximate induction motor models from the nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, the overall system performance. Therefore, it is essential to develop algorithms for the parameter estimation of the induction motor. The fundamental concepts and ideas which underlie the proposed method are inspired by nature and based on the observation of the water cycle process and how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with other previously applied metaheuristic methods on the problem, which show the feasibility and the fast convergence of the proposed approach.

  13. Spatial dependence clusters in the estimation of forest structural parameters

    Science.gov (United States)

    Wulder, Michael Albert

    1999-12-01

    In this thesis we provide a summary of the methods by which remote sensing may be applied in forestry, while also acknowledging the various limitations which are faced. The application of spatial statistics to high spatial resolution imagery is explored as a means of increasing the information which may be extracted from digital images. A number of high spatial resolution optical remote sensing satellites that are soon to be launched will increase the availability of imagery for the monitoring of forest structure. This technological advancement is timely as current forest management practices have been altered to reflect the need for sustainable ecosystem level management. The low accuracy level at which forest structural parameters have been estimated in the past is partly due to low image spatial resolution. A large pixel is often composed of a number of surface features, resulting in a spectral value which is due to the reflectance characteristics of all surface features within that pixel. In the case of small pixels, a portion of a surface feature may be represented by a single pixel. When a single pixel represents a portion of a surface object, the potential to isolate distinct surface features exists. Spatial statistics, such as the Getis statistic, provide an image processing method to isolate distinct surface features. In this thesis, high spatial resolution imagery sensed over a forested landscape is processed with spatial statistics to combine distinct image objects into clusters, representing individual or groups of trees. Tree clusters are a means to deal with the inevitable foliage overlap which occurs within complex mixed and deciduous forest stands. The generation of image objects, that is, clusters, is necessary to deal with the presence of spectrally mixed pixels. The ability to estimate forest inventory and biophysical parameters from image clusters generated from spatially dependent image features is tested in this thesis. The inventory ...

  14. Parameters estimation and measurement of thermophysical properties of liquids

    Energy Technology Data Exchange (ETDEWEB)

    Remy, B.; Degiovanni, A. [Ecole Nationale Superieure et de Mecanique, Univ. Henri Poincare-Nancy 1, Inst. National Polytechnique de Lorraine, Vandoeuvre Les Nancy (France); Lab. d' Energetique et de Mecanique Theorique et Appliquee, Univ. Henri Poincare-Nancy 1, Inst. National Polytechnique de Lorraine, Vandoeuvre Les Nancy (France)

    2005-09-01

    The goal pursued in this paper is to implement an experimental bench allowing the measurement of the thermal diffusivity and conductivity of liquids. The principle of the measurement, based on a pulsed method, is presented. The entire problem is solved through the thermal quadrupoles method. Then, the parameter estimation problem, which is especially difficult in this case due to the presence of the walls of the measurement cell, is described and an optimal thickness for these walls is defined from a sensitivity study. Finally, we show how it is possible to take into account the radiative transfer within the fluid in the estimation problem, before presenting the set-up and some experimental results. (author)

  15. Estimating seismic demand parameters using the endurance time method

    Institute of Scientific and Technical Information of China (English)

    Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI

    2011-01-01

    The endurance time (ET) method is a time history based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performances are judged based on their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for estimating performance in performance-based structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating performances and target displacements of steel frames. This study will lead to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.

  16. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing good importance density is a critical issue in particle filter design. In order to improve the approximation of posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from the distribution. This algorithm is applied in 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF) both in efficiency and in estimation precision.

  17. Energy parameter estimation in solar powered wireless sensor networks

    KAUST Repository

    Mousa, Mustafa

    2014-02-24

    The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms. © Springer International Publishing Switzerland 2014.

  18. Area-to-point parameter estimation with geographically weighted regression

    Science.gov (United States)

    Murakami, Daisuke; Tsutsumi, Morito

    2015-07-01

    The modifiable areal unit problem (MAUP) is a problem by which aggregated units of data influence the results of spatial data analysis. Standard GWR, which ignores aggregation mechanisms, cannot be considered to serve as an efficient countermeasure of MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating MAUP through a simulation study and an empirical study. The simulation study indicates that the method proposed herein is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.

  19. Observer based parallel IM speed and parameter estimation

    Directory of Open Access Journals (Sweden)

    Skoko Saša

    2014-01-01

    Full Text Available A detailed presentation of a modern algorithm for the rotor speed estimation of an induction motor (IM) is given. The algorithm includes parallel speed and resistance parameter estimation and allows a robust shaft-sensorless operation in diverse conditions, including full load and low speed operation with a large thermal drift. The direct connection between the injected electric signal in the d-axis and the component of the injected rotor flux is pointed out. The algorithm applied in the paper uses the component of the injected rotor flux in the d-axis extracted from the observer state vector and the filtered measured current of one motor phase. By applying the mentioned algorithm, the system converges towards the given reference. [Projekat Ministarstva nauke Republike Srbije, br. III 42004]

  20. Estimation of the reconstruction parameters for Atom Probe Tomography

    CERN Document Server

    Gault, Baptiste; Stephenson, Leigh T; Moody, Michael P; Muddle, Barry C; Ringer, Simon P

    2015-01-01

    The application of wide field-of-view detection systems to atom probe experiments emphasizes the importance of careful parameter selection in the tomographic reconstruction of the analysed volume, as the sensitivity to errors rises steeply with increases in analysis dimensions. In this paper, a self-consistent method is presented for the systematic determination of the main reconstruction parameters. In the proposed approach, the compression factor and the field factor are determined using geometrical projections from the desorption images. A 3D Fourier transform is then applied to a series of reconstructions and, comparing to the known material crystallography, the efficiency of the detector is estimated. The final results demonstrate a significant improvement in the accuracy of the reconstructed volumes.

  1. Pedotransfer functions estimating soil hydraulic properties using different soil parameters

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye;

    2008-01-01

    Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... of the hydraulic properties of the studied soils. We found that introducing measured water content as a predictor generally gave lower errors for water retention predictions and higher errors for conductivity predictions. The best of the developed PTFs for predicting hydraulic conductivity was tested against PTFs...

  2. Estimation of the empirical model parameters of unsaturated soils

    Directory of Open Access Journals (Sweden)

    Bouchemella Salima

    2016-01-01

    Full Text Available For any flow modelling in unsaturated soils, it is necessary to determine the retention curve and the hydraulic conductivity curve of the studied soils. Some empirical models use the same parameters to describe these two hydraulic properties. For this reason, the estimation of these parameters is achieved by adjusting the experimental points to the retention curve only, which is more easily measured as compared with the hydraulic conductivity curve. In this work, we show that the adjustment of the retention curve θ(h) is not generally sufficient to describe the hydraulic conductivity curve K(θ) and the spatio-temporal variation of the moisture in the soil θ(z). The models used in this study are the van Genuchten–Mualem model (1980, 1976) and the Brooks and Corey model (1964), for two different soils: Gault clay and Givors silt.
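
    For reference, a short sketch of the van Genuchten (1980) retention curve and the Mualem (1976) conductivity model referred to above; the parameter values are generic loam-like placeholders, not the fitted values for the Gault clay or the Givors silt.

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """Water content theta(h) for suction head h >= 0 (van Genuchten, 1980)."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)     # effective saturation
            return theta_r + (theta_s - theta_r) * se

        def mualem_conductivity(h, k_s, alpha, n, l=0.5):
            """Hydraulic conductivity K(h) from the Mualem (1976) model."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
            return k_s * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Placeholder parameters: theta_r, theta_s, alpha [1/cm], n, Ks [cm/day]
        h = np.logspace(0, 4, 5)                              # suction heads [cm]
        print(van_genuchten_theta(h, 0.05, 0.43, 0.04, 1.6))
        print(mualem_conductivity(h, 25.0, 0.04, 1.6))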

  3. Enhancing the Precision of Parameter Estimation in Band Gap

    Science.gov (United States)

    Huang, J.; Zhan, Q.; Liu, Z. K.

    2016-09-01

    Recently, the dynamics of quantum Fisher information (QFI) in various environments have been investigated and many kinds of schemes to overcome the drawback of decoherence have been designed. Here we propose the pseudomode method to enhance the phase parameter precision of optimal quantum estimation of a qubit coupled to a non-Markovian structured environment. We find that the QFI can be enhanced in the weak-coupling regime with a non-perfect band gap and can be trapped permanently with a large value in the perfect band gap. The effects of qubit-pseudomode detuning and the spectrum of the reservoir are discussed, and a reasonable physical explanation is given.

  4. The basel II risk parameters estimation, validation, and stress testing

    CERN Document Server

    Engelmann, Bernd

    2006-01-01

    In the last decade the banking industry has experienced a significant development in the understanding of credit risk. Refined methods were proposed concerning the estimation of key risk parameters like default probabilities. Further, a large volume of literature on the pricing and measurement of credit risk in a portfolio context has evolved. This development was partly reflected by supervisors when they agreed on the new revised capital adequacy framework, Basel II. Under Basel II, the level of regulatory capital depends on the risk characteristics of each credit while a portfolio context is ...

  5. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting the artificial intelligence technology. By reasoning out the conclusion from the fitting results of failure data of a software project, the SRES can recommend users “the most suitable model” as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models well. We report investigation results of singularity and parameter estimation methods of experimental models in SRES.

  6. Estimating bias from loss to follow-up in a prospective cohort study of bicycle crash injuries

    Science.gov (United States)

    Tin Tin, Sandar; Woodward, Alistair; Ameratunga, Shanthi

    2014-01-01

    Background Loss to follow-up, if related to exposures, confounders and outcomes of interest, may bias association estimates. We estimated the magnitude and direction of such bias in a prospective cohort study of crash injury among cyclists. Methods The Taupo Bicycle Study involved 2590 adult cyclists recruited from New Zealand's largest cycling event in 2006 and followed over a median period of 4.6 years through linkage to four administrative databases. We resurveyed the participants in 2009 and excluded three participants who died prior to the resurvey. We compared baseline characteristics and crash outcomes of the baseline (2006) and follow-up (those who responded in 2009) cohorts by ratios of relative frequencies and estimated potential bias from loss to follow-up on seven exposure-outcome associations of interest by ratios of HRs. Results Of the 2587 cyclists in the baseline cohort, 1526 (60%) responded to the follow-up survey. The responders were older, more educated and more socioeconomically advantaged. They were more experienced cyclists who often rode in a bunch, off-road or in the dark, but were less likely to engage in other risky cycling behaviours. Additionally, they experienced bicycle crashes more frequently during follow-up. The selection bias ranged between −10% and +9% for selected associations. Conclusions Loss to follow-up was differential by demographic, cycling and behavioural risk characteristics as well as crash outcomes, but did not substantially bias association estimates of primary research interest. PMID:24336816

  7. Estimation of multipath transmission parameters for quantitative ultrasound measurements of bone.

    Science.gov (United States)

    Dencks, Stefanie; Schmitz, Georg

    2013-09-01

    When applying quantitative ultrasound (QUS) measurements to bone for predicting osteoporotic fracture risk, the multipath transmission of sound waves frequently occurs. In the last 10 years, the interest in separating multipath QUS signals for their analysis awoke, and led to the introduction of several approaches. Here, we compare the performances of the two fastest algorithms proposed for QUS measurements of bone: the modified least-squares Prony method (MLSP), and the space alternating generalized expectation maximization algorithm (SAGE) applied in the frequency domain. In both approaches, the parameters of the transfer functions of the sound propagation paths are estimated. To provide an objective measure, we also analytically derive the Cramér-Rao lower bound of variances for any estimator and arbitrary transmit signals. In comparison with results of Monte Carlo simulations, this measure is used to evaluate both approaches regarding their accuracy and precision. Additionally, with simulations using typical QUS measurement settings, we illustrate the limitations of separating two superimposed waves for varying parameters with focus on their temporal separation. It is shown that for good SNRs around 100 dB, MLSP yields better results when two waves are very close. Additionally, the parameters of the smaller wave are more reliably estimated. If the SNR decreases, the parameter estimation with MLSP becomes biased and inefficient. Then, the robustness to noise of the SAGE clearly prevails. Because a clear influence of the interrelation between the wavelength of the ultrasound signals and their temporal separation is observable on the results, these findings can be transferred to QUS measurements at other sites. The choice of the suitable algorithm thus depends on the measurement conditions.

  8. Vmax estimate from three-parameter critical velocity models: validity and impact on 800 m running performance prediction.

    Science.gov (United States)

    Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc

    2006-05-01

    The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 +/- 7.29 ml min(-1) kg(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during CVT. Vmax estimates were significantly lower than the measured maximal velocity (0.19 ...). Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship (0.83 ...). Three-parameter models had a better association (0.93 ...) and bias (0.00 < Bias < 0.04 m s(-1)) with actual V800m (5.87 +/- 0.49 m s(-1)) than two-parameter models (0.83 ...; Bias < 0.20). If three-parameter models appear to have a better predictive value for short duration events such as the 800 m, the fact that Vmax is not associated with the ability it is supposed to reflect suggests that they are more empirical than systems models.

  9. Propensity score methods for estimating relative risks in cluster randomized trials with low-incidence binary outcomes and selection bias.

    Science.gov (United States)

    Leyrat, Clémence; Caille, Agnès; Donner, Allan; Giraudeau, Bruno

    2014-09-10

    Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates with unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes with low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on PS, inverse weighting on PS, and stratification on PS), only direct adjustment on the PS fully corrected the bias and moreover had the best statistical properties. PMID:24771662
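
    A minimal sketch of the propensity-score step shared by the compared approaches: the probability of treatment is modelled from baseline covariates with logistic regression, and inverse-probability weights are derived from it (one of the PS-based strategies compared; the covariates and data below are simplified assumptions that ignore the clustered design).

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)

        # Simplified illustration: two baseline covariates, individual-level treatment
        n = 500
        X = rng.normal(size=(n, 2))
        treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))

        # Propensity score: estimated P(treated | covariates)
        ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

        # Inverse-probability-of-treatment weights
        weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
        print(weights[:5].round(2))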

  10. The potential for regional-scale bias in top-down CO2 flux estimates due to atmospheric transport errors

    Directory of Open Access Journals (Sweden)

    S. M. Miller

    2014-09-01

    Full Text Available Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM–LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6-hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias were detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistent low net radiation, low energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. Rather, the extent to which meteorological ...

  11. Estimation of Secondary Meteorological Parameters Using Mining Data Techniques

    Directory of Open Access Journals (Sweden)

    Rosabel Zerquera Díaz

    2010-10-01

    Full Text Available This work develops a process of Knowledge Discovery in Databases (KDD) at the Higher Polytechnic Institute José Antonio Echeverría for the group of Environmental Research in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA) in order to obtain a data model to estimate the behavior of secondary weather parameters from surface data. It describes some aspects of Data Mining and its application in the meteorological environment, and also selects and describes the CRISP-DM methodology and the WEKA data analysis tool. Tasks used: attribute selection and regression; technique: neural network of multilayer perceptron type; algorithms: CfsSubsetEval, BestFirst and MultilayerPerceptron. Estimation models are obtained for secondary meteorological parameters: height of convective mixed layer, height of mechanical mixed layer and convective velocity scale, necessary for the study of patterns of dispersion of pollutants in Cujae's area. The results set a precedent for future research and for the continuation of this work, which is in its first stage.

  12. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  13. Parameter estimation in space systems using recurrent neural networks

    Science.gov (United States)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed, using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  14. Estimating Friction Parameters in Reaction Wheels for Attitude Control

    Directory of Open Access Journals (Sweden)

    Valdemir Carrara

    2013-01-01

    Full Text Available The ever-increasing use of artificial satellites in both the study of terrestrial and space phenomena demands a search for increasingly accurate and reliable pointing systems. It is common nowadays to employ reaction wheels for attitude control, which provide a wide range of torque magnitudes, high reliability, and little power consumption. However, the bearing friction causes the wheel response to be nonlinear, which may compromise the stability and precision of the control system as a whole. This work presents a characterization of a typical reaction wheel of 0.65 Nms maximum angular momentum storage, in order to estimate its friction parameters. It used a friction model that takes into account the Coulomb friction, viscous friction, and static friction, according to the Stribeck formulation. The parameters were estimated by means of a nonlinear batch least squares procedure, from experimentally acquired data. The results showed good agreement with the experimental data and were also close to a deterministic model, previously obtained for this wheel. This model was then employed in a Dynamic Model Compensator (DMC) control, which successfully reduced the attitude steady-state error of an instrumented one-axis air-bearing table.
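
    A bare-bones sketch of a friction model of the kind mentioned (Coulomb + viscous + static terms with a Stribeck-type transition) and a nonlinear least-squares fit of its parameters; the exponential functional form, the parameter values, and the synthetic data are assumptions, not the wheel characterized in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def stribeck_friction(omega, tau_c, b, tau_s, omega_s):
            """Friction torque vs. wheel speed: Coulomb + viscous + Stribeck exponential."""
            stribeck = (tau_s - tau_c) * np.exp(-(omega / omega_s) ** 2)
            return np.sign(omega) * (tau_c + stribeck) + b * omega

        # Synthetic measurements around assumed "true" parameters
        omega = np.linspace(-300.0, 300.0, 121)    # wheel speed [rad/s]
        true = (2e-3, 1e-5, 3e-3, 20.0)            # tau_c [Nm], b [Nms], tau_s [Nm], omega_s [rad/s]
        torque = stribeck_friction(omega, *true)
        torque += np.random.default_rng(7).normal(0.0, 5e-5, omega.size)

        popt, _ = curve_fit(stribeck_friction, omega, torque, p0=(1e-3, 5e-6, 2e-3, 10.0))
        print(popt)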

  15. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik;

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...

  16. Bayesian Approach in Estimation of Scale Parameter of Nakagami Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-08-01

    Full Text Available Nakagami distribution is a flexible life time distribution that may offer a good fit to some failure data sets. It has applications in attenuation of wireless signals traversing multiple paths, deriving unit hydrographs in hydrology, medical imaging studies etc. In this research, we obtain Bayesian estimators of the scale parameter of Nakagami distribution. For the posterior distribution of this parameter, we consider Uniform, Inverse Exponential and Levy priors. The three loss functions taken up are Squared Error Loss function, Quadratic Loss Function and Precautionary Loss function. The performance of an estimator is assessed on the basis of its relative posterior risk. Monte Carlo Simulations are used to compare the performance of the estimators. It is discovered that the PLF produces the least posterior risk when the uniform prior is used. SELF is the best when the inverse exponential and Levy priors are used.

  17. On-line estimation of concentration parameters in fermentation processes

    Institute of Scientific and Technical Information of China (English)

    XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he

    2005-01-01

    It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, posed almost insurmountable problems to engineers. A novel software sensor is proposed to make more effective use of those measurements that are already available, which enables improvement in fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP) with expectation maximization (EM) algorithm employed for parameter estimation of mixture of models. The mixture model can alleviate computational complexity of GP and also accord with changes of operating condition in fermentation processes, i.e., it would certainly be able to examine what types of process-knowledge would be most relevant for local models' specific operating points of the process and then combine them into a global one. Demonstrated by the on-line estimation of yeast concentration in the fermentation industry as an example, it is shown that soft sensor based state estimation is a powerful technique for both enhancing automatic control performance of biological systems and implementing on-line monitoring and optimization.

  18. Estimation of Shower Parameters in Wavefront Sampling Technique

    CERN Document Server

    Chitnis, V R

    2001-01-01

    Wavefront sampling experiments record arrival times of Čerenkov photons with high precision at various locations in the Čerenkov pool using a distributed array of telescopes. It was shown earlier that this photon front can be fitted with a spherical surface traveling at the speed of light and originating from a single point on the shower axis. The radius of curvature of the spherical shower front (R) is approximately equal to the height of shower maximum from observation level. For a given primary species, it is also found that R varies with the primary energy (E) and this provides a method of estimating the primary energy. In general, one can estimate the arrival times at each telescope using the radius of curvature, arrival direction of the primary and the core location. This, when compared with the data, enables us to estimate the above parameters for each shower. This method of obtaining the arrival direction alleviates the difficulty in the form of systematics arising out of the plane wavefront approx...

  19. A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.

    Science.gov (United States)

    Shoemaker, David M.

    Described and listed herein, with concomitant sample input and output, is the Fortran IV program which estimates parameters, and standard errors of estimate for those parameters, through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)

  20. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    Science.gov (United States)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.

  1. Parameter Estimations for Signal Type Classification of Korean Disordered Voices

    Directory of Open Access Journals (Sweden)

    JiYeoun Lee

    2015-12-01

    Full Text Available Although many signal-typing studies have been published, they are primarily based on manual inspection and experts' judgments of voice samples' acoustic content. Software may be required to automatically and objectively classify pathological voices into the four signal types and to facilitate experts' opinion formation by providing specific signal type determination criteria. This paper suggests the coefficient of normalized skewness variation (CSV), coefficient of normalized kurtosis variation (CKV), and bicoherence value (BV) based on the linear predictive coding (LPC) residual to categorize voice signals. Its objective is to improve the performances of acoustic parameters such as jitter, shimmer, and the signal-to-noise ratio (SNR) in signal type classification. In this study, the classification and regression tree (CART) was used to estimate the performances of the acoustic, CSV, CKV, and BV parameters by using the LPC residual. In the investigation of acoustic parameters such as jitter, shimmer, and the SNR, the optimal tree generated by jitter alone yielded an average accuracy of 78.6%. When the acoustic, CSV, CKV, and BV parameters together were used to generate the decision tree, the average accuracy was 82.1%. In this case, the optimal tree formed by jitter and the BV effectively discriminated between the signal types. To perform accurate acoustic pathological voice analysis, signal type quantification is of great interest. Automatic pathological voice classification can be an important objective tool as the signal type can be numerically measured. Future investigations will incorporate multiple pathological data in classification methods to improve their performance and implement more reliable detectors.

  2. Estimation of the Alpha Factor Parameters Using the ICDE Database

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Hwang, M. J.; Han, S. H

    2007-04-15

    Detailed common cause failure (CCF) analysis generally needs data on CCF events from other nuclear power plants because CCF events rarely occur. KAERI has participated in the international common cause failure data exchange (ICDE) project to obtain data on CCF events. The operation office of the ICDE project sent the CCF event data for emergency diesel generators (EDGs) to KAERI in December 2006. As a pilot study, we performed the detailed CCF analysis of EDGs for Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When offsite power and the two onsite EDGs are not available, one alternate AC (AAC) diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the Alpha Factor and the CCF probability for the cases where the three EDGs were assumed to be identically designed, and for the cases where they were assumed to be not identically designed. For the cases where the three EDGs were assumed to be identically designed, double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively. Triple CCF probabilities of those were estimated as 2.39E-4 and 2.42E-4, respectively. As each NPP has no experience of 'fails to run' events, Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability. The estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the cases where the three EDGs are identical is higher than that where the three EDGs are different. The estimated system unavailability of the former case was higher by 3.4% compared with that of the latter. As a future study, a computerization effort for the estimation of the CCF parameters will be performed.

  3. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperformed the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and designing complex beampatterns. To avoid jamming and enhance the signal to noise ratio, it is often interesting to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as the transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is firstly designed by minimizing a convex optimization problem, which is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. The second part of this thesis covers the topic of target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance. However, it requires a two dimensional search problem. Therefore, its computational complexity is prohibitively high. So, we proposed a reduced complexity and optimum performance algorithm which allows the two dimensional fast Fourier transform to jointly estimate the spatial location

  4. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Its results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
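    The permutation procedure recommended above can be sketched as follows: the cross-validated accuracy is recomputed on label-shuffled copies of the data to build an empirical null distribution instead of assuming a binomial law. The classifier and cross-validation scheme below (a linear SVM with 5-fold CV) are placeholders, not the ones used in the study.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def permutation_p_value(X, y, n_perm=200, cv=5, seed=0):
        """Permutation test for cross-validated accuracy."""
        rng = np.random.default_rng(seed)
        clf = SVC(kernel="linear")
        observed = cross_val_score(clf, X, y, cv=cv).mean()
        null = np.array([cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
                         for _ in range(n_perm)])
        p = (np.sum(null >= observed) + 1) / (n_perm + 1)
        return observed, p

    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 10))     # random features
    y = rng.integers(0, 2, size=60)       # random binary labels
    acc, p = permutation_p_value(X, y)
    print(f"CV accuracy = {acc:.2f}, permutation p = {p:.2f}")
    ```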

  5. Reducing the bias of estimates of genotype by environment interactions in random regression sire models.

    Science.gov (United States)

    Lillehammer, Marie; Odegård, Jørgen; Meuwissen, Theo H E

    2009-03-19

    The combination of a sire model and a random regression term describing genotype by environment interactions may lead to biased estimates of genetic variance components because of heterogeneous residual variance. In order to test different models, simulated data with genotype by environment interactions, and dairy cattle data assumed to contain such interactions, were analyzed. Two animal models were compared to four sire models. Models differed in their ability to handle heterogeneous variance from different sources. Including an individual effect with a (co)variance matrix restricted to three times the sire (co)variance matrix permitted the modeling of the additive genetic variance not covered by the sire effect. This made the ability of sire models to handle heterogeneous genetic variance approximately equivalent to that of animal models. When residual variance was heterogeneous, a different approach to account for the heterogeneity of variance was needed, for example when using dairy cattle data in order to prevent overestimation of genetic heterogeneity of variance. Including environmental classes can be used to account for heterogeneous residual variance.

  6. Multiphase flow parameter estimation based on laser scattering

    International Nuclear Information System (INIS)

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time. (paper)

  7. Multiphase flow parameter estimation based on laser scattering

    Science.gov (United States)

    Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.

    2015-07-01

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.

  8. Multivariate phase type distributions - Applications and parameter estimation

    DEFF Research Database (Denmark)

    Meisch, David

    The best known univariate probability distribution is the normal distribution. It is used throughout the literature in a broad field of applications. In cases where it is not sensible to use the normal distribution, alternative distributions are at hand and well understood, many of these belonging...... to the class of phase type distributions. Phase type distributions have several advantages. They are versatile in the sense that they can be used to approximate any given probability distribution on the positive reals. There exist general probabilistic results for the entire class of phase type distributions...... and statistical inference, is the multivariate normal distribution. Unfortunately, only little is known about the general class of multivariate phase type distributions. Considering the results concerning parameter estimation and inference theory of univariate phase type distributions, the class of multivariate...

  9. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  10. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
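    As an illustration of the Fisher-matrix forecasting machinery referred to above, the sketch below builds a Gaussian Fisher matrix by central finite differences for a toy two-parameter observable; it does not implement the relativistic $C_\ell(z_1,z_2)$ spectra or the survey specifications of the paper.

    ```python
    import numpy as np

    def fisher_matrix(model, theta0, sigma, eps=1e-4):
        """F_ij = sum_l (dmu_l/dtheta_i)(dmu_l/dtheta_j)/sigma_l^2 with
        derivatives from central finite differences around theta0."""
        theta0 = np.asarray(theta0, dtype=float)
        derivs = []
        for i in range(len(theta0)):
            dp = np.zeros_like(theta0)
            dp[i] = eps * max(abs(theta0[i]), 1.0)
            derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2 * dp[i]))
        D = np.array(derivs)                              # (n_params, n_data)
        F = (D / sigma**2) @ D.T
        return F, np.sqrt(np.diag(np.linalg.inv(F)))      # forecast 1-sigma errors

    # Toy observable: mu_l(A, n) = A * l^n over a band of multipoles
    ell = np.arange(10, 200, dtype=float)
    model = lambda th: th[0] * ell ** th[1]
    sigma = 0.05 * model([1.0, -1.0])                     # hypothetical errors
    F, errs = fisher_matrix(model, [1.0, -1.0], sigma)
    print(errs)
    ```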

  11. Enhancing parameter precision of optimal quantum estimation by quantum screening

    Science.gov (United States)

    Jiang, Huang; You-Neng, Guo; Qin, Xie

    2016-02-01

    We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system used to inhibit the decoherence processes and erase the excited state to the ground state. Compared with the case without quantum screening, the results show that the quantum Fisher information retains a larger value during the evolution when quantum screening is applied. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).

  12. MANOVA, LDA, and FA criteria in clusters parameter estimation

    Directory of Open Access Journals (Sweden)

    Stan Lipovetsky

    2015-12-01

    Full Text Available Multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) apply such well-known criteria as Wilks' lambda, the Lawley–Hotelling trace, and Pillai's trace test for checking the quality of the solutions. The current paper suggests using these criteria to build objectives for finding cluster parameters, because optimizing such objectives corresponds to the best discrimination between the clusters. The relation to Joreskog's classification for factor analysis (FA) techniques is also considered. The problem can be reduced to a multinomial parameterization, and the solution can be found by a nonlinear optimization procedure which yields the estimates for the cluster centers and sizes. This approach to clustering works with data compressed into a covariance matrix, so it can be especially useful for big data.

  13. The impact of response bias on estimates of health care utilization in a metropolitan area: The use of administrative data

    NARCIS (Netherlands)

    Reijneveld, S.A.; Stronks, K.

    1999-01-01

    Background. Surveys among the general population are an important method for collecting epidemiological data on health and utilization of health care in that population. Selective non-response may affect the validity of these data. This study examines the impact of response bias on estimates of health care utilization.

  14. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study

    DEFF Research Database (Denmark)

    Wood, L.; Egger, M.; Gluud, L.L.;

    2008-01-01

    OBJECTIVE: To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome. DESIGN: Combined analysis of data from three meta-epidemiological studies based o...

  15. Analysis of Wave Directional Spreading by Bayesian Parameter Estimation

    Institute of Scientific and Technical Information of China (English)

    钱桦; 莊士贤; 高家俊

    2002-01-01

    A spatial array of wave gauges installed on an observation platform has been designed and arranged to measure the local features of winter monsoon directional waves off the Taishi coast of Taiwan. A new method, named the Bayesian Parameter Estimation Method (BPEM), is developed and adopted to determine the main direction and the directional spreading parameter of directional spectra. The BPEM can be considered as a regression analysis that finds the maximum joint probability of the parameters which best approximates the observed data from the Bayesian viewpoint. The analysis of field wave data demonstrates the strong dependence of the characteristics of the normalized directional spreading on the wave age. The Mitsuyasu-type empirical formula of the directional spectrum is therefore modified to be representative of the monsoon wave field. Moreover, it is suggested that Smax can be expressed as a function of wave steepness, with the values of Smax decreasing with increasing steepness. Finally, a local directional spreading model, which is simple to utilize in engineering practice, is proposed.
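    For reference, a minimal sketch of the Mitsuyasu-type spreading function underlying the discussion above, in which the spreading parameter s controls how strongly the energy is concentrated around the main direction; the monsoon-specific modification and the Smax-steepness relation proposed in the paper are not reproduced here.

    ```python
    import numpy as np

    def mitsuyasu_spreading(theta, theta_m, s):
        """Directional spreading G(theta) ~ cos^(2s)((theta - theta_m)/2),
        normalised to integrate to one over direction."""
        g = np.cos(0.5 * (theta - theta_m)) ** (2 * s)
        return g / np.trapz(g, theta)

    theta = np.linspace(-np.pi, np.pi, 721)
    for s in (5, 10, 25):   # larger s gives a narrower spreading
        print(s, float(mitsuyasu_spreading(theta, 0.0, s).max()))
    ```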

  16. Estimation of genetic parameters for reproductive traits in Shall sheep.

    Science.gov (United States)

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in the northwest of Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the fixed effects included in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant. Heritability estimates were obtained for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, and the corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. Results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, the strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep.

  17. Estimation of genetic parameters for reproductive traits in Shall sheep.

    Science.gov (United States)

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in the northwest of Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the fixed effects included in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant. Heritability estimates were obtained for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, and the corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. Results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, the strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep. PMID:23334381

  18. DriftLess™, an innovative method to estimate and compensate for the biases of inertial sensors

    NARCIS (Netherlands)

    Ruizenaar, M.G.H.; Kemp, R.A.W.

    2014-01-01

    In this paper a method is presented that allows for bias compensation of low-cost MEMS inertial sensors. It is based on the use of two sets of inertial sensors and a rotation mechanism that physically rotates the sensors in an alternating fashion. After signal processing, the biases of both sets of

  19. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer

    Science.gov (United States)

    Fetterly, Kenneth A.; Favazza, Christopher P.

    2016-08-01

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices ($d'$) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in $d'$ estimates which were as much as 2.9×  greater than expected of a quantum-limited system. Over-estimation of $d'$ was attributed to temporally variable non-stationary noise, the measured index comprising contributions from the test object ($d'_o$) and non-stationary noise ($d'_{ns}$). Given the nature of the imaging system and the experimental methods, $d'_o$ cannot be directly determined independent of $d'_{ns}$. However, methods to estimate $d'_{ns}$ independent of $d'_o$ were developed. In accordance with the theory, $d'_{ns}$ was subtracted from the experimental estimates of $d'$, providing an unbiased estimate of $d'_o$. Estimates of $d'_o$ exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide $d'_o$ estimates which are accurate and precise for $d'_o \gtrsim 1.0$. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  20. Gross Error Detection and Identification Based on Parameter Estimation for Dynamic Systems

    Institute of Scientific and Technical Information of China (English)

    姜春阳; 邱彤; 赵劲松; 陈丙珍

    2009-01-01

    The detection and identification of gross errors, especially measurement bias, plays a vital role in data reconciliation for nonlinear dynamic systems. Although the parameter estimation method has been proved to be a powerful tool for bias identification, without a reliable and efficient bias detection strategy the method is limited in efficiency and cannot be applied widely. In this paper, a new bias detection strategy is constructed to detect the presence of measurement bias and its occurrence time. With the help of this strategy, the number of parameters to be estimated is greatly reduced, and sequential detections and iterations are also avoided. In addition, the number of decision variables of the optimization model is reduced, through which the influence of the estimated parameters is reduced. By incorporating the strategy into the parameter estimation model, a new methodology named IPEBD (Improved Parameter Estimation method with Bias Detection strategy) is constructed. Simulation studies on a continuous stirred tank reactor (CSTR) and the Tennessee Eastman (TE) problem show that IPEBD is efficient for eliminating random errors, measurement biases and outliers contained in dynamic process data.

  1. Influence of Discharge Parameters on Tuned Substrate Self-Bias in an Radio-Frequency Inductively Coupled Plasma

    Institute of Scientific and Technical Information of China (English)

    Ding Zhenfeng; Sun Jingchao; Wang Younian

    2005-01-01

    The tuned substrate self-bias in an rf inductively coupled plasma source is controlled by means of varying the impedance of an external LC network inserted between the substrate and the ground. The influencing parameters such as the substrate axial position, different coupling coils and the inserted resistance are experimentally studied. To get a better understanding of the experimental results, the axial distributions of the plasma density, electron temperature and plasma potential are measured with an rf compensated Langmuir probe; the coil rf peak-to-peak voltage is measured with a high voltage probe. As in the case of changing discharge power, it is found that continuity, instability and bi-stability of the tuned substrate bias can be obtained by means of changing the substrate axial position in the plasma source or the inserted resistance. Additionally, continuity cannot transition directly into bi-stability, but evolves via instability. The inductance of the coupling coil has a substantial effect on the magnitude and the property of the tuned substrate bias.

  2. ESTIMATION OF PARAMETERS IN STEP-STRESS ACCELERATED LIFE TESTS FOR THE RAYLEIGH DISTRIBUTION UNDER CENSORING SETUP

    Directory of Open Access Journals (Sweden)

    N. Chandra

    2014-12-01

    Full Text Available In this paper, a step-stress accelerated life test strategy is considered for obtaining failure time data of highly reliable items, units or equipment within a specified period of time. It is assumed that the lifetime data of such items follow a Rayleigh distribution with a scale parameter (θ) which is a log-linear function of the stress levels. The maximum likelihood estimates (MLEs) of the scale parameters (θ_i) at both stress levels (s_i, i = 1, 2) are obtained under a cumulative exposure model. A simulation study is performed to assess the precision of the MLEs on the basis of the mean square error (MSE) and the relative absolute bias (RABias). The coverage probabilities of approximate and bootstrap confidence intervals for the parameters involved under both censoring setups are numerically examined. In addition, the asymptotic variance and covariance matrix of the estimators are also presented.

  3. Weak-lensing shear estimates with general adaptive moments, and studies of bias by pixellation, PSF distortions, and noise

    CERN Document Server

    Simon, Patrick

    2016-01-01

    In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We employ general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. For GLAM, the ellipticity is identical to that of isophotes of elliptical images, and this ellipticity is always an unbiased estimator of reduced shear. Our theoretical framework reiterates that moment-based techniques are similar to a model-based approach in the sense that they fit an elliptical profile to the image to obtain weighted moments. As a result, moment-based estimates of ellipticities are prone to underfitting bias. The estimation is fundamentally limited mainly by pixellation which destroys information on the original, pre-seeing image. We give an optimized estimator for the pre-seeing GLAM ellipticity and its bias for noise-free images. To deal with images where pixel noise is prominent, we conside...

  4. Estimation of the refractive index structure parameter from single-level daytime routine weather data.

    Science.gov (United States)

    van de Boer, A; Moene, A F; Graf, A; Simmer, C; Holtslag, A A M

    2014-09-10

    Atmospheric scintillations cause difficulties for applications where an undistorted propagation of electromagnetic radiation is essential. These scintillations are related to turbulent fluctuations of temperature and humidity that are in turn related to surface heat fluxes. We developed an approach that quantifies these scintillations by estimating C_n^2 from surface fluxes that are derived from single-level routine weather data. In contrast to previous methods that are biased to dry and warm air, our method is directly applicable to several land surface types, environmental conditions, wavelengths, and measurement heights (lookup tables for a limited number of site-specific parameters are provided). The approach allows for an efficient evaluation of the performance of, e.g., infrared imaging systems, laser geodetic systems, and ground-to-satellite optical communication systems. We tested our approach for two grass fields in central and southern Europe, and for a wheat field in central Europe. Although there are uncertainties in the flux estimates, the impact on C_n^2 is shown to be rather small. The daytime C_n^2 estimates agree well with values determined from eddy covariance measurements for the application to the three fields. However, some adjustments were needed for the approach for the grass in southern Europe because of non-negligible boundary-layer processes that occur in addition to surface-layer processes.

  5. [Base-rate estimates for negative response bias in a workers' compensation claim sample].

    Science.gov (United States)

    Merten, T; Krahi, G; Krahl, C; Freytag, H W

    2010-09-01

    Against the background of a growing interest in symptom validity assessment in European countries, new data on base rates of negative response bias are presented. A retrospective data analysis of forensic psychological evaluations was performed, based on 398 patients with workers' compensation claims. Forty-eight percent of all patients scored below the cut-off in at least one symptom validity test (SVT), indicating possible negative response bias. However, different SVTs appear to have differing potential to identify negative response bias. The data point to the necessity of using modern methods to check data validity in civil forensic contexts.

  6. Clinical refinement of the automatic lung parameter estimator (ALPE).

    Science.gov (United States)

    Thomsen, Lars P; Karbing, Dan S; Smith, Bram W; Murley, David; Weinreich, Ulla M; Kjærgaard, Søren; Toft, Egon; Thorgaard, Per; Andreassen, Steen; Rees, Stephen E

    2013-06-01

    The automatic lung parameter estimator (ALPE) method was developed in 2002 for bedside estimation of pulmonary gas exchange using step changes in inspired oxygen fraction (FIO₂). Since then a number of studies have been conducted indicating the potential for clinical application and necessitating systems evolution to match clinical application. This paper describes and evaluates the evolution of the ALPE method from a research implementation (ALPE1) to two commercial implementations (ALPE2 and ALPE3). A need for dedicated implementations of the ALPE method was identified: one for spontaneously breathing (non-mechanically ventilated) patients (ALPE2) and one for mechanically ventilated patients (ALPE3). For these two implementations, design issues relating to usability and automation are described including the mixing of gasses to achieve FIO₂ levels, and the automatic selection of FIO₂. For ALPE2, these improvements are evaluated against patients studied using the system. The major result is the evolution of the ALPE method into two dedicated implementations, namely ALPE2 and ALPE3. For ALPE2, the usability and automation of FIO₂ selection has been evaluated in spontaneously breathing patients showing that variability of gas delivery is 0.3 % (standard deviation) in 1,332 breaths from 20 patients. Also for ALPE2, the automated FIO2 selection method was successfully applied in 287 patient cases, taking 7.2 ± 2.4 min and was shown to be safe with only one patient having SpO₂ < 86 % when the clinician disabled the alarms. The ALPE method has evolved into two practical, usable systems targeted at clinical application, namely ALPE2 for spontaneously breathing patients and ALPE3 for mechanically ventilated patients. These systems may promote the exploration of the use of more detailed descriptions of pulmonary gas exchange in clinical practice.

  7. Noise-bias compensation in physical-parameter system identification under microtremor input

    OpenAIRE

    Yoshitomi, S.; Takewaki, Izuru

    2009-01-01

    A direct method of physical-parameter system identification (SI) is developed in the case of containing noises at both floors above and below a specified story. To investigate the effect of the level of noise on the accuracy of identification, numerical simulations are performed in the frequency domain by generating two stationary random processes with the specified levels of power spectra. When the previous method of physical-parameter SI is applied to the case contaminated by noise at both ...

  8. Propagation of biases in humidity in the estimation of global irrigation water

    Directory of Open Access Journals (Sweden)

    Y. Masaki

    2015-07-01

    Although different GHMs have different sensitivities to atmospheric humidity because different types of potential evapotranspiration formulae are implemented in them, bias correction of the humidity should be applied to forcing data, particularly for the evaluation of evapotranspiration and irrigation water.

  9. Analysis of burnup credit on spent fuel transport / storage casks - estimation of reactivity bias

    International Nuclear Information System (INIS)

    Chemical analyses of high-burnup UO2 (65 GWd/t) and MOX (45 GWd/t) spent fuel pins were carried out. Measured nuclide composition data from U-234 to Pu-242 were used for evaluation of the ORIGEN-2/82 code and a nuclear fuel design code (NULIF). Criticality calculations were performed for transport and storage casks holding 52 BWR or 21 PWR spent fuel assemblies. The reactivity biases were evaluated for axial and horizontal burnup profiles, the historical void fraction (BWR), operational histories such as control rod insertion history, BPR insertion history and others, and the calculational accuracy of ORIGEN-2/82 for nuclide composition. This study shows that the introduction of burnup credit has a large merit in the criticality safety analysis of casks, even if these reactivity biases are considered. The concept of equivalent uniform burnup was adapted for the present reactivity bias evaluation and showed the possibility of simplifying the reactivity bias evaluation in burnup credit. (authors)

  10. Anaerobic biodegradability of fish remains: experimental investigation and parameter estimation.

    Science.gov (United States)

    Donoso-Bravo, Andres; Bindels, Francoise; Gerin, Patrick A; Vande Wouwer, Alain

    2015-01-01

    The generation of organic waste associated with aquaculture fish processing has increased significantly in recent decades. The objective of this study is to evaluate the anaerobic biodegradability of several fish processing fractions, as well as water treatment sludge, for tilapia and sturgeon species cultured in recirculated aquaculture systems. After substrate characterization, the ultimate biodegradability and the hydrolytic rate were estimated by fitting a first-order kinetic model with the biogas production profiles. In general, the first-order model was able to reproduce the biogas profiles properly with a high correlation coefficient. In the case of tilapia, the skin/fin, viscera, head and flesh presented a high level of biodegradability, above 310 mL CH₄ g COD⁻¹, whereas the head and bones showed a low hydrolytic rate. For sturgeon, the results for all fractions were quite similar in terms of both parameters, although viscera presented the lowest values. Both the substrate characterization and the kinetic analysis of the anaerobic degradation may be used as design criteria for implementing anaerobic digestion in a recirculating aquaculture system. PMID:25812103
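    The parameter estimation step described above amounts to fitting a first-order kinetic model to the cumulative methane production curve. A minimal sketch with invented data points, assuming the model form B(t) = B_ult (1 - exp(-k t)); the paper's exact formulation and units may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, b_ult, k):
        """Cumulative methane production: ultimate biodegradability b_ult
        (e.g. mL CH4 per g COD) and first-order hydrolytic rate k (1/d)."""
        return b_ult * (1.0 - np.exp(-k * t))

    # Hypothetical cumulative production data (days, mL CH4 per g COD):
    t = np.array([0, 2, 5, 10, 15, 20, 30], dtype=float)
    b = np.array([0, 90, 180, 260, 300, 320, 335], dtype=float)

    (b_ult, k), _ = curve_fit(first_order, t, b, p0=[300.0, 0.1])
    print(f"ultimate biodegradability ~ {b_ult:.0f}, hydrolytic rate ~ {k:.2f} 1/d")
    ```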

  11. Estimation of parameters of K-meson structure functions

    International Nuclear Information System (INIS)

    On the basis of a multiparton recombination model, using the Kuti-Weisskopf parametrization, the available experimental data on inclusive spectra of the vector and tensor mesons in the reactions K±p → MX (M = ρ, φ, K(890), K(1430)) in the kaon fragmentation region at high energies (32-110 GeV/c) have been analyzed with the aim of extracting the parameters of the K-meson structure functions. For the suppression factor of the kaon strange sea the value λs = 0.18±0.01 is obtained. The kaon longitudinal momentum fractions carried away by valence quarks and sea partons are ⟨x_NV⟩ = 0.17 (nonstrange valence), ⟨x_SV⟩ = 0.30 (strange valence) and ⟨x_S⟩ = 0.53 (sea), respectively. Estimates are obtained for the total longitudinal momentum fractions carried away by nonstrange sea quark-antiquark pairs, ⟨x_NS⟩ = 0.23±0.06, strange sea quark-antiquark pairs, ⟨x_SS⟩ = 0.02±0.01, and gluons, ⟨x_G⟩ = 0.28±0.09. 26 refs.; 4 figs.; 1 tab

  12. Modeling and parameter estimation for hydraulic system of excavator's arm

    Institute of Scientific and Technical Information of China (English)

    HE Qing-hua; HAO Peng; ZHANG Da-qing

    2008-01-01

    A retrofitted electro-hydraulic proportional system for a hydraulic excavator is introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure passing through the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load and approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to the spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for parameters such as the equivalent mass and the bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10⁻⁴ m³/(s·A) and the model is verified.

  13. Estimating the effect of nonresponse bias in a survey of hospital organizations.

    Science.gov (United States)

    Lewis, Emily F; Hardy, Maryann; Snaith, Beverly

    2013-09-01

    Nonresponse bias in survey research can result in misleading or inaccurate findings and assessment of nonresponse bias is advocated to determine response sample representativeness. Four methods of assessing nonresponse bias (analysis of known characteristics of a population, subsampling of nonresponders, wave analysis, and linear extrapolation) were applied to the results of a postal survey of U.K. hospital organizations. The purpose was to establish whether validated methods for assessing nonresponse bias at the individual level can be successfully applied to an organizational level survey. The aim of the initial survey was to investigate trends in the implementation of radiographer abnormality detection schemes, and a response rate of 63.7% (325/510) was achieved. This study identified conflicting trends in the outcomes of analysis of nonresponse bias between the different methods applied and we were unable to validate the continuum of resistance theory as applied to organizational survey data. Further work is required to ensure established nonresponse bias analysis approaches can be successfully applied to organizational survey data. Until then, it is suggested that a combination of methods should be used to enhance the rigor of survey analysis. PMID:23908382

  14. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool

    OpenAIRE

    Zi, Z.; Klipp, E.

    2006-01-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of...

  15. Estimation of uranium migration parameters in sandstone aquifers.

    Science.gov (United States)

    Malov, A I

    2016-03-01

    The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L⁻¹, and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The ¹⁴C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the ²³⁴U:²³⁸U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and ²³⁴U:²³⁸U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a⁻¹) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The ¹⁴C age of the water was estimated to be between modern and >35,000 years. The ²³⁴U-²³⁸U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10⁻⁵ to n × 10⁻⁷ a⁻¹. This finding is observed because the TDS increases in that direction from 0.2 to 9 g L⁻¹, and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼24, i.e., sorption processes are

  16. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer

    Science.gov (United States)

    Fetterly, Kenneth A.; Favazza, Christopher P.

    2016-08-01

    Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices ($d'$) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in $d'$ estimates which were as much as 2.9×  greater than expected of a quantum-limited system. Over-estimation of $d'$ was attributed to temporally variable non-stationary noise, the measured index comprising contributions from the test object ($d'_o$) and non-stationary noise ($d'_{ns}$). Given the nature of the imaging system and the experimental methods, $d'_o$ cannot be directly determined independent of $d'_{ns}$. However, methods to estimate $d'_{ns}$ independent of $d'_o$ were developed, and $d'_{ns}$ was subtracted from the experimental estimates of $d'$, providing an unbiased estimate of $d'_o$. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  17. Automated Modal Parameter Estimation of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, Palle; Brincker, Rune; Goursat, Maurice;

    In this paper, the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation...

  18. On Parameters Estimation of Lomax Distribution under General Progressive Censoring

    Directory of Open Access Journals (Sweden)

    Bander Al-Zahrani

    2013-01-01

    Full Text Available We consider the estimation of the stress-strength probability S = P(Y < X), where X and Y follow Lomax distributions, based on generally progressively censored samples. The maximum likelihood estimator and Bayes estimators are obtained using the symmetric and asymmetric balanced loss functions. Markov chain Monte Carlo (MCMC) methods are used to accomplish some complex calculations. Comparisons are made between the Bayesian and maximum likelihood estimators via a Monte Carlo simulation study.
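    To illustrate the quantity S = P(Y < X) discussed above, the sketch below draws Lomax variates and estimates S by Monte Carlo, assuming a common scale parameter for X and Y; this is an illustrative assumption only, and the paper itself derives maximum likelihood and Bayes estimators under general progressive censoring rather than simulating S.

    ```python
    import numpy as np

    def stress_strength_lomax(alpha_x, alpha_y, lam=1.0, n=200_000, seed=0):
        """Monte Carlo estimate of S = P(Y < X) for X ~ Lomax(alpha_x, lam)
        and Y ~ Lomax(alpha_y, lam) with a common scale lam."""
        rng = np.random.default_rng(seed)
        x = lam * rng.pareto(alpha_x, n)   # numpy's pareto draws Lomax (Pareto II) variates
        y = lam * rng.pareto(alpha_y, n)
        return float(np.mean(y < x))

    # Under a common scale, S has the closed form alpha_y / (alpha_x + alpha_y); compare:
    print(stress_strength_lomax(2.0, 3.0), 3.0 / (2.0 + 3.0))
    ```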

  19. Variational methods to estimate terrestrial ecosystem model parameters

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  20. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    Science.gov (United States)

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  1. Parameter estimation and determinability analysis applied to Drosophila gap gene circuits

    NARCIS (Netherlands)

    Ashyraliyev, M.; Jaeger, J.; Blom, J.G.

    2008-01-01

    Background

    Mathematical modeling of real-life processes often requires the estimation of unknown parameters. Once the parameters are found by means of optimization, it is important to assess the quality of the parameter estimates, especially if parameter values are used to draw biological conclusions.

  2. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper, the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the parameters

  3. A probability model for evaluating the bias and precision of influenza vaccine effectiveness estimates from case-control studies.

    Science.gov (United States)

    Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A

    2015-05-01

    As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
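    The point estimator at the heart of the test-negative design can be written in a few lines: vaccine effectiveness is one minus the odds ratio comparing the vaccination odds among influenza-positive cases with the odds among test-negative controls. The counts below are hypothetical; the paper's contribution is the probability model used to evaluate when this estimator is biased.

    ```python
    def ve_test_negative(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
        """VE = 1 - OR, with OR the ratio of vaccination odds among
        influenza-positive ARI patients to that among test-negative patients."""
        odds_cases = vacc_pos / unvacc_pos
        odds_controls = vacc_neg / unvacc_neg
        return 1.0 - odds_cases / odds_controls

    # Hypothetical counts of (vaccinated, unvaccinated) patients by test result:
    print(ve_test_negative(vacc_pos=40, unvacc_pos=160, vacc_neg=300, unvacc_neg=500))
    ```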

  4. Improving documentation and coding for acute organ dysfunction biases estimates of changing sepsis severity and burden: a retrospective study

    OpenAIRE

    Rhee, Chanu; Murphy, Michael V.; Li, Lingling; Platt, Richard; Klompas, Michael; ,

    2015-01-01

    Introduction Claims-based analyses report that the incidence of sepsis-associated organ dysfunction is increasing. We examined whether coding practices for acute organ dysfunction are changing over time and if so, whether this is biasing estimates of rising severe sepsis incidence and severity. Methods We assessed trends from 2005 to 2013 in the annual sensitivity and incidence of discharge ICD-9-CM codes for organ dysfunction (shock, respiratory failure, acute kidney failure, acidosis, hepat...

  5. Improving documentation and coding for acute organ dysfunction biases estimates of changing sepsis severity and burden: a retrospective study

    OpenAIRE

    Rhee, Chanu; Murphy, Michael V.; Li, Lingling; Platt, Richard; Klompas, Michael

    2015-01-01

    Introduction: Claims-based analyses report that the incidence of sepsis-associated organ dysfunction is increasing. We examined whether coding practices for acute organ dysfunction are changing over time and if so, whether this is biasing estimates of rising severe sepsis incidence and severity. Methods: We assessed trends from 2005 to 2013 in the annual sensitivity and incidence of discharge ICD-9-CM codes for organ dysfunction (shock, respiratory failure, acute kidney failure, acidosis, hep...

  6. A bootstrap method for estimating bias and variance in statistical fisheries modelling frameworks using highly disparate datasets

    OpenAIRE

    Elvarsson, B. P.; Taylor, L.; Trenkel, Verena; Kupca, V.; Stefansson, G.

    2014-01-01

    Statistical models of marine ecosystems use a variety of data sources to estimate parameters using composite or weighted likelihood functions with associated weighting issues and questions on how to obtain variance estimates. Regardless of the method used to obtain point estimates, a method is required for variance estimation. A bootstrap technique is introduced for the evaluation of uncertainty in such models, taking into account inherent spatial and temporal correlations in the datasets, wh...

  7. Improving the global SST record: estimates of biases from engine room intake SST using high quality satellite data

    Science.gov (United States)

    Carella, Giulia; Kent, Elizabeth C.; Berry, David I.; Morak-Bozzo, Simone; Merchant, Christopher J.

    2016-04-01

    Sea Surface Temperature (SST) is the marine component of the global surface temperature record, a primary metric of climate change. SST observations from ships form one of the longest instrumental records of surface marine climate. However, over the years several different methods of measuring SST have been used, each with different bias characteristics. The estimation of systematic biases in the SST record is critical for climatic decadal predictions, and uncertainties in long-term trends are expected to be dominated by uncertainties in biases introduced by changes of instrumentation and measurement practices. Although the largest systematic errors in SST observations relate to the period before about 1940, when SST measurements were mostly made using buckets, there are also issues with modern data, in particular when the SST reported is the temperature of the engine-room cooling water intake (ERI). Physical models for biases in ERI SSTs have not been developed, as the details of the individual setup on each ship are extremely important and almost always unknown. Existing studies estimate that typical ERI biases are around 0.2°C and most estimates of the mean bias fall between 0.1°C and 0.3°C, but there is some evidence of much larger differences. However, these analyses provide only broad estimates, being based only on subsamples of the data and ignoring ship-by-ship differences. Here we take advantage of a new, high-spatial-resolution, gap-filled, daily SST product for the period 1992-2010 from the European Space Agency Climate Change Initiative for SST (ESA CCI SST) dataset, version 1.1. In this study, we use a Bayesian statistical model to characterise the uncertainty in reports of ERI SST for individual ships using the ESA CCI SST as a reference. A Bayesian spatial analysis is used to model the differences of the observed SST from the ESA CCI SST for each ship as a constant offset plus a function of the climatological SST. This was found to be an important term
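    A heavily simplified stand-in for the per-ship bias model described above: the observed-minus-reference SST differences of one ship are fitted as a constant offset plus a linear term in the climatological SST, here by ordinary least squares. The study itself uses a Bayesian spatial analysis, so this sketch only mirrors the structure of the model.

    ```python
    import numpy as np

    def ship_bias_model(delta_sst, clim_sst):
        """Fit delta_SST = offset + slope * climatological_SST for one ship."""
        A = np.column_stack([np.ones_like(clim_sst), clim_sst])
        coef, *_ = np.linalg.lstsq(A, delta_sst, rcond=None)
        return coef                      # [offset, slope]

    # Hypothetical ERI-minus-reference differences (K) against climatological SST (degC):
    clim = np.array([5.0, 12.0, 18.0, 24.0, 28.0])
    delta = np.array([0.30, 0.25, 0.20, 0.10, 0.05])
    print(ship_bias_model(delta, clim))
    ```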

  8. A general method of estimating stellar astrophysical parameters from photometry

    CERN Document Server

    Belikov, A N

    2008-01-01

    Applying photometric catalogs to the study of the population of the Galaxy is hampered by the impossibility of mapping photometric colors directly onto astrophysical parameters. Most all-sky catalogs, like ASCC or 2MASS, are based upon broad-band photometric systems, and the use of broad photometric bands complicates the determination of the astrophysical parameters for individual stars. This paper presents an algorithm for determining stellar astrophysical parameters (effective temperature, gravity and metallicity) from broad-band photometry even in the presence of interstellar reddening. This method suits combinations of narrow bands as well. We applied the method of interval-cluster analysis to finding stellar astrophysical parameters based on the newest Kurucz models calibrated with the use of a compiled catalog of stellar parameters. Our new method of determining astrophysical parameters allows all possible solutions to be located in the effective temperature-gravity-metallicity space for the star and se...

  9. Effect of indium low doping in ZnO based TFTs on electrical parameters and bias stress stability

    Energy Technology Data Exchange (ETDEWEB)

    Cheremisin, Alexander B., E-mail: acher612@gmail.com; Kuznetsov, Sergey N.; Stefanovich, Genrikh B. [Physico-Technical Department, Petrozavodsk State University, Petrozavodsk 185910 (Russian Federation)

    2015-11-15

    Some applications of thin-film transistors (TFTs) need a bottom-gate architecture and an unpassivated channel backside. We propose a simple routine to fabricate indium-doped ZnO-based TFTs with satisfactory characteristics and acceptable stability against bias stress in ambient room air. To this end, a channel layer 15 nm in thickness was deposited on a cold substrate by DC reactive magnetron co-sputtering of a metal Zn-In target. It is demonstrated that increasing the In concentration in the ZnO matrix up to 5% leads to a negative threshold voltage (V_T) shift, an increase of the field-effect mobility (μ) and a decrease of the subthreshold swing (SS). When the dopant concentration reaches the upper level of 5%, the best TFT parameters are achieved: V_T = 3.6 V, μ = 15.2 cm²/V·s, SS = 0.5 V/dec. The TFTs operate in enhancement mode, exhibiting a high turn-on/turn-off current ratio of more than 10⁶. It is shown that oxidative post-fabrication annealing at 250 °C in pure oxygen and subsequent ageing in dry air for several hours provide highly stable operational characteristics under negative and positive bias stresses despite the open channel backside. A possible cause of this effect is discussed.

  10. Estimating atmospheric parameters and reducing noise for multispectral imaging

    Science.gov (United States)

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.

  11. The influence of contrasting suspended particulate matter transport regimes on the bias and precision of flux estimates.

    Science.gov (United States)

    Moatar, Florentina; Person, Gwenaelle; Meybeck, Michel; Coynel, Alexandra; Etcheber, Henri; Crouzet, Philippe

    2006-11-01

    A large database (507 station-years) of daily suspended particulate matter (SPM) concentration and discharge data from 36 stations on river basins ranging from 600 km² to 600,000 km² in size (USA and Europe) was collected to assess the effects of SPM transport regime on bias and imprecision of flux estimates when using infrequent surveys and the discharge-weighted mean concentration method. By extracting individual SPM concentrations and corresponding discharge values from the database, sampling frequencies from 12 to 200 per year were simulated using Monte Carlo techniques. The resulting estimates of yearly SPM fluxes were compared to reference fluxes derived from the complete database. For each station and given frequency, bias was measured by the median of relative errors between estimated and reference fluxes, and imprecision by the difference between the upper and lower deciles of relative errors. Results show that the SPM transport regime of rivers affects the bias and imprecision of fluxes estimated by the discharge-weighted mean concentration method for given sampling frequencies (e.g. weekly, bimonthly, monthly). The percentage of annual SPM flux discharged in 2% of time (Ms2) is a robust indicator of SPM transport regime directly related to bias and imprecision. These errors are linked to the Ms2 indicator for various sampling frequencies within a specific nomograph. For instance, based on a deviation of simulated flux estimates from reference fluxes lower than ±20% and a bias lower than 1% or 2%, the required sampling intervals are less than 3 days for rivers with Ms2 greater than 40% (basin size < 10,000 km²), between 3 and 5 days for rivers with Ms2 between 30 and 40% (basin size between 10,000 and 50,000 km²), between 5 and 12 days for Ms2 from 20% to 30% (basin size between 50,000 and 200,000 km²), and 12-20 days for Ms2 in the 15-20% range (basin size between 200,000 and 500,000 km²). PMID:16949650
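    The discharge-weighted mean concentration method evaluated above can be sketched as follows: a discharge-weighted mean concentration is formed from the infrequent survey samples and multiplied by the annual mean discharge from the continuous record. All numbers below are hypothetical and the constant simply converts units.

    ```python
    import numpy as np

    def discharge_weighted_flux(c_sampled, q_sampled, q_daily, seconds_per_year=365 * 86400):
        """Annual SPM flux: C* = sum(Ci*Qi)/sum(Qi) from the survey, times the
        annual mean discharge (C in g/m3 == mg/L, Q in m3/s, result in g/yr)."""
        c_star = np.sum(c_sampled * q_sampled) / np.sum(q_sampled)
        return c_star * np.mean(q_daily) * seconds_per_year

    # Hypothetical monthly survey (mg/L, m3/s) and a constant daily discharge record:
    c = np.array([20., 35., 150., 60., 25., 18., 12., 10., 22., 80., 45., 30.])
    q = np.array([300., 420., 900., 650., 380., 250., 180., 150., 280., 700., 500., 350.])
    q_daily = np.full(365, 400.0)
    print(f"annual flux ~ {discharge_weighted_flux(c, q, q_daily) / 1e9:.0f} kt/yr")
    ```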

  12. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    Directory of Open Access Journals (Sweden)

    Shuguo Pan

    2015-07-01

    Full Text Available Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can
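
    As a rough illustration of the coupling idea only (not the authors' algorithm), the sketch below lumps the satellite clock bias and the line-of-sight projection of the broadcast-orbit error into a single per-satellite term estimated from the pseudorange residuals of the regional CORS stations; atmospheric delays, carrier-phase refinement and observation weighting, which the paper handles, are ignored here, and all numbers are toy values.

      import numpy as np

      C = 299792458.0  # speed of light, m/s

      def coupled_clock_correction(p_obs, rho_computed, recv_clock_m):
          # Sketch of the coupled clock-bias/orbit-error term for one satellite.
          # p_obs        : pseudoranges from the n regional CORS stations (m)
          # rho_computed : geometric ranges computed from the broadcast orbit (m)
          # recv_clock_m : receiver clock offsets of those stations (m)
          # The part of the broadcast-orbit error projecting onto the line of sight
          # is absorbed, together with the satellite clock bias, into one term
          # that would be broadcast to the rovers.
          residual = p_obs - rho_computed - recv_clock_m
          return np.mean(residual) / C   # coupled satellite clock term (s)

      # Toy example: 5 stations observing one satellite
      p = np.array([22e6 + 120.3, 22.4e6 + 121.1, 21.9e6 + 119.8, 22.2e6 + 120.6, 22.1e6 + 120.1])
      rho = np.array([22e6, 22.4e6, 21.9e6, 22.2e6, 22.1e6])
      dt_rx = np.zeros(5)
      print(coupled_clock_correction(p, rho, dt_rx))  # about 4e-7 s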

  13. Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

    Directory of Open Access Journals (Sweden)

    Xiu Kan

    2012-01-01

    Full Text Available The asymptotic parameter estimation is investigated for a class of linear stochastic systems with unknown parameter θ:dXt=(θα(t)+β(t)Xt)dt+σ(t)dWt. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter θ based on Bayesian analysis. Then, some sufficient conditions on coefficients are given to analyze the asymptotic convergence of the estimator. Finally, the strong consistent property of the estimator is discussed by comparison theorem.

  14. Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

    OpenAIRE

    Xiu Kan; Huisheng Shu; Yan Che

    2012-01-01

    The asymptotic parameter estimation is investigated for a class of linear stochastic systems with unknown parameter θ:dXt=(θα(t)+β(t)Xt)dt+σ(t)dWt. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter θ based on Bayesian analysis. Then, some sufficient conditions on coefficients are given to analyze the asymptotic convergence of the estimator. Finally, the strong consistent property of the estimator is discussed by comparison theorem.
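
    A minimal discrete-time sketch of this estimator is given below: the model dXt=(θα(t)+β(t)Xt)dt+σ(t)dWt is simulated by Euler-Maruyama, and θ, treated as a constant state observed through the increments of X, is tracked with the Kalman-Bucy (conjugate Gaussian) update. The coefficient functions, prior and step size are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Model: dX_t = (theta*alpha(t) + beta(t)*X_t) dt + sigma(t) dW_t
      alpha = lambda t: 1.0 + 0.5 * np.cos(t)
      beta = lambda t: -0.5
      sigma = lambda t: 0.3

      theta_true, dt, n = 2.0, 1e-3, 100_000
      X = 0.0
      m, g = 0.0, 10.0        # Gaussian prior mean and variance for theta

      for k in range(n):
          t = k * dt
          dW = rng.normal(0.0, np.sqrt(dt))
          dX = (theta_true * alpha(t) + beta(t) * X) * dt + sigma(t) * dW

          # Kalman-Bucy update for the constant "state" theta observed through dX:
          # the innovation is dX minus the predicted drift increment.
          s2 = sigma(t) ** 2
          g = 1.0 / (1.0 / g + alpha(t) ** 2 / s2 * dt)                  # posterior variance
          m = m + g * alpha(t) / s2 * (dX - (m * alpha(t) + beta(t) * X) * dt)
          X += dX

      print(f"theta_hat = {m:.3f} (true {theta_true})")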

  15. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    OpenAIRE

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2013-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afford...

  16. Robust Speed and Parameter Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.; Vadstrup, P.

    1995-01-01

    This paper presents a Model Reference Adaptive System (MRAS) for the estimation of the induction motor speed, based on measured terminal voltages and currents.

  17. Improved Parameter Estimation for First-Order Markov Process

    Directory of Open Access Journals (Sweden)

    Deepak Batra

    2009-01-01

    Full Text Available This correspondence presents a linear transformation, which is used to estimate the correlation coefficient of a first-order Markov process. It outperforms the zero-forcing (ZF), minimum mean-squared error (MMSE), and whitened least-squares (WTLS) estimators by controlling output noise variance at the cost of increased computational complexity.

  18. Moving Ship SAR Imaging Based on Parameter Estimation

    OpenAIRE

    Yun Yajiao; Qi Xiangyang; Li Ning

    2016-01-01

    The Doppler parameters of moving targets affect conventional Synthetic Aperture Radar (SAR) imaging. In this study, the relation between the target motion and the Doppler parameters is established. Using improved versions of popular techniques, a set of moving-ship SAR imaging processes is proposed to obtain a focused and correctly located image. Simulations and experimental data are used to verify the method.

  19. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamic excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...

  20. Single-Channel Blind Estimation of Reverberation Parameters

    DEFF Research Database (Denmark)

    Doire, C.S.J.; Brookes, M. D.; Naylor, P. A.;

    2015-01-01

    The reverberation of an acoustic channel can be characterised by two frequency-dependent parameters: the reverberation time and the direct-to-reverberant energy ratio. This paper presents an algorithm for blindly determining these parameters from a single-channel speech signal. The algorithm uses...

  1. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

    Full Text Available Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
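
    A limited-sampling strategy model of this kind is essentially a multiple linear regression of the reference AUC on a few timed concentrations, validated by a leave-one-out ("jack-knife") procedure. The sketch below uses hypothetical synthetic concentrations at the 1.5, 4 and 24 h points; the coefficients, noise level and error metrics are illustrative, not those reported in the study.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical data: for 12 subjects, MAA concentrations at the 1.5, 4 and
      # 24 h sampling points and the "reference" AUC computed from all samples.
      n = 12
      c15 = rng.uniform(8, 16, n)
      c4 = rng.uniform(6, 14, n)
      c24 = rng.uniform(0.5, 3, n)
      auc_ref = 1.2 * c15 + 4.0 * c4 + 12.0 * c24 + rng.normal(0, 2, n)

      X = np.column_stack([np.ones(n), c15, c4, c24])

      # Jackknife (leave-one-out) validation of the limited-sampling model
      pred = np.empty(n)
      for i in range(n):
          keep = np.arange(n) != i
          coef, *_ = np.linalg.lstsq(X[keep], auc_ref[keep], rcond=None)
          pred[i] = X[i] @ coef

      rel_err = 100.0 * (pred - auc_ref) / auc_ref
      r2 = 1 - np.sum((pred - auc_ref) ** 2) / np.sum((auc_ref - auc_ref.mean()) ** 2)
      print("bias (%):", rel_err.mean())
      print("precision (mean |err|, %):", np.abs(rel_err).mean())
      print("R^2:", r2)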

  2. Astrophysical Prior Information and Gravitational-wave Parameter Estimation

    CERN Document Server

    Pankow, Chris; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki

    2016-01-01

    The detection of electromagnetic counterparts to gravitational waves has great promise for the investigation of many scientific questions. It has long been hoped that in addition to providing extra, non-gravitational information about the sources of these signals, the detection of an electromagnetic signal in conjunction with a gravitational wave could aid in the analysis of the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters, such as mass and spin, that are intrinsic to the binary. In this paper, we investigate this issue by assuming a perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. However, we find only modest improvements in a few parameters --- namely the primary component's spin --- and conclude that, even in the best case, the use of additional ...

  3. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second is achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, two of which are very important: (a) high-fidelity generating-unit parameter estimation and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

  4. Examination of the Parameter Estimate Bias When Violating the Orthogonality Assumption of the Bifactor Model

    Science.gov (United States)

    Zheng, Chunmei

    2013-01-01

    Educational and psychological constructs are normally measured by multifaceted dimensions. The measured construct is defined and measured by a set of related subdomains. A bifactor model can accurately describe such data with both the measured construct and the related subdomains. However, a limitation of the bifactor model is the orthogonality…

  5. Biased Parameter Estimation for LDA

    Institute of Scientific and Technical Information of China (English)

    袁伯秋; 周一民; 李林

    2010-01-01

    Models based on latent topics, such as LDA (Latent Dirichlet Allocation), are increasingly used for processing discrete data. However, LDA uses the Dirichlet distribution as the distribution over latent topics, which cannot represent the relationships among topics well. Common improvements express the relations among topics through a DAG (Directed Acyclic Graph) or through other distribution functions such as the log-normal distribution. This paper instead uses a biased parameter estimation method: by taking into account the overlap of terms among topics during topic mixing, the term distribution within each topic is adjusted, which ultimately improves the performance of the LDA model. After reviewing some background material, the paper focuses on the biased parameter estimation and a simplified computation method. Finally, experiments applying the LDA model to information retrieval verify the effectiveness of this improvement, and rules for choosing the model parameters are analyzed preliminarily.

  6. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Science.gov (United States)

    Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul

    2015-01-01

    Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals. PMID:25909636
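
    A minimal sketch of the re-construction step is given below: a published raw count is divided by camera-derived attendance proportions for the survey date, and the spread of those proportions across camera sites and years is propagated by bootstrap resampling into an interval for the revised abundance. The attendance values, count and date are hypothetical, and the actual study models phenology and attendance in far more detail.

      import numpy as np

      rng = np.random.default_rng(3)

      def reconstruct_abundance(raw_count, count_doy, camera_attendance, n_draws=10_000):
          # camera_attendance: dict mapping day-of-year -> array of observed proportions
          # of the breeding population present on that day (one value per camera-year).
          # The adjustment factor is 1 / attendance; its spread across camera-years
          # carries the phenological uncertainty into the revised estimate.
          props = np.asarray(camera_attendance[count_doy])
          draws = rng.choice(props, size=n_draws, replace=True)   # bootstrap over camera-years
          revised = raw_count / draws
          return np.median(revised), np.percentile(revised, [2.5, 97.5])

      # Toy example: attendance proportions observed by cameras around day 335
      cameras = {335: np.array([0.72, 0.80, 0.68, 0.75, 0.83, 0.70])}
      print(reconstruct_abundance(5200, 335, cameras))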

  7. Nearly best linear estimates of logistic parameters based on complete ordered statistics

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Deals with the determination of the nearly best linear estimates of the location and scale parameters of a logistic population, when both parameters are unknown, by introducing Blom's semi-empirical (α, β)-correction into the asymptotic mean and covariance formulae. Complete ordered samples are taken into consideration and various nearly best linear estimates are established. The high efficiency of these estimators relative to the best linear unbiased estimators (BLUEs) and other linear estimators makes them useful in practice.

  8. The pulse-pair algorithm as a robust estimator of turbulent weather spectral parameters using airborne pulse Doppler radar

    Science.gov (United States)

    Baxa, Ernest G., Jr.; Lee, Jonggil

    1991-01-01

    The pulse pair method for spectrum parameter estimation is commonly used in pulse Doppler weather radar signal processing since it is economical to implement and can be shown to be a maximum likelihood estimator. With the use of airborne weather radar for windshear detection, the turbulent weather and strong ground clutter return spectrum differs from that assumed in its derivation, so the performance robustness of the pulse pair technique must be understood. Here, the effect of radar system pulse to pulse phase jitter and signal spectrum skew on the pulse pair algorithm performance is discussed. Phase jitter effect may be significant when the weather return signal to clutter ratio is very low and clutter rejection filtering is attempted. The analysis can be used to develop design specifications for airborne radar system phase stability. It is also shown that the weather return spectrum skew can cause a significant bias in the pulse pair mean windspeed estimates, and that the poly pulse pair algorithm can reduce this bias. It is suggested that use of a spectrum mode estimator may be more appropriate in characterizing the windspeed within a radar range resolution cell for detection of hazardous windspeed gradients.
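
    For reference, the standard textbook pulse-pair estimators referred to above are computed from the lag-0 and lag-1 autocorrelations of the complex I/Q samples in a range gate; the sketch below uses this common form (sign convention and width formula vary between implementations) with illustrative radar parameters, and is not the specific airborne processor analyzed in the paper.

      import numpy as np

      def pulse_pair(iq, prt, wavelength):
          # Classic pulse-pair estimates from one range gate's complex I/Q samples.
          # iq: complex array of M pulses; prt: pulse repetition time (s);
          # wavelength: radar wavelength (m).
          r0 = np.mean(np.abs(iq) ** 2)                   # lag-0 autocorrelation (power)
          r1 = np.mean(np.conj(iq[:-1]) * iq[1:])         # lag-1 autocorrelation
          v_mean = -wavelength / (4 * np.pi * prt) * np.angle(r1)
          ratio = np.clip(r0 / np.abs(r1), 1.0, None)     # guard against noisy |r1| > r0
          width = wavelength / (2 * np.pi * prt * np.sqrt(2.0)) * np.sqrt(np.log(ratio))
          return v_mean, width

      # Toy check: a pure Doppler shift of 5 m/s at X band (3.2 cm), 1 ms PRT
      lam, prt = 0.032, 1e-3
      m = np.arange(64)
      f_d = 2 * 5.0 / lam                                 # Doppler frequency (Hz)
      iq = np.exp(-2j * np.pi * f_d * m * prt)            # negative phase ramp convention
      print(pulse_pair(iq, prt, lam))                     # about (5.0, 0.0)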

  9. Robustifying Biased Estimation in Linear Model

    Institute of Scientific and Technical Information of China (English)

    段清堂; 归庆明

    2000-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new estimators, robust general shrunken estimators, is proposed by grafting robust estimation techniques onto biased estimators, and their statistical properties are discussed. By appropriate choices of the shrinking parameter matrix, many useful and important estimators are obtained, for example the robust ridge estimator, the robust principal components estimator, the robust combined principal components estimator, the robust single-parameter principal components estimator and the robust root-square estimator; a computational method based on the principle of equivalent weights is also established. Finally, a numerical example illustrates that these new estimators can not only effectively overcome the difficulty caused by multicollinearity but also resist the influence of outliers.
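
    Not the authors' equivalent-weights construction, but a generic sketch of the same idea: a ridge-type shrinkage term handles the ill-conditioned design matrix while iteratively reweighted least squares with Huber weights downweights outliers. The penalty, tuning constant and data are illustrative assumptions.

      import numpy as np

      def huber_weights(residuals, scale, c=1.345):
          # Huber weights: 1 inside the threshold, c/|u| outside.
          u = np.abs(residuals) / max(scale, 1e-12)
          w = np.ones_like(u)
          mask = u > c
          w[mask] = c / u[mask]
          return w

      def robust_ridge(X, y, k=1.0, n_iter=20):
          # Sketch of a robust shrunken (ridge-type) estimator: IRLS with Huber
          # weights against outliers plus a ridge penalty k*I against multicollinearity.
          n, p = X.shape
          beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)  # ridge start
          for _ in range(n_iter):
              r = y - X @ beta
              s = 1.4826 * np.median(np.abs(r - np.median(r)))      # robust scale (MAD)
              w = huber_weights(r, s)
              W = np.diag(w)
              beta = np.linalg.solve(X.T @ W @ X + k * np.eye(p), X.T @ W @ y)
          return beta

      # Example: near-collinear design plus one gross outlier in y
      rng = np.random.default_rng(4)
      x1 = rng.normal(size=50)
      X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=50)])
      y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
      y[0] += 20.0                                                  # gross error
      print(robust_ridge(X, y, k=0.5))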

  10. Estimation of poroelastic parameters from seismograms using Biot theory

    CERN Document Server

    De Barros, Louis

    2010-01-01

    We investigate the possibility to extract information contained in seismic waveforms propagating in fluid-filled porous media by developing and using a full waveform inversion procedure valid for layered structures. To reach this objective, we first solve the forward problem by implementing the Biot theory in a reflectivity-type simulation program. We then study the sensitivity of the seismic response of stratified media to the poroelastic parameters. Our numerical tests indicate that the porosity and consolidation parameter are the most sensitive parameters in forward and inverse modeling, whereas the permeability has only a very limited influence on the seismic response. Next, the analytical expressions of the sensitivity operators are introduced in a generalized least-square inversion algorithm based on an iterative modeling of the seismic waveforms. The application of this inversion procedure to synthetic data shows that the porosity as well as the fluid and solid parameters can be correctly reconstructed...
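
    The generalized least-squares iteration mentioned above has the familiar quasi-Newton form sketched below. The Biot reflectivity forward modelling and its analytical sensitivities are well beyond a snippet, so a linear stand-in forward operator is used purely to illustrate the update; the prior, covariances and parameter values are illustrative assumptions.

      import numpy as np

      def gls_update(m, d_obs, forward, sensitivity, cd_inv, cm_inv, m_prior):
          # One generalized least-squares (Gauss-Newton) step:
          # m_new = m + (G^T Cd^-1 G + Cm^-1)^-1 [G^T Cd^-1 (d - g(m)) - Cm^-1 (m - m_prior)]
          # where g is the forward modelling operator and G its sensitivity matrix.
          G = sensitivity(m)                      # (n_data, n_params)
          r = d_obs - forward(m)                  # data residual
          H = G.T @ cd_inv @ G + cm_inv           # approximate Hessian
          grad = G.T @ cd_inv @ r - cm_inv @ (m - m_prior)
          return m + np.linalg.solve(H, grad)

      # Toy illustration with a linear stand-in for the poroelastic forward problem
      rng = np.random.default_rng(5)
      G_true = rng.normal(size=(30, 3))
      m_true = np.array([0.25, 2.0, 0.5])         # illustrative parameter vector
      d = G_true @ m_true + 0.01 * rng.normal(size=30)
      m = np.zeros(3)
      for _ in range(5):
          m = gls_update(m, d, lambda mm: G_true @ mm, lambda mm: G_true,
                         np.eye(30) / 0.01**2, np.eye(3) * 1e-6, np.zeros(3))
      print(m)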

  11. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling) to reduce the probability that individuals will
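
    The core of the simulation argument can be reproduced with a much simpler sketch: yearly count totals are drawn as Poisson variables whose mean is population size times mean stopover duration, a log-linear regression recovers the trend, and the bias is averaged over replicates. The population, stopover and trend values are illustrative, and the paper's daily survival process is collapsed here into a yearly mean stopover duration for brevity.

      import numpy as np

      rng = np.random.default_rng(6)

      def simulate_trend_bias(years=20, r_true=-0.02, stopover_slope=0.0, n0=1000.0, n_rep=500):
          # Yearly count totals are proportional to population size times mean
          # stopover duration (individuals staying longer are recounted more often).
          t = np.arange(years)
          biases = []
          for _ in range(n_rep):
              pop = n0 * np.exp(r_true * t)
              stopover = 3.0 * (1.0 + stopover_slope * t)          # mean days at the site
              counts = rng.poisson(pop * stopover)
              slope = np.polyfit(t, np.log(counts), 1)[0]          # estimated yearly trend
              biases.append(slope - r_true)
          return float(np.mean(biases))

      print("constant stopover  :", simulate_trend_bias(stopover_slope=0.0))   # near zero bias
      print("increasing stopover:", simulate_trend_bias(stopover_slope=0.02))  # positive bias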

  12. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    OpenAIRE

    Zhang Zhang; Li Jun; Cui Peng; Ding Feng; Li Ang; Townsend Jeffrey P; Yu Jun

    2012-01-01

    Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not...

  13. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity.

    Directory of Open Access Journals (Sweden)

    Nicolas Heslot

    Full Text Available Genome-wide molecular markers are often being used to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorphisms in the population under study. Ascertainment bias arises when marker data is not obtained from a random sample of the polymorphisms in the population of interest. Genotyping-by-sequencing (GBS) is rapidly emerging as a low-cost genotyping platform, even for the large, complex, and polyploid wheat (Triticum aestivum L.) genome. With GBS, marker discovery and genotyping occur simultaneously, resulting in minimal ascertainment bias. The previous platform of choice for whole-genome genotyping in many species such as wheat was DArT (Diversity Array Technology) and has formed the basis of most of our knowledge about cereals genetic diversity. This study compared GBS and DArT marker platforms for measuring genetic diversity and genomic selection (GS) accuracy in elite U.S. soft winter wheat. From a set of 365 breeding lines, 38,412 single nucleotide polymorphism GBS markers were discovered and genotyped. The GBS SNPs gave a higher GS accuracy than 1,544 DArT markers on the same lines, despite 43.9% missing data. Using a bootstrap approach, we observed significantly more clustering of markers and ascertainment bias with DArT relative to GBS. The minor allele frequency distribution of GBS markers had a deficit of rare variants compared to DArT markers. Despite the ascertainment bias of the DArT markers, GS accuracy for three traits out of four was not significantly different when an equal number of markers were used for each platform. This suggests that the gain in accuracy observed using GBS compared to DArT markers was mainly due to a large increase in the number of markers available for the analysis.

  14. Bayesian Shrinkage Estimation of Quantitative Trait Loci Parameters

    OpenAIRE

    Wang, Hui; Zhang, Yuan-Ming; Li, Xinmin; Masinde, Godfred L.; Mohan, Subburaman; Baylink, David J.; Xu, Shizhong

    2005-01-01

    Mapping multiple QTL is a typical problem of variable selection in an oversaturated model because the potential number of QTL can be substantially larger than the sample size. Currently, model selection is still the most effective approach to mapping multiple QTL, although further research is needed. An alternative approach to analyzing an oversaturated model is the shrinkage estimation in which all candidate variables are included in the model but their estimated effects are forced to shrink...

  15. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    Science.gov (United States)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
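
    The component-by-component idea is easy to illustrate on the Rössler system: the parameters b and c appear only in the z equation, so they can be estimated in a first stage from the known x and z series alone (a would follow analogously from the y equation). The sketch below uses SciPy's differential evolution as a stand-in for the paper's evolutionary algorithm and a finite-difference derivative of z; the integration settings, bounds and sample length are illustrative choices.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution

      # Rossler system: dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c)
      A_TRUE, B_TRUE, C_TRUE = 0.2, 0.2, 5.7

      def rossler(t, s, a, b, c):
          x, y, z = s
          return [-y - z, x + a * y, b + z * (x - c)]

      # Generate the "known" time series for all components
      t_eval = np.linspace(0, 50, 5001)
      sol = solve_ivp(rossler, (0, 50), [1.0, 1.0, 1.0], t_eval=t_eval,
                      args=(A_TRUE, B_TRUE, C_TRUE), rtol=1e-9, atol=1e-9)
      x, y, z = sol.y
      dt = t_eval[1] - t_eval[0]
      dz = np.gradient(z, dt)                 # finite-difference derivative of z only

      # Stage 1: estimate (b, c), which appear only in the z equation,
      # using the time series of x and z only (the "single component" idea).
      def cost(p):
          b, c = p
          return np.mean((dz - (b + z * (x - c))) ** 2)

      result = differential_evolution(cost, bounds=[(0.0, 2.0), (1.0, 10.0)], seed=0)
      print(result.x)                         # close to (0.2, 5.7)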

  16. Retrospective forecast of ETAS model with daily parameters estimate

    Science.gov (United States)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type of Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, because the model parameters were kept fixed during the test. Moreover, the learning catalog contained no event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, so the learned parameters were not suitable to describe the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test period.
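
    For context, the temporal ETAS conditional intensity that such models re-fit each day has the standard form sketched below; fit_etas is a hypothetical placeholder for the daily maximum-likelihood re-estimation described in the abstract, and the toy catalog and parameter values are illustrative.

      import numpy as np

      def etas_intensity(t, event_times, event_mags, params, m0=3.0):
          # ETAS conditional intensity:
          # lambda(t) = mu + sum_{ti < t} K * exp(alpha*(Mi - m0)) / (t - ti + c)^p
          mu, K, alpha, c, p = params
          past = event_times < t
          dt = t - event_times[past]
          return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) / (dt + c) ** p)

      def expected_count(t0, t1, event_times, event_mags, params, n_grid=200):
          # Expected number of events in [t0, t1]: trapezoidal integration of lambda.
          grid = np.linspace(t0, t1, n_grid)
          lam = np.array([etas_intensity(t, event_times, event_mags, params) for t in grid])
          return float(np.sum(0.5 * (lam[1:] + lam[:-1]) * np.diff(grid)))

      # Daily re-estimation loop (sketch only): parameters are re-fit each day on the
      # catalog available so far, then used for the next 1-day forecast.
      # for day in range(test_start, test_end):
      #     params = fit_etas(times[times < day], mags[times < day])   # hypothetical fitter
      #     n_forecast = expected_count(day, day + 1, times, mags, params)

      # Toy evaluation with fixed, illustrative parameters
      times = np.array([0.0, 0.5, 1.2, 3.4])
      mags = np.array([5.9, 4.2, 4.0, 4.5])
      params = (0.2, 0.05, 1.2, 0.01, 1.1)      # mu, K, alpha, c, p (illustrative)
      print(expected_count(4.0, 5.0, times, mags, params))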

  17. Groundtruthing next-gen sequencing for microbial ecology-biases and errors in community structure estimates from PCR amplicon pyrosequencing.

    Directory of Open Access Journals (Sweden)

    Charles K Lee

    Full Text Available Analysis of microbial communities by high-throughput pyrosequencing of SSU rRNA gene PCR amplicons has transformed microbial ecology research and led to the observation that many communities contain a diverse assortment of rare taxa, a phenomenon termed the Rare Biosphere. Multiple studies have investigated the effect of pyrosequencing read quality on operational taxonomic unit (OTU) richness for contrived communities, yet there is limited information on the fidelity of community structure estimates obtained through this approach. Given that PCR biases are widely recognized, and further unknown biases may arise from the sequencing process itself, a priori assumptions about the neutrality of the data generation process are at best unvalidated. Furthermore, post-sequencing quality control algorithms have not been explicitly evaluated for the accuracy of recovered representative sequences and its impact on downstream analyses, reducing useful discussion on pyrosequencing reads to their diversity and abundances. Here we report on community structures and sequences recovered for in vitro-simulated communities consisting of twenty 16S rRNA gene clones tiered at known proportions. PCR amplicon libraries of the V3-V4 and V6 hypervariable regions from the in vitro-simulated communities were sequenced using the Roche 454 GS FLX Titanium platform. Commonly used quality control protocols resulted in the formation of OTUs with >1% abundance composed entirely of erroneous sequences, while over-aggressive clustering approaches obfuscated real, expected OTUs. The pyrosequencing process itself did not appear to impose significant biases on overall community structure estimates, although the detection limit for rare taxa may be affected by PCR amplicon size and quality control approach employed. Meanwhile, PCR biases associated with the initial amplicon generation may impose greater distortions in the observed community structure.

  18. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    Science.gov (United States)

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  19. Estimated genetic parameters for carcass traits of Brahman cattle.

    Science.gov (United States)

    Riley, D G; Chase, C C; Hammond, A C; West, R L; Johnson, D D; Olson, T A; Coleman, S W

    2002-04-01

    Heritabilities and genetic and phenotypic correlations were estimated from feedlot and carcass data collected from Brahman calves (n = 504) in central Florida from 1996 to 2000. Data were analyzed using animal models in MTDFREML. Models included contemporary group (n = 44; groups of calves of the same sex, fed in the same pen, slaughtered on the same day) as a fixed effect and calf age in days at slaughter as a continuous variable. Estimated feedlot trait heritabilities were 0.64, 0.67, 0.47, and 0.26 for ADG, hip height at slaughter, slaughter weight, and shrink. The USDA yield grade estimated heritability was 0.71; heritabilities for component traits of yield grade, including hot carcass weight, adjusted 12th rib backfat thickness, loin muscle area, and percentage kidney, pelvic, and heart fat were 0.55, 0.63, 0.44, and 0.46, respectively. Heritability estimates for dressing percentage, marbling score, USDA quality grade, cutability, retail yield, and carcass hump height were 0.77, 0.44, 0.47, 0.71, 0.5, and 0.54, respectively. Estimated genetic correlations of adjusted 12th rib backfat thickness with ADG, slaughter weight, marbling score, percentage kidney, pelvic, and heart fat, and yield grade (0.49, 0.46, 0.56, 0.63, and 0.93, respectively) were generally larger than most literature estimates. Estimated genetic correlations of marbling score with ADG, percentage shrink, loin muscle area, percentage kidney, pelvic, and heart fat, USDA yield grade, cutability, retail yield, and carcass hump height were 0.28, 0.49, 0.44, 0.27, 0.45, -0.43, 0.27, and 0.43, respectively. Results indicate that sufficient genetic variation exists within the Brahman breed for design and implementation of effective selection programs for important carcass quality and yield traits. PMID:12008662
