WorldWideScience

Sample records for biased parameter estimates

  1. Adaptive Unified Biased Estimators of Parameters in Linear Model

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Li-xing Zhu

    2004-01-01

To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
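The trade-off this abstract describes, a biased estimator beating least squares under multicollinearity, can be illustrated with a small simulation. This is a generic sketch, not the paper's unified framework; the design, the ridge penalty k = 1, and all names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
beta = np.ones(p)

# Nearly collinear design: columns are noisy copies of one base column.
base = rng.normal(size=(n, 1))
X = base + 0.05 * rng.normal(size=(n, p))

def estimate(X, y, k=0.0):
    """Ridge estimator; k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Monte Carlo comparison of mean squared error over repeated noise draws.
mse_ols, mse_ridge = 0.0, 0.0
trials = 500
for _ in range(trials):
    y = X @ beta + rng.normal(size=n)
    mse_ols += np.sum((estimate(X, y) - beta) ** 2) / trials
    mse_ridge += np.sum((estimate(X, y, k=1.0) - beta) ** 2) / trials

print(f"OLS MSE:   {mse_ols:.2f}")
print(f"Ridge MSE: {mse_ridge:.2f}")
```

Under this ill-conditioned design the ridge estimator trades a little bias for a large variance reduction and wins in mean squared error.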

  2. Biases on cosmological parameter estimators from galaxy cluster number counts

    CERN Document Server

    Penna-Lima, M; Wuensche, C A

    2013-01-01

The abundance of galaxy clusters is becoming a standard cosmological probe. In particular, Sunyaev-Zel'dovich (SZ) surveys are promising probes of the Dark Energy (DE) equation of state (eqos), given their ability to find distant clusters and provide estimates for their mass. However, current SZ catalogs contain tens to hundreds of objects. In this case, it is not guaranteed that maximum likelihood (ML) estimators of cosmological parameters are unbiased. In this work we study estimators from cluster abundance for some cosmological parameters. We derive an unbinned likelihood for cluster abundance, showing that it is equivalent to the one commonly used in the literature. We use the Monte Carlo (MC) approach to determine the presence of bias using this likelihood, and its behavior with both area and depth of the survey and the number of cosmological parameters fitted simultaneously. Assuming perfect knowledge of mass and redshift, we find that some estimators have non-negligible biases. For example, the bias ...
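The abstract's central point, that ML estimators need not be unbiased when a catalog contains only tens of objects, holds even in textbook settings. A minimal illustration (not the paper's cluster-abundance likelihood): the ML estimator of an exponential rate is biased upward for small n, and a Monte Carlo run recovers the known finite-sample bias.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.0
n = 10          # small sample, like a catalog of tens of objects
trials = 20000

# ML estimator of an exponential rate: lambda_hat = 1 / sample mean.
estimates = np.array([1.0 / rng.exponential(1.0 / lam_true, n).mean()
                      for _ in range(trials)])

bias = estimates.mean() - lam_true
# Exact small-sample result: E[lambda_hat] = n * lambda / (n - 1),
# so the bias is lambda / (n - 1).
print(f"MC bias:          {bias:.3f}")
print(f"theoretical bias: {lam_true / (n - 1):.3f}")
```

The bias shrinks like 1/n, which is why it matters for catalogs of tens of objects but not thousands.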

  3. Stealth Bias in Gravitational-Wave Parameter Estimation

    CERN Document Server

    Vallisneri, Michele

    2013-01-01

Inspiraling binaries of compact objects are primary targets for current and future gravitational-wave observatories. Waveforms computed in General Relativity are used to search for these sources, and will probably be used to extract source parameters from detected signals. However, if a different theory of gravity happens to be correct in the strong-field regime, source-parameter estimation may be affected by a fundamental bias: that is, by systematic errors induced by the use of waveforms derived in the incorrect theory. If the deviations from General Relativity are not large enough to be detectable on their own and yet these systematic errors remain significant (i.e., larger than the statistical uncertainties in parameter estimation), fundamental bias cannot be corrected in a single observation, and becomes stealth bias. In this article we develop a scheme to determine in which cases stealth bias could be present in gravitational-wave astronomy. For a given observation, the answer depends on the detecti...

  4. Asymptotic Bias for GMM and GEL Estimators with Estimated Nuisance Parameter

    OpenAIRE

    Newey, Whitney K.; Joaquim J. S. Ramalho; Smith, Richard J.

    2003-01-01

This paper studies and compares the asymptotic bias of GMM and generalized empirical likelihood (GEL) estimators in the presence of estimated nuisance parameters. We consider cases in which the nuisance parameter is estimated from independent and identical samples. A simulation experiment is conducted for covariance structure models. Empirical likelihood offers much reduced mean and median bias, root mean squared error and mean absolute error, as compared with two-step GMM and other GEL meth...

  5. BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Xu Benlian; Wang Zhiquan

    2007-01-01

According to the biased angles provided by the bistatic sensors, the necessary condition of observability and Cramér-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and the dual Kalman filter method can estimate the state variable and biased angles simultaneously. Furthermore, the estimated results can achieve their Cramér-Rao lower bounds.

  6. Bootstrap Co-integration Rank Testing: The Effect of Bias-Correcting Parameter Estimates

    OpenAIRE

    Cavaliere, Giuseppe; Taylor, A. M. Robert; Trenkler, Carsten

    2013-01-01

In this paper we investigate bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of the co-integration rank tests of Johansen (1996). In order to do so we adapt the framework of Kilian (1998), which estimates the bias in the original parameter estimates using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible imp...
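The Kilian-style correction described above, estimating the bias of a first-stage estimate by the average bias across bootstrap replications generated from that estimate, can be sketched in the simplest time-series case: the OLS estimate of an AR(1) coefficient, which is biased toward zero in small samples. This is not the paper's Johansen co-integration setting; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_ols(y):
    """OLS estimate of the AR(1) coefficient."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def simulate_ar1(phi, n, rng):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

phi_true, n, B = 0.9, 50, 400
y = simulate_ar1(phi_true, n, rng)
phi_hat = ar1_ols(y)

# Kilian-style correction: average bias over B bootstrap replications
# generated from the first-stage estimate itself.
boot = np.array([ar1_ols(simulate_ar1(phi_hat, n, rng)) for _ in range(B)])
bias_hat = boot.mean() - phi_hat
phi_corrected = phi_hat - bias_hat

print(f"first-stage estimate: {phi_hat:.3f}")
print(f"bias-corrected:       {phi_corrected:.3f}")
```

Because the OLS estimator is biased downward for positive phi, the bootstrap bias estimate is negative and the correction pushes the estimate back up toward the true value.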

  7. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L.F. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore); Xie, M. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Tang, L.C. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)

    2006-08-15

Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as the maximum likelihood estimation (MLE) and the least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates methods for bias correction when model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is simple and commonly used by practitioners, and hence such a study is useful. The bias of the LS shape parameter estimator for multiple censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The bias correction methods proposed are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
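The LSE-on-probability-plot procedure the paper studies can be sketched as follows, assuming median-rank (Bernard) plotting positions for complete data; the paper's bias-correcting formulas themselves are not reproduced here, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
shape_true, scale_true, n = 2.0, 100.0, 30
x = np.sort(scale_true * rng.weibull(shape_true, n))

# Median-rank plotting positions (Bernard's approximation).
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Linearised Weibull CDF: ln(-ln(1 - F)) = shape*ln(x) - shape*ln(scale),
# so the slope of a straight-line fit is the shape parameter.
yy = np.log(-np.log(1.0 - F))
xx = np.log(x)
slope, intercept = np.polyfit(xx, yy, 1)

shape_ls = slope                       # LS shape estimate (known to be biased)
scale_ls = np.exp(-intercept / slope)  # LS scale estimate
print(f"LS shape estimate: {shape_ls:.2f} (true {shape_true})")
print(f"LS scale estimate: {scale_ls:.1f} (true {scale_true})")
```

Repeating this over many simulated samples at a fixed n and censoring level is how a bias function of the kind the paper proposes would be fitted.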

  8. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    Full Text Available Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators are biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the -style estimators.

  9. Parameter Estimation with BEAMS in the presence of biases and correlations

    CERN Document Server

    Newling, James; Hlozek, Renée; Kunz, Martin; Smith, Mathew; Varughese, Melvin

    2011-01-01

    The original formulation of BEAMS - Bayesian Estimation Applied to Multiple Species - showed how to use a dataset contaminated by points of multiple underlying types to perform unbiased parameter estimation. An example is cosmological parameter estimation from a photometric supernova sample contaminated by unknown Type Ibc and II supernovae. Where other methods require data cuts to increase purity, BEAMS uses all of the data points in conjunction with their probabilities of being each type. Here we extend the BEAMS formalism to allow for correlations between the data and the type probabilities of the objects as can occur in realistic cases. We show with simple simulations that this extension can be crucial, providing a 50% reduction in parameter estimation variance when such correlations do exist. We then go on to perform tests to quantify the importance of the type probabilities, one of which illustrates the effect of biasing the probabilities in various ways. Finally, a general presentation of the selection...

  10. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    Science.gov (United States)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  11. Estimating and assessing Galileo navigation system satellite and receiver differential code biases using the ionospheric parameter and differential code bias joint estimation approach with multi-GNSS observations

    Science.gov (United States)

    Xue, Junchen; Song, Shuli; Liao, Xinhao; Zhu, Wenyao

    2016-04-01

    With the increased number of Galileo navigation satellites joining the Global Navigation Satellite Systems (GNSS) service, there is a strong need for estimating their differential code biases (DCBs) for high-precision GNSS applications. There have been studies for estimating DCBs based on an external global ionospheric model (GIM) proposed by Montenbruck et al. (2014). In this study, we take a different approach by joining the construction of a GIM and estimating DCB together with multi-GNSS observations, including GPS, the BeiDou navigation system, and the Galileo navigation system (GAL). This approach takes full advantage of the collective strength of the individual systems while maintaining high solution consistency. Daily GAL DCBs were estimated simultaneously with ionospheric model parameters from 3 months' multi-GNSS observations. The stability of the resulting GAL DCB estimates was analyzed in detail. It was found that the standard deviations (STDs) of all satellite DCBs were less than 0.17 ns. For GAL receivers, the STDs were greater than for the satellites, with most values <2 ns. Comparison of the statistics of time-ranged stability of satellite DCBs over different time intervals revealed that the difference in STD between 28 and 7 day intervals was small, with the maximum not exceeding 0.01 ns. In almost all cases, the difference in GAL satellite DCBs between two consecutive days was <0.8 ns. The main conclusion is that based on the stability of the GAL DCBs, only occasional calibration is required. Furthermore, the 30 day-averaged satellite DCBs may satisfy the requirement of high-precision applications depending on the GAL satellite DCBs.

  12. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to...... generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....
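As a minimal example of the kind of estimation described (not taken from the chapter itself), a first-order rate constant can be recovered by linear least squares after a log transform of the concentration data; all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic first-order kinetics: C(t) = C0 * exp(-k t), with measurement noise.
k_true, C0 = 0.35, 1.0
t = np.linspace(0.0, 10.0, 25)
C = C0 * np.exp(-k_true * t) * (1.0 + 0.02 * rng.normal(size=t.size))

# Linear approach: ln C = ln C0 - k t, so regressing ln C on t gives -k as slope.
slope, intercept = np.polyfit(t, np.log(C), 1)
k_hat = -slope
print(f"estimated rate constant: {k_hat:.3f} (true {k_true})")
```

For more complex rate laws the same idea carries over, but the regression becomes nonlinear and requires the optimisation techniques the chapter refers to.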

  13. Bias and Systematic Change in the Parameter Estimates of Macro-Level Diffusion Models

    OpenAIRE

    Christophe Van den Bulte; Lilien, Gary L.

    1997-01-01

    Studies estimating the Bass model and other macro-level diffusion models with an unknown ceiling feature three curious empirical regularities: (i) the estimated ceiling is often close to the cumulative number of adopters in the last observation period, (ii) the estimated coefficient of social contagion or imitation tends to decrease as one adds later observations to the data set, and (iii) the estimated coefficient of social contagion or imitation tends to decrease systematically as the estim...

  14. Correcting cosmological parameter biases for all redshift surveys induced by estimating and reweighting redshift distributions

    CERN Document Server

    Rau, Markus Michael; Paech, Kerstin; Seitz, Stella

    2016-01-01

    Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by common suboptimal binning techniques and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to $6 \\, \\sigma$ for the dark energy equation of state parameter $w$. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalised resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surve...

  15. How serious can the stealth bias be in gravitational wave parameter estimation?

    CERN Document Server

    Vitale, Salvatore

    2013-01-01

The upcoming direct detection of gravitational waves will open a window to probing the strong-field regime of general relativity (GR). As a consequence, waveforms that include the presence of deviations from GR have been developed (e.g. in the parametrized post-Einsteinian approach). TIGER, a data analysis pipeline which builds Bayesian evidence to support or question the validity of GR, has been written and tested. In particular, it was shown recently that data from the LIGO and Virgo detectors will allow the detection of deviations from GR smaller than can be probed with Solar System tests and pulsar timing measurements, or not accessible with conventional tests of GR. However, evidence from several detections is required before a deviation from GR can be confidently claimed. An interesting consequence is that, should GR not be the correct theory of gravity in its strong field regime, using standard GR templates for the matched filter analysis of interferometer data will introduce biases in the gravitational wave m...

  16. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J;

    2007-01-01

    selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D....... melanogaster lineage has experienced a reduction in the selection for optimal codon usage. However, the D. melanogaster lineage has also experienced a change in the biological mutation rates relative to D. simulans, in particular, a relative reduction in the mutation rate from A to G and an increase in the...... mutation rate from C to T. However, neither a reduction in the strength of selection nor a change in the mutational pattern can alone explain all of the data observed in the D. melanogaster lineage. For example, we also confirm previous results showing that the Notch locus has experienced positive...

  17. Bootstrap bias-adjusted GMM estimators

    OpenAIRE

    Ramalho, Joaquim J.S.

    2005-01-01

    The ability of six alternative bootstrap methods to reduce the bias of GMM parameter estimates is examined in an instrumental variable framework using Monte Carlo analysis. Promising results were found for the two bootstrap estimators suggested in the paper.

  18. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    Science.gov (United States)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.

    2008-07-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.

  19. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    International Nuclear Information System (INIS)

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency

  20. Nonlinear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    CERN Document Server

    Hamann, Jan; Melchiorri, Alessandro; Wong, Yvonne Y Y

    2008-01-01

    We explore and compare the performances of two nonlinear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-LCDM cosmologies. The first model is the well known Q model, first applied in the analysis of 2dFGRS data. The second, the P model, is inspired by the halo model, in which nonlinear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while both models perform equally well in providing adequate correction for a range of galaxy clustering data in standard LCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalisation, a technique sometime...

  1. Recursive bias estimation for high dimensional smoothers

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nicolas W [Los Alamos National Laboratory; Matzner-lober, Eric [UHB, FRANCE; Cornillon, Pierre - Andre [INRA

    2008-01-01

In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of that bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L{sub 2}-Boosting. We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
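The iterated correction can be sketched directly: fix a deliberately large bandwidth, fit, then repeatedly smooth the residuals and add the result back. A toy one-dimensional version with a Nadaraya-Watson smoother (the paper treats general linear smoothers; the bandwidth, iteration count, and test function here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = np.sort(rng.uniform(0, 1, n))
f = np.sin(4 * np.pi * x)                # true regression function
y = f + 0.3 * rng.normal(size=n)

# Nadaraya-Watson smoother matrix with a deliberately large bandwidth,
# giving low variance but substantial bias (over-smoothing).
h = 0.15
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)

m = S @ y                                # initial over-smoothed fit
for _ in range(10):
    m = m + S @ (y - m)                  # add back a smooth of the residuals

mse_initial = np.mean((S @ y - f) ** 2)
mse_iterated = np.mean((m - f) ** 2)
print(f"MSE of initial fit:  {mse_initial:.4f}")
print(f"MSE of iterated fit: {mse_iterated:.4f}")
```

Each pass reduces the bias of the over-smoothed fit while only gradually increasing variance, which is the connection to L2-Boosting noted in the abstract.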

  2. The estimation method of GPS instrumental biases

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

A model of estimating the global positioning system (GPS) instrumental biases and the methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results of GPS instrumental biases, the relative instrumental biases of satellite and receiver, and total electron content (TEC) are also shown. Finally, the stability of GPS instrumental biases as well as that of satellite and receiver instrumental biases are evaluated, indicating that they are very stable over a period of two and a half months.

  3. Spatial Bias in Field-Estimated Unsaturated Hydraulic Properties

    Energy Technology Data Exchange (ETDEWEB)

    HOLT,ROBERT M.; WILSON,JOHN L.; GLASS JR.,ROBERT J.

    2000-12-21

    Hydraulic property measurements often rely on non-linear inversion models whose errors vary between samples. In non-linear physical measurement systems, bias can be directly quantified and removed using calibration standards. In hydrologic systems, field calibration is often infeasible and bias must be quantified indirectly. We use a Monte Carlo error analysis to indirectly quantify spatial bias in the saturated hydraulic conductivity, K{sub s}, and the exponential relative permeability parameter, {alpha}, estimated using a tension infiltrometer. Two types of observation error are considered, along with one inversion-model error resulting from poor contact between the instrument and the medium. Estimates of spatial statistics, including the mean, variance, and variogram-model parameters, show significant bias across a parameter space representative of poorly- to well-sorted silty sand to very coarse sand. When only observation errors are present, spatial statistics for both parameters are best estimated in materials with high hydraulic conductivity, like very coarse sand. When simple contact errors are included, the nature of the bias changes dramatically. Spatial statistics are poorly estimated, even in highly conductive materials. Conditions that permit accurate estimation of the statistics for one of the parameters prevent accurate estimation for the other; accurate regions for the two parameters do not overlap in parameter space. False cross-correlation between estimated parameters is created because estimates of K{sub s} also depend on estimates of {alpha} and both parameters are estimated from the same data.

  4. Bias in parametric estimation: Reduction and useful side-effects

    OpenAIRE

    Kosmidis, I.

    2014-01-01

    The bias of an estimator is defined as the difference of its expected value from the parameter to be estimated, where the expectation is with respect to the model. Loosely speaking, small bias reflects the desire that if an experiment is repeated indefinitely then the average of all the resultant estimates will be close to the parameter value that is estimated. The current article is a review of the still-expanding repository of methods that have been developed to reduce bias in the estimatio...
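The definition in this abstract can be made concrete with the classic example of the 1/n variance estimator, whose expectation falls short of the true variance by sigma^2/n; a Monte Carlo average over repeated experiments approximates the expectation in the definition.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 4.0          # true variance of the sampled population
n, trials = 5, 100000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
v_biased = samples.var(axis=1, ddof=0)    # divides by n
v_unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1

# Bias = E[estimator] - true parameter, approximated by the MC average.
# Theory: the 1/n estimator has bias -sigma2/n = -0.8 here; the 1/(n-1)
# estimator is unbiased.
print(f"bias of 1/n estimator:     {v_biased.mean() - sigma2:.3f}")
print(f"bias of 1/(n-1) estimator: {v_unbiased.mean() - sigma2:.3f}")
```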

  5. A generic algorithm for reducing bias in parametric estimation

    OpenAIRE

    Kosmidis, I.; Firth, D

    2010-01-01

A general iterative algorithm is developed for the computation of reduced-bias parameter estimates in regular statistical models through adjustments to the score function. The algorithm unifies and provides appealing new interpretation for iterative methods that have been published previously for some specific model classes. The new algorithm can usefully be viewed as a series of iterative bias corrections, thus facilitating the adjusted score approach to bias reduction in any model for whic...

  6. Statistical framework for estimating GNSS bias

    Science.gov (United States)

    Vierinen, Juha; Coster, Anthea J.; Rideout, William C.; Erickson, Philip J.; Norberg, Johannes

    2016-03-01

    We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line-integrated electron densities (total electron content: TEC) that are scaled to equivalent vertical integrated densities. The spatiotemporal variability, instrumentation-dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual-frequency GNSS systems, such as GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema). The use of the framework is demonstrated in practice through several examples. A specific implementation of the methods presented here is used to compute GPS receiver biases for measurements in the MIT Haystack Madrigal distributed database system. Results of the new algorithm are compared with the current MIT Haystack Observatory MAPGPS (MIT Automated Processing of GPS) bias determination algorithm. The new method is found to produce estimates of receiver bias that have reduced day-to-day variability and more consistent coincident vertical TEC values.

  7. Statistical framework for estimating GNSS bias

    CERN Document Server

    Vierinen, Juha; Rideout, William C; Erickson, Philip J; Norberg, Johannes

    2015-01-01

    We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual frequency GNSS s...

  8. Parameter Estimation Through Ignorance

    CERN Document Server

    Du, Hailiang

    2015-01-01

    Dynamical modelling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A new relatively simple method of parameter estimation for nonlinear systems is presented, based on variations in the accuracy of probability forecasts. It is illustrated on the Logistic Map, the Henon Map and the 12-D Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The new method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This new approach is easier to implement in practice than alter...

  9. Parameter estimation through ignorance.

    Science.gov (United States)

    Du, Hailiang; Smith, Leonard A

    2012-07-01

    Dynamical modeling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A relatively simple method of parameter estimation for nonlinear systems is introduced, based on variations in the accuracy of probability forecasts. It is illustrated on the logistic map, the Henon map, and the 12-dimensional Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This approach is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors or the ability of the model to shadow the observations. Direct measures of inadequacy in the model, the "implied ignorance," and the information deficit are introduced. PMID:23005513
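
    A minimal sketch of the approach, under assumptions of our own choosing (observation noise level, ensemble size, Gaussian kernel dressing), might look like this for the logistic map: candidate parameter values are scored by the ignorance (negative log) of a kernel-dressed one-step ensemble forecast.

```python
# Hedged sketch of ignorance-based parameter estimation for the logistic map.
# The noise level, ensemble size, and kernel width are illustrative choices,
# not the paper's settings.
import numpy as np

rng = np.random.default_rng(5)
a_true, eps, T = 3.9, 0.01, 400

x = np.empty(T)
x[0] = 0.4
for t in range(T - 1):
    x[t + 1] = a_true * x[t] * (1.0 - x[t])
s = x + rng.normal(0, eps, T)                   # noisy observations

def ignorance(a, s, eps, m=64):
    # Ensemble: perturb each current observation, map one step forward,
    # kernel-dress the members, and score the next observation.
    cur = np.clip(s[:-1, None] + rng.normal(0, eps, (s.size - 1, m)), 0, 1)
    fc = a * cur * (1.0 - cur)
    dens = np.exp(-0.5 * ((s[1:, None] - fc) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
    return -np.mean(np.log2(dens.mean(axis=1) + 1e-300))

print(ignorance(3.9, s, eps), ignorance(3.5, s, eps))  # lower = better
```

A full implementation would minimize this score over a grid or with an optimizer; here the score at the true parameter is simply compared with a poor candidate.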

  10. Phenological Parameters Estimation Tool

    Science.gov (United States)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE
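
    The TSPT-style pre-processing step described above (drop QA-flagged samples, then interpolate across the resulting voids) might be sketched as follows; the NDVI curve and QA flags below are invented for illustration.

```python
# Toy sketch of quality filtering and void filling for an NDVI time series,
# in the spirit of the TSPT step described above (all values invented).
import numpy as np

t = np.arange(0, 365, 16, dtype=float)                 # 16-day composites
ndvi = 0.3 + 0.4 * np.exp(-((t - 180) / 60.0) ** 2)    # idealized green-up curve
qa_bad = np.zeros(t.size, dtype=bool)
qa_bad[[3, 4, 10]] = True                              # flagged cloudy composites
ndvi[qa_bad] = -0.3                                    # bad retrievals

filled = np.interp(t, t[~qa_bad], ndvi[~qa_bad])       # linear fill over voids
print(round(filled.min(), 3))
```

After filling, the contaminated composites are replaced by values interpolated from their good neighbours, so the series is continuous and ready for smoothing and phenology extraction.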

  11. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  12. Revisiting Cosmological parameter estimation

    CERN Document Server

    Prasad, Jayanti

    2014-01-01

    Constraining theoretical models by estimating their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov-Chain Monte Carlo (MCMC) sampling have played a leading role in parameter estimation from CMB data. In one of our recent studies \\cite{2012PhRvD..85l3008P} we showed that particle swarm optimization (PSO), a population-based search procedure, can also be used effectively to find the cosmological parameters that best fit the WMAP seven-year data. In the present work we show that PSO not only can find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO-sampled points that is used to process the points sampled by Markov chains, and get consistent res...

  13. Elimination of Estimation biases in the Software Development

    Directory of Open Access Journals (Sweden)

    Thamarai . I.

    2015-04-01

    Full Text Available Software effort estimates are usually too low, and prediction is a very difficult task because software is intangible in nature. The estimates are also based on parameters that are usually only partially known. Effort estimation is an important management activity, yet despite much research in this area its accuracy remains low, resulting in poor project planning and the failure of many software projects. One reason for this poor estimation is that the estimates given by software developers are affected by information that has no relevance to the calculation of effort. To address this, we propose a new methodology in which we analyze the relationship between estimation bias and various features of developers, such as their role in the company, thinking style, experience, education, and software development skills. It is found that estimation bias increases with higher levels of interdependence.

  14. A MORET tool to assist code bias estimation

    International Nuclear Information System (INIS)

    This new Graphical User Interface (GUI) developed in JAVA is one of the post-processing tools for MORET4 code. It aims to help users to estimate the importance of the keff bias due to the code in order to better define the upper safety limit. Moreover, it allows visualizing the distance between an actual configuration case and evaluated critical experiments. This tool depends on a validated experiments database, on sets of physical parameters and on various statistical tools allowing interpolating the calculation bias of the database or displaying the projections of experiments on a reduced base of parameters. The development of this tool is still in progress. (author)

  15. Correcting for bias in estimation of quantitative trait loci effects

    Directory of Open Access Journals (Sweden)

    Ron Micha

    2005-09-01

    Full Text Available Abstract Estimates of quantitative trait loci (QTL effects derived from complete genome scans are biased, if no assumptions are made about the distribution of QTL effects. Bias should be reduced if estimates are derived by maximum likelihood, with the QTL effects sampled from a known distribution. The parameters of the distributions of QTL effects for nine economic traits in dairy cattle were estimated from a daughter design analysis of the Israeli Holstein population including 490 marker-by-sire contrasts. A separate gamma distribution was derived for each trait. Estimates for both the α and β parameters and their SE decreased as a function of heritability. The maximum likelihood estimates derived for the individual QTL effects using the gamma distributions for each trait were regressed relative to the least squares estimates, but the regression factor decreased as a function of the least squares estimate. On simulated data, the mean of least squares estimates for effects with nominal 1% significance was more than twice the simulated values, while the mean of the maximum likelihood estimates was slightly lower than the mean of the simulated values. The coefficient of determination for the maximum likelihood estimates was five-fold the corresponding value for the least squares estimates.

  16. Simultaneous quaternion estimation (QUEST) and bias determination

    Science.gov (United States)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  17. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    Directory of Open Access Journals (Sweden)

    Lyle D. Broemeling

    2011-12-01

    Full Text Available This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects do not receive the gold standard test. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. The accuracy of two combined binary tests is estimated employing either the “believe the positive” or the “believe the negative” rule, and the true and false positive fractions for each rule are computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests.
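
    Under the additional assumption, made here only for illustration, that the two binary tests are conditionally independent given disease status, the two combination rules reduce to simple closed forms:

```python
# "Believe the positive" (BP): combined test is positive if either test is.
# "Believe the negative" (BN): combined test is positive only if both are.
# Closed forms below assume conditional independence given disease status,
# which is an illustrative assumption, not one made by the paper.
def combine(tpf1, fpf1, tpf2, fpf2):
    bp = (tpf1 + tpf2 - tpf1 * tpf2, fpf1 + fpf2 - fpf1 * fpf2)
    bn = (tpf1 * tpf2, fpf1 * fpf2)
    return bp, bn

bp, bn = combine(0.8, 0.1, 0.7, 0.2)
print(bp, bn)  # BP raises the true positive fraction, BN lowers the false one
```

In the Bayesian treatment these fractions would be computed from the posterior distribution of the test parameters rather than from fixed values.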

  18. Blind estimation of compartmental model parameters

    International Nuclear Information System (INIS)

    Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions - the tissue time-activity curves - are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min-1 and washout rates between 0.2 and 1.0 min-1. The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies

  19. Blind estimation of compartmental model parameters.

    Science.gov (United States)

    Di Bella, E V; Clackdoyle, R; Gullberg, G T

    1999-03-01

    Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions--the tissue time-activity curves--are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min(-1) and washout rates between 0.2 and 1.0 min(-1). The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. 
Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies

  20. Estimating Ancestral Population Parameters

    OpenAIRE

    Wakeley, J.; Hey, J.

    1997-01-01

    The expected numbers of different categories of polymorphic sites are derived for two related models of population history: the isolation model, in which an ancestral population splits into two descendents, and the size-change model, in which a single population undergoes an instantaneous change in size. For the isolation model, the observed numbers of shared, fixed, and exclusive polymorphic sites are used to estimate the relative sizes of the three populations, ancestral plus two descendent...

  1. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

  2. Recursive bias estimation for high dimensional regression smoothers

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nicolas W [Los Alamos National Laboratory; Cornillon, Pierre - Andre [AGROSUP, FRANCE; Matzner - Lober, Eric [UNIV OF RENNES, FRANCE

    2009-01-01

    In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L{sub 2}-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to H({nu}) (a Sobolev space, where {nu} should be bigger than d/2). We apply our method to simulated and real data and show that it compares favorably with existing procedures.
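
    The iterative correction described above can be sketched in a few lines; the Gaussian-kernel smoother, bandwidth, and iteration count below are illustrative choices, not the paper's.

```python
# Iterative bias correction of an over-smoothed regression fit: start from a
# heavily smoothed estimate, then repeatedly smooth the residuals and add
# them back. Smoother and bandwidth are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(0, 0.2, x.size)

# Smoothing matrix S of a Gaussian-kernel (Nadaraya-Watson) smoother with a
# deliberately large bandwidth, so the initial fit is over-smoothed.
h = 0.2
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)

m = S @ y                        # initial fit: small variance, large bias
mse_initial = np.mean((m - truth) ** 2)
for _ in range(20):
    m = m + S @ (y - m)          # add back a smoothed estimate of the residual bias
mse_corrected = np.mean((m - truth) ** 2)

print(mse_initial, mse_corrected)
```

Each iteration applies the effective smoother I - (I - S)^k, so low-frequency bias is removed quickly while the high-frequency noise that S suppresses is reintroduced only slowly; this is the operational link to L2 Boosting discussed in record 10 below.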

  3. A Polynomial Prediction Filter Method for Estimating Multisensor Dynamically Varying Biases

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The estimation of sensor measurement biases in a multisensor system is vital for sensor data fusion. A solution is provided for the estimation of dynamically varying biases of multiple sensors without any knowledge of the dynamic bias model parameters. It is shown that a sensor bias pseudomeasurement can be obtained dynamically via a parity vector. This is accomplished by multiplying the sensor uncalibrated measurement equations by a projection matrix so that the measured variable is eliminated from the equations. Once the state equations of the dynamically varying sensor biases are modeled by a polynomial prediction filter, the biases can be obtained by a Kalman filter. Simulation results validate that the proposed method can estimate both constant and dynamic biases of multiple sensors and outperforms the methods reported in the literature.

  4. Variance and bias confidence criteria for ERA modal parameter identification. [Eigensystem Realization Algorithm

    Science.gov (United States)

    Longman, Richard W.; Bergmann, Martin; Juang, Jer-Nan

    1988-01-01

    For the ERA system identification algorithm, perturbation methods are used to develop expressions for variance and bias of the identified modal parameters. Based on the statistics of the measurement noise, the variance results serve as confidence criteria by indicating how likely the true parameters are to lie within any chosen interval about their identified values. This replaces the use of expensive and time-consuming Monte Carlo computer runs to obtain similar information. The bias estimates help guide the ERA user in his choice of which data points to use and how much data to use in order to obtain the best results, performing the trade-off between the bias and scatter. Also, when the uncertainty in the bias is sufficiently small, the bias information can be used to correct the ERA results. In addition, expressions for the variance and bias of the singular values serve as tools to help the ERA user decide the proper modal order.

  5. Toward unbiased estimations of the statefinder parameters

    CERN Document Server

    Aviles, Alejandro; Luongo, Orlando

    2016-01-01

    With the use of simulated supernova catalogs, we show that the statefinder parameters are estimated poorly, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large, making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cos...

  6. Sensitivity of hydrologic simulations to bias corrected driving parameters

    Science.gov (United States)

    Papadimitriou, Lamprini; Grillakis, Manolis; Koutroulis, Aristeidis; Tsanis, Ioannis

    2016-04-01

    Climate model outputs feature systematic errors and biases that render them unsuitable for direct use by the impact models. To deal with this issue many bias correction techniques have been developed to adjust the modelled variables against observations. For the most common applications adjustment concerns only precipitation and temperature whilst for others all the driving parameters (including radiation, wind speed, humidity, air pressure) are bias adjusted. Bias adjusting only part of the variables required as biophysical model input could affect the physical consistency among input variables and is poorly studied. It is important to determine and quantify the effect that bias adjusting each climate variable has on the impact model's simulation and identify parameters that could be treated as raw outputs for specific model applications. In this work, the sensitivity of climate simulations to bias adjusted driving parameters is tested by conducting a series of model runs, for which the impact model JULES is forced with: i) not bias corrected input variables, ii) all bias corrected input variables, iii-viii) all input variables bias corrected except for: iii) precipitation, iv) temperature, v) radiation, vi) specific humidity, vii) air pressure and viii) wind speed. This set of runs is conducted for three climate models of different equilibrium climate sensitivity: GFDL-ESM2M, MIROC-ESM-CHEM and IPSL-CM5A-LR. The baseline for the comparison of the experimental runs is a JULES run forced with the WFDEI dataset, the dataset that was used as the observational dataset for adjusting biases. The comparative analysis is performed using the time period 1981-2010 and focusing on output variables of the hydrological cycle (runoff, evapotranspiration, soil moisture).

  7. Estimating and Correcting Bias in Stereo Visual Odometry

    Science.gov (United States)

    Farboud-Sheshdeh, Sara

    Stereo visual odometry (VO) is a common technique for estimating a camera's motion; features are tracked across frames and the pose change is subsequently inferred. This method can play a particularly important role in environments where the global positioning system (GPS) is not available (e.g., Mars rovers). Recently, some authors have noticed a bias in VO position estimates that grows with distance travelled; this can cause the resulting estimate to become highly inaccurate. In this thesis, two effects are identified at play in stereo VO bias: first, the inherent bias in the maximum-likelihood estimation framework, and second, the disparity threshold used to discard far-away and erroneous observations. To estimate the bias, the sigma-point method (with modification) combined with the concept of bootstrap bias estimation is proposed. This novel method achieves similar accuracy to Monte Carlo experiments, but at a fraction of the computational cost. The approach is validated through simulations.
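
    The bootstrap bias estimation idea mentioned above can be illustrated on a deliberately biased statistic, here the maximum-likelihood variance; the data and resampling settings are invented for illustration and stand in for the VO estimator.

```python
# Generic bootstrap bias estimation: resample the data, recompute the
# statistic, and take the mean shift as an estimate of its bias.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(0, 1, 50)

def theta(x):
    return x.var()              # ML variance (divides by n): biased low

theta_hat = theta(data)
boot = np.array([theta(rng.choice(data, data.size)) for _ in range(5000)])
bias_est = boot.mean() - theta_hat          # bootstrap estimate of the bias
theta_corrected = theta_hat - bias_est      # bias-corrected statistic

print(theta_hat, bias_est, theta_corrected)
```

The sigma-point modification in the thesis replaces the brute-force resampling with a deterministic set of transformed points, trading Monte Carlo cost for an approximation.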

  8. Photo-z Estimation: An Example of Nonparametric Conditional Density Estimation under Selection Bias

    CERN Document Server

    Izbicki, Rafael; Freeman, Peter E

    2016-01-01

    Redshift is a key quantity for inferring cosmological model parameters. In photometric redshift estimation, cosmologists use the coarse data collected from the vast majority of galaxies to predict the redshift of individual galaxies. To properly quantify the uncertainty in the predictions, however, one needs to go beyond standard regression and instead estimate the full conditional density f(z|x) of a galaxy's redshift z given its photometric covariates x. The problem is further complicated by selection bias: usually only the rarest and brightest galaxies have known redshifts, and these galaxies have characteristics and measured covariates that do not necessarily match those of more numerous and dimmer galaxies of unknown redshift. Unfortunately, there is not much research on how to best estimate complex multivariate densities in such settings. Here we describe a general framework for properly constructing and assessing nonparametric conditional density estimators under selection bias, and for combining two o...

  9. Why is "S" a Biased Estimate of [sigma]?

    Science.gov (United States)

    Sanqui, Jose Almer T.; Arnholt, Alan T.

    2011-01-01

    This article describes a simulation activity that can be used to help students see that the estimator "S" is a biased estimator of [sigma]. The activity can be implemented using either a statistical package such as R, Minitab, or a Web applet. In the activity, the students investigate and compare the bias of "S" when sampling from different…
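
    The simulation activity described above takes only a few lines; in this sketch the sample size n = 5 and σ = 1 are arbitrary choices.

```python
# Monte Carlo check that the sample standard deviation S underestimates sigma.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 5, 1.0, 200_000

samples = rng.normal(0.0, sigma, size=(reps, n))
s = samples.std(axis=1, ddof=1)   # S computed with the unbiased-variance divisor n-1

print(s.mean())  # noticeably below sigma = 1 (about 0.94 for n = 5)
```

Although S**2 is unbiased for σ², the square root is concave, so by Jensen's inequality E[S] < σ, which is exactly what the simulation shows.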

  10. Recursive bias estimation and L2 boosting

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nicolas W [Los Alamos National Laboratory; Cornillon, Pierre - Andre [INRA, FRANCE; Matzner - Lober, Eric [RENNE, FRANCE

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L{sub 2} Boosting algorithm and provides a new statistical interpretation for L{sub 2} Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.

  11. Bayesian parameter estimation for effective field theories

    CERN Document Server

    Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A

    2015-01-01

    We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  12. Bayesian parameter estimation for effective field theories

    Science.gov (United States)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
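
    A toy version of the naturalness-prior idea can be sketched as a Gaussian prior of O(1) width on the coefficients of an overparameterized expansion, which turns the linear fit into a ridge-like posterior; the data, expansion order, and widths below are invented and are not the paper's model problems.

```python
# Gaussian "naturalness" prior on expansion coefficients: the MAP estimate is
# a ridge-regularized fit that keeps the coefficients natural-sized.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 0.5, 10)
y = 1.0 + 0.5 * x - 0.3 * x**2 + rng.normal(0, 0.01, x.size)

X = np.vander(x, 6, increasing=True)       # deliberately overparameterized basis
sigma2, tau2 = 0.01**2, 1.0**2             # noise variance, O(1) prior variance

a_mle = np.linalg.lstsq(X, y, rcond=None)[0]
a_map = np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(6), X.T @ y)

print(np.linalg.norm(a_mle), np.linalg.norm(a_map))  # MAP coefficients shrink
```

In the SVD basis the MAP solution multiplies each component by d/(d² + σ²/τ²) instead of 1/d, so the prior damps exactly the poorly determined directions that drive overfitting.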

  13. A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator

    Directory of Open Access Journals (Sweden)

    Munir Ahmed

    2016-06-01

    Full Text Available In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduce a bias adjustment mechanism and give the modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) present a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional one. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
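
    For concreteness, a minimal numpy sketch of two standard HCCME flavours is given below: White's HC0 and the small-sample HC3 variant, which divides each squared residual by (1 - h_ii)². The data-generating choices are ours, and the adaptive and bias-adjusted versions studied in the article are not implemented here.

```python
# Sandwich (heteroscedasticity-consistent) covariance estimators for OLS.
import numpy as np

rng = np.random.default_rng(4)
n = 40
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta = np.array([1.0, 2.0])
y = X @ beta + rng.normal(0, 0.5 + 0.1 * X[:, 1], n)   # heteroscedastic errors

XtX_inv = np.linalg.inv(X.T @ X)
e = y - X @ (XtX_inv @ X.T @ y)                        # OLS residuals
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)            # leverages, diag of hat matrix

def sandwich(w):
    # (X'X)^{-1} X' diag(w) X (X'X)^{-1}
    return XtX_inv @ X.T @ np.diag(w) @ X @ XtX_inv

hc0 = sandwich(e**2)                 # White's original estimator
hc3 = sandwich(e**2 / (1 - h)**2)    # small-sample correction

print(np.diag(hc0), np.diag(hc3))    # HC3 inflates the variances relative to HC0
```
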

  14. Bias in Estimation and Hypothesis Testing of Correlation

    OpenAIRE

    Zimmerman D. W.; Zumbo B. D.; Williams R. H.

    2003-01-01

    This study examined bias in the sample correlation coefficient, r, and its correction by unbiased estimators. Computer simulations revealed that the expected value of correlation coefficients in samples from a normal population is slightly less than the population correlation, ρ, and that the bias is almost eliminated by an estimator suggested by R.A. Fisher and is more completely eliminated by a related estimator recommended by Olkin and Pratt. Transfor...
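
    The two corrections named above can be sketched as follows. The exact form attributed to Fisher varies across sources, so the approximate version below is one commonly quoted variant; the Olkin-Pratt estimator uses the Gauss hypergeometric function.

```python
# Bias corrections for the sample correlation r from a bivariate-normal
# sample of size n. The Fisher form used here is one common approximation.
import numpy as np
from scipy.special import hyp2f1

def fisher_adjusted(r, n):
    # Approximate correction: r * (1 + (1 - r^2) / (2*(n - 3)))
    return r * (1.0 + (1.0 - r**2) / (2.0 * (n - 3)))

def olkin_pratt(r, n):
    # Olkin-Pratt estimator: r * 2F1(1/2, 1/2, (n-2)/2, 1 - r^2)
    return r * hyp2f1(0.5, 0.5, (n - 2) / 2.0, 1.0 - r**2)

r, n = 0.5, 20
print(fisher_adjusted(r, n), olkin_pratt(r, n))  # both nudge r upward
```

Both corrections push |r| slightly away from zero, compensating for the small negative bias the simulations in this study document.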

  15. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  16. Parameter Estimation Using VLA Data

    Science.gov (United States)

    Venter, Willem C.

    The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are for example either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. 
This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters

  17. GEODYN- ORBITAL AND GEODETIC PARAMETER ESTIMATION

    Science.gov (United States)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation program, GEODYN, possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fits the entire set of estimation parameters. The estimation problem can be divided into two parts: the orbit prediction problem, and the parameter estimation problem. GEODYN solves these two problems by employing Cowell's method for integrating the orbit and a Bayesian least squares statistical estimation procedure for parameter estimation. GEODYN has found a wide range of applications including determination of definitive orbits, tracking instrumentation calibration, satellite operational predictions, and geodetic parameter estimation, such as the estimations for global networks of tracking stations. The orbit prediction problem may be briefly described as calculating for some later epoch the new conditions of state for the satellite, given a set of initial conditions of state for some epoch, and the disturbing forces affecting the motion of the satellite. The user is required to supply only the initial conditions of state and GEODYN will provide the forcing function and integrate the equations of motion of the satellite. Additionally, GEODYN performs time and coordinate transformations to insure the continuity of operations. Cowell's method of numerical integration is used to solve the satellite equations of motion and the variational partials for force model parameters which are to be adjusted. This method uses predictor-corrector formulas for the equations of motion and corrector formulas only for the variational partials. The parameter estimation problem is divided into three separate parts: 1) instrument measurement modeling and partial derivative computation, 2) data error correction, and 3) statistical estimation of the parameters. 
Since all of the measurements modeled by

  18. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández; Canteli, Alfonso Fernández

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF...... matrix assembled from modal parameters and the experimental responses recorded using standard sensors, is presented. The method implies the inversion of the FRF which, in general, is not full rank matrix due to the truncation of the modal space. Furthermore, some recommendations are included to improve...

  19. Uniform bias study and Bahadur representation for local polynomial estimators of the conditional quantile function

    OpenAIRE

    Guerre, Emmanuel; Sabbah, Camille

    2011-01-01

    This paper investigates the bias and the weak Bahadur representation of a local polynomial estimator of the conditional quantile function and its derivatives. The bias and Bahadur remainder term are studied uniformly with respect to the quantile level, the covariates and the smoothing parameter. The order of the local polynomial estimator can be higher than the differentiability order of the conditional quantile function. Applications of the results deal with global optimal consistency rates ...

  20. Cosmological parameter estimation: impact of CMB aberration

    International Nuclear Information System (INIS)

    The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm via a mixing matrix at any l. The common lore when performing a CMB-based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with an MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB-based cosmological parameter estimation

  1. Bias-corrected estimation of stable tail dependence function

    DEFF Research Database (Denmark)

    Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri;

    2016-01-01

    We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where a...

  2. Estimation and adjustment of self-selection bias in volunteer panel web surveys

    Science.gov (United States)

    Niu, Chengying

    2016-06-01

    By using the propensity score matching method, we matched simple random sample units and volunteer panel Web survey sample units on equal or similar propensity scores. Unbiased estimators of the population parameters are constructed from the matched simple random sample, and the self-selection bias is estimated. We propose propensity-score-weighted and matching-sample post-stratification-weighted methods to estimate the population parameters, and the self-selection bias in volunteer panel Web surveys is adjusted.
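    The propensity-score-weighting idea can be sketched with a simple inverse-propensity adjustment rather than the matching estimator of the record. Everything below is hypothetical illustration data; the logistic model assumes scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical data: x is an auxiliary covariate known for both samples;
# y is observed only in the volunteer panel, which over-represents high x.
n_ref, n_vol = 1000, 1000
x_ref = rng.normal(0.0, 1.0, n_ref)              # reference random sample
x_vol = rng.normal(0.8, 1.0, n_vol)              # self-selected web panel
y_vol = 2.0 * x_vol + rng.normal(0.0, 1.0, n_vol)

# Propensity of panel membership given x, estimated from the pooled samples.
X = np.concatenate([x_ref, x_vol])[:, None]
z = np.concatenate([np.zeros(n_ref), np.ones(n_vol)])
p = LogisticRegression().fit(X, z).predict_proba(x_vol[:, None])[:, 1]

# Weight each volunteer by the odds of belonging to the reference population.
w = (1.0 - p) / p
naive = y_vol.mean()                              # biased high
adjusted = np.sum(w * y_vol) / np.sum(w)          # near the true mean, 0
```

    The gap between `naive` and `adjusted` is the estimated self-selection bias in this toy setting.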

  3. Assessment of bias in US waterfowl harvest estimates

    Science.gov (United States)

    Padding, Paul I.; Royle, J. Andrew

    2012-01-01

    Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates of the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71–0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65–0.69) and 0.61 (95% CI = 0.59–0.64) should be applied to the goose harvest estimates for 1971–2001 (duck stamp-based survey) and 1999–2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.

  4. Estimation of Synchronous Machine Parameters

    Directory of Open Access Journals (Sweden)

    Oddvar Hallingstad

    1980-01-01

    Full Text Available The present paper gives a short description of an interactive estimation program based on the maximum likelihood (ML) method. The program may also perform identifiability analysis by calculating sensitivity functions and the Hessian matrix. For the short circuit test the ML method is able to estimate the q-axis subtransient reactance x''q, which is not possible by means of the conventional graphical method (another set of measurements has to be used). By means of the synchronization and close test, the ML program can estimate the inertial constant (M), the d-axis transient open circuit time constant (T'do), the d-axis subtransient o.c.t.c. (T''do) and the q-axis subtransient o.c.t.c. (T''qo). In particular, T''qo is difficult to estimate by any of the methods at present in use. Parameter identifiability is thoroughly examined both analytically and by numerical methods. Measurements from a small laboratory machine are used.

  5. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2011-01-01

    Parameter Estimation and Inverse Problems, 2e provides geoscience students and professionals with answers to common questions like how one can derive a physical model from a finite set of observations containing errors, and how one may determine the quality of such a model. This book takes on these fundamental and challenging problems, introducing students and professionals to the broad range of approaches that lie in the realm of inverse theory. The authors present both the underlying theory and practical algorithms for solving inverse problems. The authors' treatment is approp

  6. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    Science.gov (United States)

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  7. Bias correction of satellite rainfall estimation using a radar-gauge product

    Directory of Open Access Journals (Sweden)

    K. Tesfagiorgis

    2010-11-01

    Full Text Available Satellite rainfall estimates can be used in operational hydrologic prediction, but are prone to systematic errors. The goal of this study is to seamlessly blend a radar-gauge product with a corrected satellite product that fills gaps in radar coverage. To blend different rainfall products, they should have similar bias features. The paper presents a pixel by pixel method, which aims to correct biases in hourly satellite rainfall products using a radar-gauge rainfall product. Bias factors are calculated for corresponding rainy pixels, and a desired number of them are randomly selected for the analysis. Bias fields are generated using the selected bias factors. The method takes into account spatial variation and random errors in biases. Bias field parameters were determined on a daily basis using the Shuffled Complex Evolution optimization algorithm. To include more sources of errors, ensembles of bias factors were generated and applied before bias field generation. The procedure was demonstrated using satellite and radar-gauge rainfall data for several rainy events in 2006 for the Oklahoma region. The method was compared with bias corrections using interpolation without ensembles, the mean ratio and the maximum ratio. Results show that the method outperformed these other techniques, namely the mean ratio, the maximum ratio and bias field generation by interpolation.
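    A stripped-down version of the pixel-by-pixel idea: bias factors at jointly rainy pixels, then a correction field. Here the field is collapsed to a single mean factor, with synthetic arrays standing in for the satellite and radar-gauge products; none of this reproduces the ensemble or Shuffled Complex Evolution machinery of the record:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly rain fields (mm/h); the satellite field is biased
# high by a factor of 1.4 plus noise.
radar_gauge = rng.gamma(2.0, 2.0, size=(50, 50))
satellite = np.clip(1.4 * radar_gauge + rng.normal(0.0, 0.3, (50, 50)),
                    0.0, None)

# Bias factors only at pixels where both products see rain.
rainy = (radar_gauge > 0.1) & (satellite > 0.1)
factors = radar_gauge[rainy] / satellite[rainy]

# Simplest correction: one mean factor (the paper instead generates a
# spatially varying bias field from a random subset of the factors).
corrected = satellite * factors.mean()
```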

  8. Attitude and gyro bias estimation for a VTOL UAV

    OpenAIRE

    METNI, N; PFLIMLIN, JM; Hamel, T.; SOUERES, P

    2006-01-01

    In this paper, a nonlinear complementary filter (x-estimator) is presented to estimate the attitude of a vertical take off and landing unmanned aerial vehicle (VTOL UAV). The measurements are taken from a low-cost IMU (inertial measurement unit) which consists of 3-axis accelerometers and 3-axis gyroscopes. The gyro biases are estimated online. A second nonlinear complementary filter (z-estimator) which combines 3-axis gyroscope readings with 3-axis magnetometer measurements, is also designed. ...

  9. Weighted Mixed Regression Estimation Under Biased Stochastic Restrictions

    OpenAIRE

    ---, Shalabh; Heumann, Christian

    2007-01-01

    The paper considers the construction of estimators of regression coefficients in a linear regression model when some stochastic and biased a priori information is available. Such a priori information is framed as stochastic restrictions. The dominance conditions of the estimators are derived under the criterion of mean squared error matrix.

  10. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam

  11. Bayesian estimation of one-parameter qubit gates

    OpenAIRE

    Teklu, Berihu; Olivares, Stefano; Paris, Matteo G. A.

    2008-01-01

    We address estimation of one-parameter unitary gates for qubit systems and seek optimal probes and measurements. Single- and two-qubit probes are analyzed in detail, focusing on precision and stability of the estimation procedure. Bayesian inference is employed and compared with the ultimate quantum limits to precision, taking into account the biased nature of the Bayes estimator in the non-asymptotic regime. Besides, through the evaluation of the asymptotic a posteriori distribution for the ...

  12. SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION

    Science.gov (United States)

    Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...

  13. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000).The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii...... spatial scales. At full-scale wastewater treatment plants (WWTPs), mechanistic modelling using the ASM framework and concept (e.g. Henze et al., 2000) has become an important part of the engineering toolbox for process engineers. It supports plant design, operation, optimization and control applications......). Models have also been used as an integral part of the comprehensive analysis and interpretation of data obtained from a range of experimental methods from the laboratory, as well as pilot-scale studies to characterise and study wastewater treatment plants. In this regard, models help to properly explain

  14. Weak Lensing Peak Finding: Estimators, Filters, and Biases

    CERN Document Server

    Schmidt, Fabian

    2010-01-01

    Large catalogs of shear-selected peaks have recently become a reality. In order to properly interpret the abundance and properties of these peaks, it is necessary to take into account the effects of the clustering of source galaxies, among themselves and with the lens. In addition, the preferred selection of lensed galaxies in a flux- and size-limited sample leads to fluctuations in the apparent source density which correlate with the lensing field (lensing bias). In this paper, we investigate these issues for two different choices of shear estimators which are commonly in use today: globally-normalized and locally-normalized estimators. While in principle equivalent, in practice these estimators respond differently to systematic effects such as lensing bias and cluster member dilution. Furthermore, we find that which estimator is statistically superior depends on the specific shape of the filter employed for peak finding; suboptimal choices of the estimator+filter combination can result in a suppression of t...

  15. A Method for Estimating BeiDou Inter-frequency Satellite Clock Bias

    Directory of Open Access Journals (Sweden)

    LI Haojun

    2016-02-01

    Full Text Available A new method for estimating the BeiDou inter-frequency satellite clock bias is proposed, considering the shortcomings of the current methods. The constant and variable parts of the inter-frequency satellite clock bias are considered in the new method. The data from 10 observation stations are processed to validate the new method. The characteristics of the BeiDou inter-frequency satellite clock bias are also analyzed using the computed results. The results indicate that the BeiDou inter-frequency satellite clock bias is stable in the short term. The estimated BeiDou inter-frequency satellite clock bias results are then modeled. The model results show that the 10-parameter model for each satellite can represent the BeiDou inter-frequency satellite clock bias well, and the accuracy reaches cm level. When the model parameters of the first day are used to compute the BeiDou inter-frequency satellite clock bias of the second day, the accuracy also reaches cm level. Based on this stability and modeling, a strategy for the BeiDou satellite clock service is presented to provide a reference for BeiDou.

  16. Review Of Parameter Estimation Using Adaptive Filtering

    Directory of Open Access Journals (Sweden)

    LALITA RANI, SHALOO KIKAN

    2013-07-01

    Full Text Available In this paper, a comparative study of different adaptive filter algorithms for channel parameter estimation is described. We present different parameter estimation approaches based on adaptive filtering. An extended Kalman filter is then applied as a near-optimal solution to the adaptive channel parameter estimation problem. Kalman filtering is applied to motion parameters, resulting in optimal pose estimation. A parallel Kalman filter is applied for joint estimation of code delay, multipath gains and Doppler shift. In this paper, a complete review of parameter estimation using adaptive filtering is given.

  17. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
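    The core point, that a naive chi^2 fit to Poisson counts is biased low while the Poisson maximum likelihood is not, can be reproduced in a few lines. Minimizing sum_i (n_i - mu)^2 / n_i gives the harmonic mean of the counts, which always lies below the maximum-likelihood answer, the arithmetic mean (a toy single-parameter version of the record's spectral-fitting problem):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Histogram counts, all bins drawn from one Poisson rate mu.
counts = np.array([8.0, 12.0, 9.0, 15.0, 6.0])

# Naive "modified" chi^2 with per-bin variance taken as n_i: minimizing
# sum_i (n_i - mu)^2 / n_i yields the harmonic mean of the counts.
chi2_fit = len(counts) / np.sum(1.0 / counts)

# Poisson maximum likelihood (Cash statistic): the optimum is the mean.
neg_log_like = lambda mu: np.sum(mu - counts * np.log(mu))
ml_fit = minimize_scalar(neg_log_like, bounds=(1e-6, 100.0),
                         method="bounded").x

# chi2_fit is about 9.05, biased low; ml_fit recovers counts.mean() = 10.0
```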

  18. Impact of Baryonic Processes on Weak Lensing Cosmology: Higher-Order Statistics and Parameter Bias

    CERN Document Server

    Osato, Ken; Yoshida, Naoki

    2015-01-01

    We study the impact of baryonic physics on cosmological parameter estimation with weak lensing surveys. We run a set of cosmological hydrodynamics simulations with different galaxy formation models. We then perform ray-tracing simulations through the total matter density field to generate 100 independent convergence maps of 25 deg$^2$ field-of-view, and use them to examine the ability of the following three lensing statistics as cosmological probes; power spectrum, peak counts, and Minkowski Functionals. For the upcoming wide-field observations such as Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of 1400 deg$^2$, the higher-order statistics provide tight constraints on the matter density, density fluctuation amplitude, and dark energy equation of state, but appreciable parameter bias is induced by the baryonic processes such as gas cooling and stellar feedback. When we use power spectrum, peak counts, and Minkowski Functionals, the relative bias in the dark energy equation of state parameter $w$ ...

  19. Cosmological Parameters Degeneracies and Non-Gaussian Halo Bias

    CERN Document Server

    Carbone, Carmelita; Verde, Licia

    2010-01-01

    We study the impact of the cosmological parameters uncertainties on the measurements of primordial non-Gaussianity through the large-scale non-Gaussian halo bias effect. While this is not expected to be an issue for the standard LCDM model, it may not be the case for more general models that modify the large-scale shape of the power spectrum. We consider the so-called local non-Gaussianity model and forecasts from planned surveys, alone and combined with a Planck CMB prior. In particular, we consider EUCLID- and LSST-like surveys and forecast the correlations among $f_{\rm NL}$ and the running of the spectral index $\alpha_s$, the dark energy equation of state $w$, the effective sound speed of dark energy perturbations $c^2_s$, the total mass of massive neutrinos $M_\nu$ ...

  20. Estimation of Synchronous Machine Parameters

    OpenAIRE

    Oddvar Hallingstad

    1980-01-01

    The present paper gives a short description of an interactive estimation program based on the maximum likelihood (ML) method. The program may also perform identifiability analysis by calculating sensitivity functions and the Hessian matrix. For the short circuit test the ML method is able to estimate the q-axis subtransient reactance x''q, which is not possible by means of the conventional graphical method (another set of measurements has to be used). By means of the synchronization and close...

  1. Joint MAP bias estimation and data association: simulations

    Science.gov (United States)

    Danford, Scott; Kragel, Bret; Poore, Aubrey

    2007-09-01

    The problem of joint maximum a posteriori (MAP) bias estimation and data association belongs to a class of nonconvex mixed integer nonlinear programming problems. These problems are difficult to solve due to both the combinatorial nature of the problem and the nonconvexity of the objective function or constraints. Algorithms for this class of problems have been developed in a companion paper of the authors. This paper presents simulations that compare the "all-pairs" heuristic, the k-best heuristic, and a partial A*-based branch and bound algorithm. The combination of the latter two algorithms is an excellent candidate for use in a real-time system. For an optimal algorithm that also computes the k-best solutions of the joint MAP bias estimation problem and data association problem, we investigate a branch and bound framework that employs either a depth-first algorithm or an A*-search procedure. In addition, we demonstrate the improvements due to a new gating procedure.

  2. Reduced bias and threshold choice in the extremal index estimation through resampling techniques

    Science.gov (United States)

    Gomes, Dora Prata; Neves, Manuela

    2013-10-01

    In Extreme Value Analysis there are a few parameters of particular interest, among which we refer to the extremal index, a measure of the clustering of extreme events. It is of great interest for dependent samples, the common situation in practice. Most semi-parametric estimators of this parameter show the same behavior: nice asymptotic properties but a high variance for small values of k, the number of upper order statistics used in the estimation, and a high bias for large values of k. The Mean Square Error, a measure that encompasses bias and variance, usually shows a very sharp plot, making an adequate choice of k essential. Starting from classical extremal index estimators considered in the literature, the emphasis is given to deriving reduced-bias estimators with more stable sample paths, obtained through resampling techniques. An adaptive algorithm for choosing the level k and obtaining a reliable estimate of the extremal index is used. This algorithm has shown good results, but some improvements are still required. A simulation study illustrates the properties of the estimators and the performance of the proposed adaptive algorithm.
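    A minimal illustration of extremal index estimation: the simple blocks estimator, not the reduced-bias resampling estimators the record studies. The moving-maximum series below is a standard textbook example with theta = 1/2 (each large innovation produces a cluster of two exceedances):

```python
import numpy as np

def blocks_estimator(x, u, b):
    """Blocks estimator of the extremal index: the ratio of blocks with at
    least one exceedance of the threshold u to the total exceedance count."""
    n = len(x) // b * b
    blocks = x[:n].reshape(-1, b)
    n_exc = np.count_nonzero(x[:n] > u)
    n_hit = np.count_nonzero((blocks > u).any(axis=1))
    return n_hit / n_exc if n_exc else np.nan

rng = np.random.default_rng(3)
z = rng.standard_exponential(10001)
# Moving maximum of two iid terms: exceedances occur in pairs, theta = 1/2.
x = np.maximum(z[:-1], z[1:])

u = np.quantile(x, 0.99)      # high threshold: k ~ 1% of the sample
theta_hat = blocks_estimator(x, u, b=50)
```

    Repeating this over a grid of thresholds u (equivalently, of k) reproduces the variance-bias trade-off the abstract describes.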

  3. An assessment of Bayesian bias estimator for numerical weather prediction

    Directory of Open Access Journals (Sweden)

    J. Son

    2008-12-01

    Full Text Available Various statistical methods are used to process operational Numerical Weather Prediction (NWP products with the aim of reducing forecast errors and they often require sufficiently large training data sets. Generating such a hindcast data set for this purpose can be costly and a well designed algorithm should be able to reduce the required size of these data sets.

    This issue is investigated with the relatively simple case of bias correction, by comparing a Bayesian algorithm of bias estimation with the conventionally used empirical method. As available forecast data sets are not large enough for a comprehensive test, synthetically generated time series representing the analysis (truth) and the forecast are used to increase the sample size. Since these synthetic time series retain the statistical characteristics of the observations and operational NWP model output, the results of this study can be extended to real observations and forecasts, and this is confirmed by a preliminary test with real data.

    By using the climatological mean and standard deviation of the meteorological variable in consideration and the statistical relationship between the forecast and the analysis, the Bayesian bias estimator outperforms the empirical approach in terms of the accuracy of the estimated bias, and it can reduce the required size of the training sample by a factor of 3. This advantage of the Bayesian approach is due to the fact that it is less susceptible to sampling error in consecutive sampling. These results suggest that a carefully designed statistical procedure may reduce the need for the costly generation of large hindcast data sets.
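    The flavour of the comparison can be sketched with a conjugate normal model for a constant forecast bias: the Bayesian posterior mean shrinks the small-sample empirical bias toward a climatological prior, lowering the mean squared error. All numbers below are hypothetical, not the record's experiment:

```python
import numpy as np

def bayes_bias(d, sigma, m0=0.0, s0=2.0):
    """Posterior mean of a constant bias beta under the conjugate model
    d_i ~ N(beta, sigma^2) with climatological prior beta ~ N(m0, s0^2)."""
    n = len(d)
    precision = n / sigma**2 + 1.0 / s0**2
    return (d.sum() / sigma**2 + m0 / s0**2) / precision

rng = np.random.default_rng(4)
true_bias, sigma = 1.5, 2.0          # hypothetical forecast-error statistics
reps, n_train = 2000, 5              # many short training samples
emp_sq_err = np.empty(reps)
bay_sq_err = np.empty(reps)
for i in range(reps):
    d = rng.normal(true_bias, sigma, n_train)    # forecast-minus-analysis
    emp_sq_err[i] = (d.mean() - true_bias) ** 2
    bay_sq_err[i] = (bayes_bias(d, sigma) - true_bias) ** 2
```

    With only five training days, the shrinkage estimator trades a little bias for a large variance reduction, which is the mechanism behind the smaller training sets reported in the record.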

  4. On the estimation and correction of bias in local atrophy estimations using example atrophy simulations.

    Science.gov (United States)

    Sharma, Swati; Rousseau, François; Heitz, Fabrice; Rumbach, Lucien; Armspach, Jean-Paul

    2013-01-01

    Brain atrophy is considered an important marker of disease progression in many chronic neuro-degenerative diseases such as multiple sclerosis (MS). A great deal of attention is being paid toward developing tools that manipulate magnetic resonance (MR) images for obtaining an accurate estimate of atrophy. Nevertheless, artifacts in MR images, inaccuracies of intermediate steps and inadequacies of the mathematical model representing the physical brain volume change, make it rather difficult to obtain a precise and unbiased estimate. This work revolves around the nature and magnitude of bias in atrophy estimations as well as a potential way of correcting them. First, we demonstrate that for different atrophy estimation methods, bias estimates exhibit varying relations to the expected atrophy and these bias estimates are of the order of the expected atrophies for standard algorithms, stressing the need for bias correction procedures. Next, a framework for estimating uncertainty in longitudinal brain atrophy by means of constructing confidence intervals is developed. Errors arising from MRI artifacts and bias in estimations are learned from example atrophy simulations and anatomies. Results are discussed for three popular non-rigid registration approaches with the help of simulated localized brain atrophy in real MR images. PMID:23988649

  5. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
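    The resampling procedure described in this abstract (draw bias parameters from assigned distributions, adjust the conventional estimate, and summarize the adjusted results) can be sketched as follows; the bias distribution and its parameters below are hypothetical placeholders, not those of the study:

```python
import random
import math

random.seed(42)

def bias_analysis(conventional_rr, n_iter=10000):
    """Monte Carlo bias analysis: repeatedly draw a bias parameter,
    adjust the conventional estimate for it, and summarize the
    resulting frequency distribution of adjusted estimates."""
    adjusted = []
    for _ in range(n_iter):
        # Hypothetical bias parameter: the log of a multiplicative
        # bias factor, drawn from an assumed normal distribution.
        log_bias = random.gauss(0.3, 0.2)
        # Adjust the conventional rate ratio for the sampled bias.
        adjusted.append(conventional_rr / math.exp(log_bias))
    adjusted.sort()
    median = adjusted[n_iter // 2]
    interval = (adjusted[int(0.025 * n_iter)], adjusted[int(0.975 * n_iter)])
    return median, interval

median, interval = bias_analysis(2.6)
print(median, interval)
```

With a bias distribution centered above zero, the median adjusted estimate falls below the conventional one, and the 95% simulation interval reflects both random error and the assumed systematic error.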

  6. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Kaufmann, Hendrik; Kruse, Robinson

    This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied...... indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...... application to a long annual US Debt/GDP series we consider rolling window estimation of autoregressive models. We find substantial evidence for time-varying persistence and periods of explosiveness during the Civil War and World War II. During the recent years, the series is nearly explosive again. Further...

  7. Improving uncertainty estimation in urban hydrological modeling by statistically describing bias

    Directory of Open Access Journals (Sweden)

    D. Del Giudice

    2013-04-01

    Full Text Available Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and computationally efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis in urban hydrology. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described by means of a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias (due to input and structural errors), and measurement errors. In our case study, the best performing bias description was the output-dependent bias using a log-sinh transformation of data and model results.
The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to directly
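    As a minimal sketch of the error model described above, the following simulates an autoregressive bias term whose innovation variance grows with the system output, on top of white measurement noise. The model form and all parameter values (`phi`, `sigma_b`, `kappa`, `sigma_e`) are illustrative assumptions, not the calibrated variants of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_total_error(model_output, phi=0.9, sigma_b=0.2,
                         kappa=0.5, sigma_e=0.1):
    """Decompose prediction error into an AR(1) bias term, whose
    innovation standard deviation grows with the modelled output
    (an assumed stand-in for output-dependent variance), plus
    white measurement noise."""
    n = len(model_output)
    bias = np.zeros(n)
    for t in range(1, n):
        # AR(1) bias; innovation std inflated during high flows.
        innov_std = sigma_b * (1.0 + kappa * model_output[t])
        bias[t] = phi * bias[t - 1] + rng.normal(0.0, innov_std)
    noise = rng.normal(0.0, sigma_e, n)  # white measurement error
    return bias, noise

flow = np.abs(np.sin(np.linspace(0, 6, 200)))  # toy runoff series
bias, noise = simulate_total_error(flow)
print(bias.std(), noise.std())
```

Because the bias is persistent (`phi` close to 1), it dominates the white noise, which is why neglecting it makes uncertainty bands overconfident.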

  8. Estimation of physical parameters in induction motors

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors...

  9. Postprocessing MPEG based on estimated quantization parameters

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2009-01-01

    the case where the coded stream is not accessible, or, from an architectural point of view, not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...

  10. Estimation for large non-centrality parameters

    Science.gov (United States)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with almost scalar matrices, examining estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large, we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use for the statistical inference inducing pivot variables, see Ferreira et al. 2013, and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.

  11. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
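    For the maximum likelihood estimators mentioned above, the two-parameter exponential case has a simple closed form: the location estimate is the sample minimum and the scale estimate is the sample mean minus that minimum. A small self-contained check (toy data, not the paper's simulation design):

```python
import random

random.seed(1)

def mle_two_param_exponential(x):
    """Maximum likelihood estimators for the two-parameter
    (location and scale) exponential distribution."""
    loc = min(x)                    # MLE of the location parameter
    scale = sum(x) / len(x) - loc   # MLE of the scale parameter
    return loc, scale

# Toy check: sample from an exponential with location 2 and scale 3,
# then recover both parameters.
sample = [2.0 + random.expovariate(1.0 / 3.0) for _ in range(5000)]
loc_hat, scale_hat = mle_two_param_exponential(sample)
print(loc_hat, scale_hat)
```

The location MLE is biased upward (the minimum always exceeds the true location), which is one motivation for the modified estimators (MME, MMLE) compared in the paper.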

  12. Bias, precision, and parameter redundancy in complex multistate models with unobservable states.

    Science.gov (United States)

    Bailey, Larissa L; Converse, Sarah J; Kendall, William L

    2010-06-01

    Multistate mark-recapture models with unobservable states can yield unbiased estimators of survival probabilities in the presence of temporary emigration (i.e., in cases where some individuals are temporarily unavailable for capture). In addition, these models permit the estimation of transition probabilities between states, which may themselves be of interest; for example, when only breeding animals are available for capture. However, parameter redundancy is frequently a problem in these models, yielding biased parameter estimates and influencing model selection. Using numerical methods, we examine complex multistate mark-recapture models involving two observable and two unobservable states. This model structure was motivated by two different biological systems: one involving island-nesting albatross, and another involving pond-breeding amphibians. We found that, while many models are theoretically identifiable given appropriate constraints, obtaining accurate and precise parameter estimates in practice can be difficult. Practitioners should consider ways to increase detection probabilities or adopt robust design sampling in order to improve the properties of estimates obtained from these models. We suggest that investigators interested in using these models explore both theoretical identifiability and possible near-singularity for likely parameter values using a combination of available methods. PMID:20583702

  13. Distributed Parameter Estimation in Probabilistic Graphical Models

    OpenAIRE

    Mizrahi, Yariv Dror; Denil, Misha; De Freitas, Nando

    2014-01-01

    This paper presents foundational theoretical results on distributed parameter estimation for undirected probabilistic graphical models. It introduces a general condition on composite likelihood decompositions of these models which guarantees the global consistency of distributed estimators, provided the local estimators are consistent.

  14. Joint MAP bias estimation and data association: algorithms

    Science.gov (United States)

    Danford, Scott; Kragel, Bret; Poore, Aubrey

    2007-09-01

    The problem of joint maximum a posteriori (MAP) bias estimation and data association belongs to a class of nonconvex mixed integer nonlinear programming problems. These problems are difficult to solve due to both the combinatorial nature of the problem and the nonconvexity of the objective function or constraints. A specific problem that has received some attention in the tracking literature is the target object map problem, in which one tries to match a set of tracks as observed by two different sensors in the presence of biases, which are modeled here as a translation between the track states. The general framework also applies to problems in which the costs are general nonlinear functions of the biases. The goal of this paper is to present a class of algorithms based on the branch and bound framework and the "all-pairs" and k-best heuristics that provide a good initial upper bound for a branch and bound algorithm. These heuristics can be used as part of a real-time algorithm or as part of an "anytime algorithm" within the branch and bound framework. In addition, we consider both the A*-search and depth-first search procedures as well as several efficiency improvements such as gating. While this paper focuses on the algorithms, a second paper will focus on simulations.

  15. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    Science.gov (United States)

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  16. Cosmological parameter estimation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
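    A minimal PSO sketch for such a minimization task is given below. The two-dimensional quadratic stands in for a negative log-likelihood surface; it is purely illustrative and unrelated to the actual CMB likelihood:

```python
import random

random.seed(0)

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal Particle Swarm Optimization: each particle is pulled
    toward its own best position and the swarm's global best."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "likelihood surface" with its minimum at (0.3, 0.7), standing
# in for -log(likelihood) over two parameters (illustrative only).
neg_log_like = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, best_val = pso_minimize(neg_log_like, [(0.0, 1.0), (0.0, 1.0)])
print(best, best_val)
```

Unlike MCMC, PSO only locates the optimum; it does not by itself yield posterior samples, which is the trade-off the abstract alludes to.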

  17. Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise

    International Nuclear Information System (INIS)

    Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)

  18. An enhanced algorithm to estimate BDS satellite's differential code biases

    Science.gov (United States)

    Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei

    2016-02-01

    This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs, where the mean value of day-to-day scattering is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB) is also used to process the same dataset. Results show that the DCB difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the DCB difference between the results of the IGGDCB algorithm and the DCB products from CODE and MGEX. In addition, we find that the day-to-day scattering of BDS IGSO satellites is clearly lower than that of GEO and MEO satellites, and a significant bias exists in the daily DCB values of GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimate the satellite DCBs of multiple GNSS systems.

  19. The power spectrum of systematics in cosmic shear tomography and the bias on cosmological parameters

    CERN Document Server

    Cardone, V F; Calabrese, E; Galli, S; Huang, Z; Maoli, R; Melchiorri, A; Scaramella, R

    2013-01-01

    Cosmic shear tomography has emerged as one of the most promising tools to both investigate the nature of dark energy and discriminate between General Relativity and modified gravity theories. In order to successfully achieve these goals, systematics in shear measurements have to be taken into account; their impact on the weak lensing power spectrum has to be carefully investigated in order to estimate the bias induced on the inferred cosmological parameters. To this end, we develop here an efficient tool to compute the power spectrum of systematics by propagating, in a realistic way, shear measurement, source properties and survey setup uncertainties. Starting from analytical results for unweighted moments and general assumptions on the relation between measured and actual shear, we derive analytical expressions for the multiplicative and additive bias, showing how these terms depend not only on the shape measurement errors, but also on the properties of the source galaxies (namely, size, magnitude and spectr...

  20. Estimation of distances to stars with stellar parameters from LAMOST

    CERN Document Server

    Carlin, Jeffrey L; Newberg, Heidi Jo; Beers, Timothy C; Chen, Li; Deng, Licai; Guhathakurta, Puragra; Hou, Jinliang; Hou, Yonghui; Lepine, Sebastien; Li, Guangwei; Luo, A-Li; Smith, Martin C; Wu, Yue; Yang, Ming; Yanny, Brian; Zhang, Haotong; Zheng, Zheng

    2015-01-01

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show ...

  1. State and parameter estimation in bio processes

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France)]|[Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)

    1994-12-31

    A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc. In this article, an adaptive estimation algorithm is proposed to recover the states and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments concerning the estimation of biomass and product concentrations and the specific growth rate, during batch, fed-batch and continuous fermentation processes, are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.

  2. Aggregation Bias in Estimating European Money Demand Functions

    OpenAIRE

    Wesche, Katrin

    1996-01-01

    Recently, money demand functions for a group of European countries have been estimated and generally have been found to perform better than most national money demand functions. While parameter equality is a sufficient condition for valid aggregation of linear equations, in money demand estimation often log-linear specifications are used, so that aggregation is in effect nonlinear. This makes the relation between the aggregate and the individual equations more complicated. To investigate if t...

  3. DEB parameters estimation for Mytilus edulis

    Science.gov (United States)

    Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Sousa, T.

    2011-11-01

    The potential of DEB theory to simulate an organism's life cycle has been demonstrated on numerous occasions. However, its applicability requires parameter estimates that are not easily obtained by direct observations. In recent years various attempts were made to estimate the main DEB parameters for bivalve species. Until recently, however, the estimation procedure was rather ad hoc and based on additional assumptions that were not always consistent with the DEB theory principles. A new approach has now been developed - the covariation method - based on simultaneous minimization of the weighted sum of squared deviations between data sets and model predictions in one single procedure. This paper presents the implementation of this method to estimate the DEB parameters for the blue mussel Mytilus edulis, using several data sets from the literature. After comparison with previous trials we conclude that the parameter set obtained by the covariation method leads to a better fit between model and observations, with potentially more consistency and robustness.
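    The core of the covariation method, minimizing one weighted sum of squared deviations over all data sets simultaneously, can be sketched as follows. The two linear "observation types" and their parameter dependence are toy stand-ins, not actual DEB model predictions:

```python
import numpy as np
from scipy.optimize import minimize

def covariation_loss(theta, datasets):
    """Weighted sum of squared deviations between all data sets and
    the predictions of one shared parameter vector, minimized in a
    single procedure."""
    total = 0.0
    for x, y, weight, predict in datasets:
        r = y - predict(theta, x)
        total += weight * np.sum(r ** 2)
    return total

# Toy stand-ins for two observation types that share the same two
# parameters a and b (hypothetical, for illustration only).
rng = np.random.default_rng(7)
x1 = np.linspace(0, 1, 50); y1 = 2.0 * x1 + rng.normal(0, 0.05, 50)
x2 = np.linspace(0, 1, 50); y2 = 2.0 * x2 + 0.5 + rng.normal(0, 0.05, 50)
datasets = [
    (x1, y1, 1.0, lambda th, x: th[0] * x),          # depends on a
    (x2, y2, 1.0, lambda th, x: th[0] * x + th[1]),  # depends on a and b
]
fit = minimize(covariation_loss, x0=[1.0, 0.0], args=(datasets,),
               method="Nelder-Mead")
print(fit.x)
```

Because every data set constrains the shared parameters jointly, the estimates are typically more consistent than fitting each data set in isolation, which is the advantage the abstract reports.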

  4. On Carleman estimates with two large parameters

    International Nuclear Information System (INIS)

    We provide a general framework for the analysis and the derivation of Carleman estimates with two large parameters. For an appropriate form of weight functions strong pseudo-convexity conditions are shown to be necessary and sufficient.

  5. Error covariance calculation for forecast bias estimation in hydrologic data assimilation

    Science.gov (United States)

    Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.

    2015-12-01

    To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made to incorporate dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification of the explicit propagation of the bias error covariance. The objective of this paper is to examine to which extent the choice for the propagation of the bias estimate and its error covariance influences the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which groundwater storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
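    A scalar sketch of the scheme discussed above (persistent bias propagation, with the forecast bias error covariance taken as a constant fraction `gamma` of the state error covariance) might look like the following. The filter is a simplified two-stage bias filter and all settings, including the toy truth with a forcing term the model lacks, are hypothetical:

```python
import random

random.seed(3)

def assimilate_with_bias(y_obs, x0, m=0.95, q=0.01, r=0.04, gamma=0.3):
    """Scalar Kalman filter with online persistent-bias estimation:
    the bias forecast is simply the previous bias estimate, and its
    error covariance is a constant fraction gamma of the unbiased
    state error covariance (the simplification discussed above)."""
    x_a, b, p = x0, 0.0, 1.0
    for y in y_obs:
        # Forecast step: biased model propagation, persistent bias.
        x_f = m * x_a
        p = m * p * m + q
        # Bias update (simplified two-stage bias filter).
        p_b = gamma * p
        k_b = p_b / (p_b + r)
        innov = y - (x_f - b)
        b = b - k_b * innov
        # State update with the bias-corrected forecast.
        x_tilde = x_f - b
        k = p / (p + r)
        x_a = x_tilde + k * (y - x_tilde)
        p = (1 - k) * p
    return x_a, b

# Toy truth: same dynamics plus a constant forcing of 0.2 that the
# model lacks, so the forecast is persistently biased low (b ~ -0.2).
truths, ys, truth = [], [], 1.0
for _ in range(400):
    truth = 0.95 * truth + 0.2 + random.gauss(0, 0.1)
    truths.append(truth)
    ys.append(truth + random.gauss(0, 0.2))

x_hat, b_hat = assimilate_with_bias(ys, x0=1.0)
print(x_hat, b_hat)
```

The bias estimate converges toward the persistent forecast error, and the bias-corrected analysis tracks the truth despite the structural model error.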

  6. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  7. Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.

    2011-01-01

    In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of the Narayani Basin at the Devghat hydrometric station (32,000 km²) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge-observed rainfall inputs using kriging interpolation from 2003 was used for calibration and 2004 for validation to simulate stream flow, both with a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Center's rainfall estimates (CPC-RFE2.0), using the same calibrated parameters, model performance for 2003 deteriorated but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting the CPC-RFE2.0 by a seasonal, monthly and 7-day moving average ratio, improvement in model performance was achieved. Furthermore, new gauge-satellite merged rainfall estimates obtained from ingestion of local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
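    The monthly-ratio form of bias adjustment mentioned above can be sketched in a few lines: scale each satellite estimate by the ratio of gauge to satellite totals for its month. The data below are invented for illustration:

```python
def monthly_bias_adjust(satellite, gauge, month_index):
    """Adjust satellite rainfall by a multiplicative monthly ratio:
    each satellite value is scaled by (gauge total / satellite total)
    of its calendar month."""
    g_tot, s_tot = {}, {}
    for s, g, m in zip(satellite, gauge, month_index):
        g_tot[m] = g_tot.get(m, 0.0) + g
        s_tot[m] = s_tot.get(m, 0.0) + s
    ratio = {m: g_tot[m] / s_tot[m] if s_tot[m] > 0 else 1.0
             for m in s_tot}
    return [s * ratio[m] for s, m in zip(satellite, month_index)]

# Toy data: the satellite underestimates June rain by 20% and July
# rain by 40% (hypothetical values).
sat = [8.0, 10.0, 6.0, 3.0]
gauge = [10.0, 12.5, 10.0, 5.0]
months = [6, 6, 7, 7]
adjusted = monthly_bias_adjust(sat, gauge, months)
print(adjusted)
```

A multiplicative ratio preserves dry days (zero stays zero) and matches the gauge total within each month, which is why such simple schemes already improve flood simulations noticeably.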

  8. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.

  9. Effect of Bias Estimation on Coverage Accuracy of Bootstrap Confidence Intervals for a Probability Density

    OpenAIRE

    Hall, Peter

    1992-01-01

    The bootstrap is a poor estimator of bias in problems of curve estimation, and so bias must be corrected by other means when the bootstrap is used to construct confidence intervals for a probability density. Bias may either be estimated explicitly, or allowed for by undersmoothing the curve estimator. Which of these two approaches is to be preferred? In the present paper we address this question from the viewpoint of coverage accuracy, assuming a given number of derivatives of the unknown den...

  10. LISA parameter estimation using numerical merger waveforms

    CERN Document Server

    Thorpe, J I; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G

    2008-01-01

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of one million Solar masses at a redshift of one were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  11. LISA parameter estimation using numerical merger waveforms

    Energy Technology Data Exchange (ETDEWEB)

    Thorpe, J I; McWilliams, S T; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G, E-mail: James.I.Thorpe@nasa.go [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States)

    2009-05-07

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 M_⊙ at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.

  12. LISA parameter estimation using numerical merger waveforms

    International Nuclear Information System (INIS)

    Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 M_⊙ at a redshift of z ∼ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.
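    The Fisher-matrix error estimate used in these LISA studies can be sketched generically: for a signal model h(t; θ) in white Gaussian noise of standard deviation σ, F_ij = (1/σ²) Σ_t (∂h/∂θ_i)(∂h/∂θ_j), and the parameter covariance is approximated by F⁻¹. The toy sinusoidal model and numerical step size below are illustrative assumptions, not the MBHB waveforms of the papers.

```python
import numpy as np

def fisher_errors(model, theta, t, sigma, eps=1e-6):
    """1-sigma parameter errors from the Fisher information matrix.

    For samples h(t; theta) in white Gaussian noise of std sigma,
    F_ij = (1/sigma^2) * sum_t dh/dtheta_i * dh/dtheta_j, and the
    parameter covariance is approximated by F^-1.
    """
    theta = np.asarray(theta, float)
    grads = []
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps
        # central finite difference of the signal w.r.t. parameter i
        grads.append((model(theta + dp, t) - model(theta - dp, t)) / (2 * eps))
    G = np.array(grads)                  # (n_params, n_samples)
    F = G @ G.T / sigma**2               # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))

# toy signal model (not an MBHB waveform): h = A*sin(2*pi*f*t)
model = lambda th, t: th[0] * np.sin(2 * np.pi * th[1] * t)
t = np.linspace(0.0, 10.0, 1000)
errs = fisher_errors(model, [1.0, 0.5], t, sigma=0.1)
```

    The same recipe applies to any differentiable waveform model; only the gradient computation changes.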

  13. Parameter Estimation of Noise Corrupted Sinusoids

    CERN Document Server

    O'Brien, Francis J; Johnnie, Nathan

    2011-01-01

    Existing algorithms for fitting the parameters of a sinusoid to noisy discrete time observations are not always successful due to initial value sensitivity and other issues. This paper demonstrates the techniques of FIR filtering, Fast Fourier Transform, and nonlinear least squares minimization as useful in the parameter estimation of amplitude, frequency and phase exemplified for a low-frequency time-delayed sinusoid describing simple harmonic motion. Alternative means are described for estimating frequency and phase angle. An autocorrelation function for harmonic motion is also derived.
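    The FFT-plus-least-squares pipeline described can be sketched as follows: the FFT peak provides the frequency, after which amplitude and phase follow from a *linear* least-squares fit, since A·sin(ωt + φ) = a·sin(ωt) + b·cos(ωt). This is a minimal sketch; the paper additionally uses FIR pre-filtering and nonlinear refinement.

```python
import numpy as np

def fit_sinusoid(t, y):
    """Estimate (amplitude, frequency, phase) of y ~ A*sin(2*pi*f*t + phi).

    The FFT peak gives the frequency (uniform sampling assumed); amplitude
    and phase then come from a linear least-squares fit, because
    A*sin(w*t + phi) = a*sin(w*t) + b*cos(w*t).
    """
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    dt = t[1] - t[0]
    spec = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(len(y), dt)
    f = freqs[np.argmax(np.abs(spec[1:])) + 1]    # skip the DC bin
    w = 2 * np.pi * f
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a, b), *rest = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    return float(np.hypot(a, b)), float(f), float(np.arctan2(b, a))
```

    The grid-limited FFT frequency can be used as the initial value for a nonlinear refinement, which addresses the initial-value sensitivity mentioned in the abstract.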

  14. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like filter error method, H-infinity and Artificial Neural Networks are finding more and mor

  15. Parameter estimation of the WMTD model

    Institute of Scientific and Technical Information of China (English)

    LUO Ji; QIU Hong-bing

    2009-01-01

    The MTD (mixture transition distribution) model based on the Weibull distribution (the WMTD model) is proposed in this paper, with a focus on its parameter estimation. An EM algorithm for estimation is given and shown to work well in simulations, and the bootstrap method is used to obtain confidence regions for the parameters. Finally, the results of a real example--predicting stock prices--show that the proposed WMTD model is able to capture the features of data from thick-tailed distributions better than the GMTD (Gaussian mixture transition distribution) model.

  16. Hurst Parameter Estimation Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    S. Ledesma-Orozco

    2011-08-01

    Full Text Available The Hurst parameter captures the amount of long-range dependence (LRD) in a time series. There are several methods to estimate the Hurst parameter, the most popular being the variance-time plot, the R/S plot, the periodogram, and Whittle's estimator. The first three are graphical methods, and the estimation accuracy depends on how the plot is interpreted and calculated. In contrast, Whittle's estimator is based on a maximum likelihood technique and does not depend on a graph reading; however, it is computationally expensive. A new method to estimate the Hurst parameter is proposed. This new method is based on an artificial neural network. Experimental results show that this method outperforms traditional approaches and can be used in applications where a fast and accurate estimate of the Hurst parameter is required, e.g., computer network traffic control. Additionally, the Hurst parameter was computed on series of different lengths using several methods. The simulation results show that the proposed method is at least ten times faster than traditional methods.
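    Of the traditional estimators listed, the variance-time method is the easiest to sketch: for block size m, the variance of the aggregated block means scales as m^(2H-2), so H follows from the slope of a log-log fit. This is a minimal sketch of that classical method, not the paper's neural-network estimator.

```python
import numpy as np

def hurst_variance_time(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Variance-time estimate of the Hurst parameter.

    For a self-similar series, Var(mean of blocks of size m) ~ m^(2H - 2);
    H is recovered from the slope of log Var versus log m.
    """
    x = np.asarray(x, float)
    logm, logv = [], []
    for m in block_sizes:
        n = len(x) // m
        means = x[:n * m].reshape(n, m).mean(axis=1)   # aggregate in blocks
        logm.append(np.log(m))
        logv.append(np.log(means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1.0 + slope / 2.0
```

    For uncorrelated noise the block variance decays as 1/m, giving slope -1 and hence H ≈ 0.5; LRD series decay more slowly, pushing H toward 1.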

  17. Multi-Parameter Estimation for Orthorhombic Media

    KAUST Repository

    Masmoudi, Nabil

    2015-08-19

    Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.

  18. Performance Analysis of Parameter Estimation Using LASSO

    OpenAIRE

    Panahi, Ashkan; Viberg, Mats

    2012-01-01

    The Least Absolute Shrinkage and Selection Operator (LASSO) has gained attention in a wide class of continuous parametric estimation problems with promising results. It has been a subject of research for more than a decade. Due to the nature of LASSO, the previous analyses have been non-parametric. This ignores useful information and makes it difficult to compare LASSO to traditional estimators. In particular, the role of the regularization parameter and super-resolution properties of LASSO h...

  19. Biosorption Parameter Estimation with Genetic Algorithm

    OpenAIRE

    Yung-Tse Hung; Eui Yong Kim; Xiao Feng; Khim Hoong Chu

    2011-01-01

    In biosorption research, a fairly broad range of mathematical models are used to correlate discrete data points obtained from batch equilibrium, batch kinetic or fixed bed breakthrough experiments. Most of these models are inherently nonlinear in their parameters. Some of the models have enjoyed widespread use, largely because they can be linearized to allow the estimation of parameters by least-squares linear regression. Selecting a model for data correlation appears to be dictated by the ea...

  20. Spin bath narrowing with adaptive parameter estimation

    OpenAIRE

    Cappellaro, Paola

    2012-01-01

    We present a measurement scheme capable of achieving the quantum limit of parameter estimation using an adaptive strategy that minimizes the parameter's variance at each step. The adaptive rule we propose makes the scheme robust against errors, in particular imperfect readouts, a critical requirement to extend adaptive schemes from quantum optics to solid-state sensors. Thanks to recent advances in single-shot readout capabilities for electronic spins in the solid state (such as Nitrogen Vaca...

  1. Parameter Estimation of Noise Corrupted Sinusoids

    OpenAIRE

    O'Brien, W., Jr.; Johnnie, Nathan

    2011-01-01

    Existing algorithms for fitting the parameters of a sinusoid to noisy discrete time observations are not always successful due to initial value sensitivity and other issues. This paper demonstrates the techniques of FIR filtering, Fast Fourier Transform, circular autocorrelation, and nonlinear least squares minimization as useful in the parameter estimation of amplitude, frequency and phase exemplified for a low-frequency time-delayed sinusoid describing simple harmonic motion. Alternative mea...

  2. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of the algorithms gives a unique and very best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
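    Exact computation of Tukey's half-space depth is expensive in higher dimensions, but a random-projection Monte Carlo approximation conveys the idea used here to rank parameter vectors. The function and its defaults are illustrative, not the paper's implementation.

```python
import numpy as np

def tukey_depth(point, cloud, n_dirs=500, rng=None):
    """Monte Carlo approximation of Tukey's half-space depth.

    Depth = min over directions u of the fraction of cloud points in the
    halfspace {x : u.(x - point) >= 0}; random directions give an
    approximation that tightens as n_dirs grows.
    """
    rng = np.random.default_rng(rng)
    point = np.asarray(point, float)
    cloud = np.asarray(cloud, float)
    u = rng.standard_normal((n_dirs, cloud.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit directions
    proj = (cloud - point) @ u.T                    # (n_points, n_dirs)
    frac_above = (proj >= 0).mean(axis=0)
    # each direction and its negation form a pair: take the smaller side
    return float(np.minimum(frac_above, 1 - frac_above).min())
```

    Central points of the cloud have depth near 0.5, points outside the cloud have depth near 0; robust parameter vectors are those of high depth relative to the well-performing set.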

  3. Evaluating treatment effectiveness under model misspecification: a comparison of targeted maximum likelihood estimation with bias-corrected matching

    OpenAIRE

    Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...

  4. Estimation of accuracy and bias in genetic evaluations with genetic groups using sampling

    NARCIS (Netherlands)

    Hickey, J.M.; Keane, M.G.; Kenny, D.A.; Cromie, A.R.; Mulder, H.A.; Veerkamp, R.F.

    2008-01-01

    Accuracy and bias of estimated breeding values are important measures of the quality of genetic evaluations. A sampling method that accounts for the uncertainty in the estimation of genetic group effects was used to calculate accuracy and bias of estimated effects. The method works by repeatedly sim

  5. The Use of Propensity Scores and Observational Data to Estimate Randomized Controlled Trial Generalizability Bias

    OpenAIRE

    Pressler, Taylor R.; Kaizar, Eloise E.

    2013-01-01

    While randomized controlled trials (RCT) are considered the “gold standard” for clinical studies, the use of exclusion criteria may impact the external validity of the results. It is unknown whether estimators of effect size are biased by excluding a portion of the target population from enrollment. We propose to use observational data to estimate the bias due to enrollment restrictions, which we term generalizability bias. In this paper we introduce a class of estimators for the generalizabi...

  6. Aquifer parameter estimation from surface resistivity data.

    Science.gov (United States)

    Niwas, Sri; de Lima, Olivar A L

    2003-01-01

    This paper is devoted to the additional use, other than ground water exploration, of surface geoelectrical sounding data for aquifer hydraulic parameter estimation. In a mesoscopic framework, approximate analytical equations are developed separately for saline and for fresh water saturations. A few existing useful aquifer models, both for clean and shaley sandstones, are discussed in terms of their electrical and hydraulic effects, along with the linkage between the two. These equations are derived for insight and physical understanding of the phenomenon. At the macroscopic scale, a general aquifer model is proposed and analytical relations are derived for meaningful estimation, with a higher level of confidence, of hydraulic parameters from electrical parameters. The physical reasons for the two different equations at the macroscopic level are explicitly explained to avoid confusion. Numerical examples from the existing literature are reproduced to buttress our viewpoint. PMID:12533080

  7. A class of shrinkage estimators for the shape parameter of the Weibull lifetime model

    Directory of Open Access Journals (Sweden)

    Zuhair Alhemyari

    2012-03-01

    Full Text Available In this paper, we propose two classes of shrinkage estimators for the shape parameter of the Weibull distribution in censored samples. The proposed estimators are studied theoretically and have been compared numerically with existing estimators. Computer-intensive calculations of bias and relative efficiency show that, for different significance levels and for varying constants involved in the proposed estimators, the proposed testimators fare better than classical and existing estimators.
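    The generic idea behind such shrinkage estimators is θ̂_s = k·θ̂ + (1−k)·θ₀: pull a noisy classical estimate toward a prior guess θ₀. The minimal simulation below uses a Gaussian noise model as a stand-in for the classical estimator (not the paper's censored-sample Weibull estimator) to show the MSE gain when the guess is close to the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, beta_guess, k = 2.0, 2.1, 0.5   # prior guess close to the truth

# crude stand-in for a noisy classical estimator of the Weibull shape
raw = beta_true + 0.4 * rng.standard_normal(10000)
shrunk = k * raw + (1 - k) * beta_guess    # theta_s = k*theta_hat + (1-k)*theta_0

mse_raw = float(np.mean((raw - beta_true) ** 2))
mse_shrunk = float(np.mean((shrunk - beta_true) ** 2))
```

    Shrinkage trades variance (reduced by k²) against bias (introduced when θ₀ is off-target), which is why the papers condition the amount of shrinkage on a preliminary test of θ₀.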

  8. Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.

    Science.gov (United States)

    Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan

    2016-01-01

    This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node; Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and then the state estimates are fused via consensus processing; Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter. PMID:27598156

  9. Parameter estimation in channel network flow simulation

    Institute of Scientific and Technical Information of China (English)

    Han Longxi

    2008-01-01

    Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.

  10. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
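    A simple distribution-free sketch of this estimation problem uses quantile matching: the scale comes from the interquartile-range ratio and the location from the medians. This is a deliberately crude stand-in for the asymptotic-likelihood estimators of the paper.

```python
import numpy as np

def location_scale_fit(x, y):
    """Quantile-based estimates of (mu, sigma) in Y ~ mu + sigma*X.

    sigma from the interquartile-range ratio, mu from the medians;
    no distributional form is assumed.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    qx = np.percentile(x, [25, 50, 75])
    qy = np.percentile(y, [25, 50, 75])
    sigma = (qy[2] - qy[0]) / (qx[2] - qx[0])   # IQR(Y) / IQR(X)
    mu = qy[1] - sigma * qx[1]                  # median matching
    return float(mu), float(sigma)
```

    Because quantiles are equivariant under monotone location-scale maps, the estimates are consistent for any continuous parent distribution, though typically less efficient than the likelihood-based approach.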

  11. Multiple Parameter Estimation With Quantized Channel Output

    CERN Document Server

    Mezghani, Amine; Nossek, Josef A

    2010-01-01

    We present a general problem formulation for optimal parameter estimation based on quantized observations, with application to antenna array communication and processing (channel estimation, time-of-arrival (TOA) and direction-of-arrival (DOA) estimation). The work is of interest in the case when low resolution A/D-converters (ADCs) have to be used to enable higher sampling rate and to simplify the hardware. An Expectation-Maximization (EM) based algorithm is proposed for solving this problem in a general setting. Besides, we derive the Cramer-Rao Bound (CRB) and discuss the effects of quantization and the optimal choice of the ADC characteristic. Numerical and analytical analysis reveals that reliable estimation may still be possible even when the quantization is very coarse.
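    A one-parameter version of estimation from quantized observations illustrates the paper's main point that even very coarse quantization can permit reliable estimation: with 1-bit ADCs, y_i = sign(θ + n_i) and Gaussian noise of known standard deviation, the MLE inverts the standard normal CDF at the observed fraction of positive signs. The scalar case below needs no EM machinery; the function is an illustrative sketch.

```python
import math
import numpy as np

def ml_mean_from_signs(signs, sigma, lo=-10.0, hi=10.0):
    """ML estimate of theta from 1-bit observations sign(theta + noise).

    With Gaussian noise of known std sigma, P(+1) = Phi(theta/sigma), so
    the MLE inverts the standard normal CDF at the observed fraction of
    +1's (done here by bisection).
    """
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF
    p = float(np.mean(np.asarray(signs) > 0))
    for _ in range(80):                  # bisection on Phi(theta/sigma) = p
        mid = 0.5 * (lo + hi)
        if phi(mid / sigma) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The price of 1-bit quantization appears as an inflated variance (roughly a factor of π/2 at θ = 0 relative to unquantized samples), consistent with the CRB analysis in the record.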

  12. Sensor Placement for Modal Parameter Subset Estimation

    DEFF Research Database (Denmark)

    Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars

    2016-01-01

    The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency responses carry on the selected modal parameter subset is, in some sense, maximized. The approach is validated in the context of a simple 10-DOF mass-spring-damper system by computing the variance of a set of identified modal parameters in a Monte Carlo setting for a set of sensor configurations. It is shown that the widely used Effective Independence (EI) method, which uses the modal amplitudes as surrogates for the parameters of interest, provides sensor configurations yielding theoretical lower-bound variances whose maxima are up to 30 % larger than those obtained by use of the max-min approach.

  13. GOCE gradiometer: estimation of biases and scale factors of all six individual accelerometers by precise orbit determination

    NARCIS (Netherlands)

    Visser, P.N.A.M.

    2008-01-01

    A method has been implemented and tested for estimating bias and scale factor parameters for all six individual accelerometers that will fly on board GOCE and together form the so-called gradiometer. The method is based on inclusion of the individual accelerometer observations in precise orbit determination.

  14. On closure parameter estimation in chaotic systems

    Directory of Open Access Journals (Sweden)

    J. Hakkarainen

    2012-02-01

    Full Text Available Many dynamical models, such as numerical weather prediction and climate models, contain so-called closure parameters. These parameters usually appear in physical parameterizations of sub-grid scale processes, and they act as "tuning handles" of the models. Currently, the values of these parameters are specified mostly manually, but the increasing complexity of the models calls for more algorithmic ways to perform the tuning. Traditionally, parameters of dynamical systems are estimated by directly comparing the model simulations to observed data using, for instance, a least squares approach. However, if the models are chaotic, the classical approach can be ineffective, since small errors in the initial conditions can lead to large, unpredictable deviations from the observations. In this paper, we study numerical methods available for estimating closure parameters in chaotic models. We discuss three techniques: off-line likelihood calculations using filtering methods, the state augmentation method, and the approach that utilizes summary statistics from long model simulations. The properties of the methods are studied using a modified version of the Lorenz 95 system, where the effects of fast variables are described using a simple parameterization.

  15. Estimating Production Potentials: Expert Bias in Applied Decision Making

    International Nuclear Information System (INIS)

    A study was conducted to evaluate how workers predict manufacturing production potentials given positively and negatively framed information. Findings indicate the existence of a bias toward positive information and suggest that this bias may be reduced with experience but is nevertheless maintained. Experts err in the same way non-experts do in differentially processing negative and positive information. Additionally, both experts and non-experts tend to overestimate production potentials in a positive direction. The authors propose that these biases should be addressed with further research, including cross-domain analyses, and considered in training, workplace design, and human performance modeling.

  16. CosmoSIS: modular cosmological parameter estimation

    CERN Document Server

    Zuntz, Joe; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James

    2014-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis

  17. Measurement Data Modeling and Parameter Estimation

    CERN Document Server

    Wang, Zhengming; Yao, Jing; Gu, Defeng

    2011-01-01

    Measurement Data Modeling and Parameter Estimation integrates mathematical theory with engineering practice in the field of measurement data processing. Presenting the first-hand insights and experiences of the authors and their research group, it summarizes cutting-edge research to facilitate the application of mathematical theory in measurement and control engineering, particularly for those interested in aeronautics, astronautics, instrumentation, and economics. Requiring a basic knowledge of linear algebra, computing, and probability and statistics, the book illustrates key lessons with ta

  18. Clustering of dark matter tracers: renormalizing the bias parameters

    OpenAIRE

    McDonald, Patrick

    2006-01-01

    A commonly used perturbative method for computing large-scale clustering of tracers of mass density, like galaxies, is to model the tracer density field as a Taylor series in the local smoothed mass density fluctuations, possibly adding a stochastic component. I suggest a set of parameter redefinitions, eliminating problematic perturbative correction terms, that should represent a modest improvement, at least, to this method. As presented here, my method can be used to compute the power spect...

  19. Estimating Earthen Dam-Breach Parameters

    Directory of Open Access Journals (Sweden)

    Mahdi Moharrampour

    2013-12-01

    Full Text Available Dam failure releases a high volume of water, causing huge waves downstream. A dam failure may cause financial loss, while life losses depend on the flooded zone, the population residing downstream in the danger zone, and the warning time. It is therefore essential to predict dam failure and its resulting dangers in order to reduce life and financial losses. Flood simulation models (such as DAMBRK and FLDWAV) simulate the flood caused by dam failure, the resulting outflow, and its route downstream of the river. Such models mostly focus on outflow hydrographs. In these models, the physical development of the failure is not simulated; instead, they describe the dam failure process parametrically, in terms of the form of the failure, its final size, and the time required to develop it (the breach formation time). Therefore, dam failure parameters should be estimated for simulating dam failure and applied as input to the simulation model. For this reason, the failure of the Bidakan earth dam, located in Chahar Mahal va Bakhtiari Province, has been simulated by estimating the failure parameters, analyzing the uncertainty of experimental methods for estimating those parameters, and applying the SMPDBK model.
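    As a hedged illustration of the parametric breach description above, one commonly cited set of regression equations is Froehlich's (1995): average breach width B̄ = 0.1803·K₀·V_w^0.32·h_b^0.19 and failure time t_f = 0.00254·V_w^0.53·h_b^(−0.90), with V_w in m³, h_b and B̄ in m, t_f in hours, and K₀ = 1.4 for overtopping failures. The paper itself uses SMPDBK; this sketch only shows how such regressions are applied.

```python
def froehlich_breach(vol_m3, height_m, overtopping=True):
    """Froehlich (1995) regression estimates of earthen-dam breach parameters.

    vol_m3   : reservoir volume at failure (m^3)
    height_m : breach height (m)
    Returns (average breach width in m, failure time in hours).
    """
    k0 = 1.4 if overtopping else 1.0                       # failure-mode factor
    width = 0.1803 * k0 * vol_m3 ** 0.32 * height_m ** 0.19
    t_fail = 0.00254 * vol_m3 ** 0.53 * height_m ** -0.90
    return width, t_fail
```

    These estimates (breach geometry and formation time) are exactly the inputs that DAMBRK/FLDWAV-style models require to generate the outflow hydrograph.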

  20. Taking Variable Correlation into Consideration during Parameter Estimation

    OpenAIRE

    Santos, T.J.; Pinto, J.C.

    1998-01-01

    Variable correlations are usually neglected during parameter estimation. Very frequently these are gross assumptions and may potentially lead to inadequate interpretation of final estimation results. For this reason, variable correlation and model parameters are sometimes estimated simultaneously in certain parameter estimation procedures. It is shown, however, that usually taking variable correlation into consideration during parameter estimation may be inadequate and unnecessary, unless ind...

  1. Misleading Population Estimates: Biases and Consistency of Visual Surveys and Matrix Modelling in the Endangered Bearded Vulture

    OpenAIRE

    Antoni Margalida; Daniel Oro; Ainara Cortés-Avizanda; Rafael Heredia; Donázar, José A.

    2011-01-01

    Conservation strategies for long-lived vertebrates require accurate estimates of parameters relative to the populations' size, numbers of non-breeding individuals (the "cryptic" fraction of the population) and the age structure. Frequently, visual survey techniques are used to make these estimates but the accuracy of these approaches is questionable, mainly because of the existence of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (...

  2. A two parameter ratio-product-ratio estimator using auxiliary information

    CERN Document Server

    Chami, Peter S; Thomas, Doneal

    2012-01-01

    We propose a two parameter ratio-product-ratio estimator for a finite population mean in a simple random sample without replacement following the methodology in Ray and Sahai (1980), Sahai and Ray (1980), Sahai and Sahai (1985) and Singh and Ruiz Espejo (2003). The bias and mean square error of our proposed estimator are obtained to the first degree of approximation. We derive conditions for the parameters under which the proposed estimator has smaller mean square error than the sample mean, ratio and product estimators. We carry out an application showing that the proposed estimator outperforms the traditional estimators using groundwater data taken from a geological site in the state of Florida.
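    A quick simulation contrasts the sample mean, ratio, and product estimators under simple random sampling without replacement (the population below is hypothetical, not the Florida groundwater data of the paper). When the study variable is strongly positively correlated with the auxiliary variable, the classical ratio estimator has the smallest mean square error.

```python
import numpy as np

rng = np.random.default_rng(7)
N, n, reps = 10000, 100, 2000
X = rng.uniform(10, 20, N)                     # auxiliary variable, mean known
Y = 2 * X + rng.standard_normal(N)             # study variable, corr(Y, X) ~ 0.99
Xbar, Ybar = X.mean(), Y.mean()

err_mean, err_ratio, err_prod = [], [], []
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)      # SRS without replacement
    xb, yb = X[idx].mean(), Y[idx].mean()
    err_mean.append(yb - Ybar)                 # plain sample mean
    err_ratio.append(yb * Xbar / xb - Ybar)    # ratio estimator
    err_prod.append(yb * xb / Xbar - Ybar)     # product estimator

mse = lambda e: float(np.mean(np.square(e)))
```

    The product estimator wins instead when the correlation is strongly negative; the two-parameter ratio-product-ratio family of the paper interpolates between these regimes.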

  3. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Full Text Available Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It was expected that the suitability between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels. Abstract (translated from Indonesian): PARAMETER ESTIMATION IN A BREAD BAKING MODEL. Bread product quality is highly dependent on the baking process used. A model developed with qualitative and quantitative methods was calibrated with experiments at a temperature of 200°C and in combination with microwave at 100 W. The model parameters were estimated in a stepwise procedure: first, the parameters of the heat and mass transfer model, then the parameters of the transformation model, and

  4. Squared visibility estimator. Calibrating biases to reach very high dynamic range

    CERN Document Server

    Perrin, G

    2005-01-01

    In the near infrared, where detectors are limited by read-out noise, most interferometers have been operated in wide band in order to benefit from larger photon rates. We analyze in this paper the biases caused by instrumental and turbulent effects on $V^2$ estimators in both the narrow- and wide-band cases. Visibilities are estimated from samples of the interferogram using two different estimators: $V^{2}_1$, the classical sum of the squared moduli of the Fourier components, and a new estimator $V^{2}_2$, for which the complex Fourier components are summed prior to taking the square. We present an approach for systematically evaluating the performance and limits of each estimator, and for optimizing observing parameters for each. We include the effects of spectral bandwidth, chromatic dispersion, scan length, and differential piston. We also establish the expression of the signal-to-noise ratio of the two estimators with respect to detector and photon noise. The $V^{2}_1$ estimator is insensitive to dispersion and ...
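The square-then-average versus average-then-square contrast between the two estimators can be illustrated with a toy fringe simulation (a sketch only, not the paper's pipeline: the scan count, sample count, fringe bin, and true visibility below are arbitrary, and the toy averages across scans rather than across the wavenumber band analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_samples, f0 = 200, 256, 16   # hypothetical scan count, samples per scan, fringe bin
x = np.arange(n_samples)
V_true = 0.5

# complex Fourier component at the fringe frequency, one per scan
amps = []
for _ in range(n_scans):
    phase = rng.uniform(0, 2 * np.pi)   # random differential piston per scan
    scan = 1 + V_true * np.cos(2 * np.pi * f0 * x / n_samples + phase)
    amps.append(np.fft.rfft(scan)[f0] / (n_samples / 2))
amps = np.array(amps)

V2_1 = np.mean(np.abs(amps) ** 2)       # square each modulus, then average
V2_2 = np.abs(np.mean(amps)) ** 2       # average complex amplitudes, then square
```

With piston-randomized scans, V2_1 recovers V_true² = 0.25 while V2_2 collapses toward zero, which is why summing complex components prior to squaring only pays off when the summed components share a coherent phase.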

  5. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable ...

  6. Parameter estimation using B-Trees

    DEFF Research Database (Denmark)

    Schmidt, Albrecht; Bøhlen, Michael H.

    2004-01-01

    This paper presents a method for accelerating algorithms for computing common statistical operations, like parameter estimation or sampling, on B-Tree indexed data; the work was carried out in the context of visualisation of large scientific data sets. The underlying idea is the following: the shape of balanced data structures like B-Trees encodes and reflects data semantics according to the balance criterion. For example, clusters in the index attribute are somewhat likely to be present not only on the data or leaf level of the tree but should propagate up into the interior levels. The paper also hints at opportunities and limitations of this approach for visualisation of large data sets. The advantages of the method are manifold. Not only does it enable advanced algorithms through a performance boost for basic operations like density estimation, but it also builds on functionality that is ...

  7. Parameter Estimation in Active Plate Structures

    DEFF Research Database (Denmark)

    Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.;

    2006-01-01

    In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface-bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem through gradient-based optimization techniques, while the second is based on a metamodel of the inverse problem, using artificial neural networks. A numerical higher-order finite element laminated plate model is used in both methods and results are compared and discussed through a simulated and an ...

  8. Effect of Bias Correction of Satellite-Rainfall Estimates on Runoff Simulations at the Source of the Upper Blue Nile

    Directory of Open Access Journals (Sweden)

    Emad Habib

    2014-07-01

    Full Text Available Results of numerous evaluation studies indicated that satellite-rainfall products are contaminated with significant systematic and random errors. Therefore, such products may require refinement and correction before being used for hydrologic applications. In the present study, we explore a rainfall-runoff modeling application using the Climate Prediction Center-MORPHing (CMORPH) satellite rainfall product. The study area is the Gilgel Abbay catchment situated at the source basin of the Upper Blue Nile basin in Ethiopia, Eastern Africa. Rain gauge networks in this area are typically sparse. We examine different bias correction schemes applied locally to the CMORPH product. These schemes vary in the degree to which spatial and temporal variability in the CMORPH bias fields are accounted for. Three schemes are tested: space- and time-invariant; time-variant but spatially invariant; and space- and time-variant. Bias-corrected CMORPH products were used to calibrate and drive the Hydrologiska Byråns Vattenbalansavdelning (HBV) rainfall-runoff model. Applying the space- and time-fixed bias correction scheme resulted in slight improvement of the CMORPH-driven runoff simulations, but in some instances caused deterioration. Accounting for temporal variation in the bias reduced the rainfall bias by up to 50%. Additional improvements were observed when both the spatial and temporal variability in the bias was accounted for. The rainfall bias was found to have a pronounced effect on model calibration. The calibrated model parameters changed significantly when using rainfall input from gauges alone, uncorrected, and bias-corrected CMORPH estimates. Changes of up to 81% were obtained for model parameters controlling the stream flow volume.
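As a minimal sketch of the simpler correction schemes (the daily rain values and window length below are made up, and a single location stands in for the gridded CMORPH fields), a multiplicative bias factor can be computed once for the whole record or recomputed over a moving time window:

```python
import numpy as np

def bias_factor(gauge, sat, eps=1e-6):
    """Multiplicative bias factor: accumulated gauge rain over accumulated satellite rain."""
    return gauge.sum() / max(sat.sum(), eps)

# hypothetical daily rainfall (mm/day) at one location
gauge = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 8.0])
sat = np.array([1.0, 9.0, 20.0, 6.0, 1.0, 15.0])

# scheme 1: space- and time-invariant -- one factor for the whole period
corrected_fixed = sat * bias_factor(gauge, sat)

# scheme 2: time-variant -- factor recomputed over a trailing 3-day window
w = 3
corrected_tv = np.array([
    sat[t] * bias_factor(gauge[max(0, t - w + 1):t + 1],
                         sat[max(0, t - w + 1):t + 1])
    for t in range(len(sat))
])
```

By construction the time-invariant scheme preserves the gauge total over the calibration period; the fully space- and time-variant scheme would apply the windowed factor separately per grid cell.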

  9. Error and bias in size estimates of whale sharks: implications for understanding demography

    OpenAIRE

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G.

    2016-01-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected ...

  10. Performance of the maximum likelihood estimators for the parameters of multivariate generalized Gaussian distributions

    OpenAIRE

    Bombrun, Lionel; Pascal, Frédéric; Tourneret, Jean-Yves; Berthoumieu, Yannick

    2012-01-01

    This paper studies the performance of the maximum likelihood estimators (MLE) for the parameters of multivariate generalized Gaussian distributions. When the shape parameter belongs to ]0,1[, we have proved that the scatter matrix MLE exists and is unique up to a scalar factor. After providing the main elements of this proof, an estimation algorithm based on a Newton-Raphson recursion is investigated. Some experiments illustrate the convergence speed of this algorithm. The bias and consistency...

  11. Fast cosmological parameter estimation using neural networks

    CERN Document Server

    Auld, T; Hobson, M P; Gull, S F

    2006-01-01

    We present a method for accelerating the calculation of CMB power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called CosmoNet, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released Pico algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of CosmoNet by computing CMB power spectra over a box in the parameter space of flat ΛCDM models containing the 3σ WMAP1 confidence region. We also use CosmoNet to compute the WMAP3 likelihood for flat ΛCDM models and show that the marginalised posteriors on the derived parameters are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3% of cosmic variance, and that CosmoNet is ∼ 7 × 10⁴ times faster than CAMB (for flat ...
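The emulation idea, training a network on exact computations sampled from the parameter box and then querying it in place of the expensive code, can be sketched generically (this toy uses a scikit-learn perceptron on a cheap 1-D stand-in function; it is not CosmoNet's architecture, and the function, sample size and layer widths are arbitrary assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_spectrum(theta):
    """Stand-in for a slow Boltzmann-code call (purely illustrative)."""
    return np.sin(3 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
theta_train = rng.uniform(0, 2, size=(500, 1))     # samples from the "parameter box"
y_train = expensive_spectrum(theta_train).ravel()

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
emulator.fit(theta_train, y_train)                 # train once on exact evaluations

theta_test = np.linspace(0, 2, 100).reshape(-1, 1)
r2 = emulator.score(theta_test, expensive_spectrum(theta_test).ravel())
```

Once trained, each emulator query is a handful of matrix multiplications, which is where the large speed-up over the exact code comes from.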

  12. Health Indicators: Eliminating bias from convenience sampling estimators

    OpenAIRE

    HEDT, Bethany L.; Pagano, Marcello

    2011-01-01

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information is data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing abou...

  13. Multifrequency SAR data for estimating hydrological parameters

    International Nuclear Information System (INIS)

    The sensitivity of backscattering coefficients to some geophysical parameters which play a significant role in hydrological processes (vegetation biomass, soil moisture and surface roughness) is discussed. Experimental results show that P-band makes the monitoring of forest biomass possible, L-band appears to be good for wide-leaf crops, and C- and X-bands for small-leaf crops. Moreover, L-band backscattering makes the highest contribution in estimating soil moisture and surface roughness. The sensitivity to the spatial distribution of soil moisture and surface roughness is rather low, since both quantities affect the radar signal. However, observing data collected at different dates and averaged over several fields, the correlation to soil moisture is significant, since the effects of spatial roughness variations are smoothed. The retrieval of both soil moisture and surface roughness has been performed by means of a semiempirical model.

  14. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes

    DEFF Research Database (Denmark)

    Wood, Lesley; Egger, Matthias; Gluud, Lise Lotte;

    2008-01-01

    To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome....

  15. An Inhomogeneous Bayesian Texture Model for Spatially Varying Parameter Estimation

    OpenAIRE

    Dharmagunawardhana, Chathurika; Mahmoodi, Sasan; Bennett, Michael; Niranjan, Mahesan

    2014-01-01

    In statistical model-based texture feature extraction, features based on spatially varying parameters achieve higher discriminative performance compared to spatially constant parameters. In this paper we formulate a novel Bayesian framework which achieves texture characterization by spatially varying parameters based on Gaussian Markov random fields. The parameter estimation is carried out by the Metropolis-Hastings algorithm. The distributions of estimated spatially varying paramete...

  16. Systematic biases on galaxy haloes parameters from Yukawa-like gravitational potentials

    CERN Document Server

    Cardone, V F

    2011-01-01

    A viable alternative to dark energy as a solution of the cosmic speed-up problem is represented by Extended Theories of Gravity. Should this be indeed the case, there will be an impact not only on cosmological scales, but at any scale, from the Solar System to extragalactic ones. In particular, the gravitational potential can be different from the Newtonian one commonly adopted when computing the circular velocity fitted to spiral galaxy rotation curves. Phenomenologically modelling the modified point mass potential as the sum of a Newtonian and a Yukawa-like correction, we simulate observed rotation curves for a spiral galaxy described as the sum of an exponential disc and an NFW dark matter halo. We then fit these curves assuming parameterized halo models (either with an inner cusp or a core) and using the Newtonian potential to estimate the theoretical rotation curve. Such a study allows us to investigate the bias on the disc and halo model parameters caused by the systematic error induced by fo...

  17. Learning effect on survey data: high leverage and estimation bias

    OpenAIRE

    Mazbahul Golam Ahamad

    2010-01-01

    In survey data collection, especially household or personal interviews, respondents frequently give extreme answers because of their presumption that the questionnaire will bring financial or food aid. This reduces data consistency and introduces leverage that affects the estimation procedure and the estimated predictors. This study analyses the impact of the learning effect on the research hypothesis using mailed interviews of different clusters of researchers, such as experts, mid-level research assistants and en...

  18. Bias in estimating food consumption of fish from stomach-content analysis

    DEFF Research Database (Denmark)

    Rindorf, Anna; Lewy, Peter

    2004-01-01

    This study presents an analysis of the bias introduced by using simplified methods to calculate the food intake of fish from stomach contents. Three sources of bias were considered: (1) the effect of estimating consumption based on a limited number of stomach samples, (2) the effect of using average contents derived from pooled stomach samples rather than individual stomachs, and (3) the effect of ignoring biological factors that affect the evacuation of prey. Estimating consumption from only two stomach samples yielded results close to the actual intake rate in a simulation study. In contrast to this, a serious positive bias was introduced by estimating food intake from the contents of pooled stomach samples. An expression is given that can be used to correct analytically for this bias. A new method, which takes into account the distribution and evacuation of individual prey types as well as the...

  19. Ocean wave parameters estimation using backpropagation neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; SubbaRao; Raju, D.H.

    In the present study, various ocean wave parameters are estimated from theoretical Pierson-Moskowitz spectra as well as measured ocean wave spectra using back propagation neural networks (BNN). Ocean wave parameters estimation by BNN shows...

  20. Maximum likelihood estimation of the negative binomial dispersion parameter for highly overdispersed data, with applications to infectious diseases.

    Directory of Open Access Journals (Sweden)

    James O Lloyd-Smith

    Full Text Available BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
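A minimal version of such a maximum-likelihood fit can be sketched with SciPy (a sketch under simplified assumptions: no under-counting or outbreak selection, and the sample size, true mean, and true k below are arbitrary choices). Optimizing on the log scale keeps both parameters positive:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
mean_true, k_true = 2.0, 0.3      # k < 1: highly overdispersed
# scipy's nbinom(n, p) has mean n(1-p)/p; choosing n=k, p=k/(k+mean)
# gives mean `mean` and variance mean + mean**2/k
data = stats.nbinom.rvs(k_true, k_true / (k_true + mean_true),
                        size=2000, random_state=rng)

def nll(params):
    m, k = np.exp(params)         # params are (log mean, log k)
    return -stats.nbinom.logpmf(data, k, k / (k + m)).sum()

res = optimize.minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mean_hat, k_hat = np.exp(res.x)
```

Repeating this fit over many simulated datasets, and over censored or under-counted versions of them, is how the bias and coverage properties described above are measured.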

  1. Estimating demographic parameters using a combination of known-fate and open N-mixture models

    Science.gov (United States)

    Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.

    2015-01-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.

  2. System and method for motor parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters of the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  3. The influence of geomagnetic storms on the estimation of GPS instrumental biases

    Directory of Open Access Journals (Sweden)

    W. Zhang

    2009-04-01

    Full Text Available An algorithm has been developed to derive the ionospheric total electron content (TEC) and to estimate the resulting instrumental biases in Global Positioning System (GPS) data from measurements made with a single receiver. The algorithm assumes that the TEC is identical at any point within a mesh and that the GPS instrumental biases do not vary within a day. We present some results obtained using the algorithm and a study of the characteristics of the instrumental biases during active geomagnetic periods. The deviations of the TEC during an ionospheric storm (induced by a geomagnetic storm), compared to the quiet ionosphere, typically result in severe fluctuations in the derived GPS instrumental biases. Based on the analysis of three ionospheric storm events, we conclude that different kinds of ionospheric storms have differing influences on the measured biases of GPS satellites and receivers. We find that the duration of severe ionospheric storms is the critical factor that adversely impacts the estimation of GPS instrumental biases. Large deviations in the TEC can produce inaccuracies in the estimation of GPS instrumental biases for the satellites that pass over the receiver during that period. We also present a semi-quantitative analysis of the duration of the influence of the storm.

  4. Health indicators: eliminating bias from convenience sampling estimators.

    Science.gov (United States)

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information is data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone. PMID:21290401

  5. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
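The core of the procedure, keeping only the products that fall within ±50% of the base estimate and taking their standard deviation as the bias error, can be sketched for a single area (the rain rates below are invented; the study applies this to zonal means, with ocean and land treated separately):

```python
import numpy as np

def bias_error(base, products, tol=0.5):
    """Std dev of the products that agree with the base estimate to within +/- tol."""
    products = np.asarray(products)
    kept = products[np.abs(products - base) <= tol * base]
    return kept.std(ddof=1) if kept.size > 1 else float("nan")

# hypothetical zonal-mean rain rates (mm/day) from different algorithms;
# the value 1.2 falls outside the +/-50% band around the base estimate of 3.0
s = bias_error(3.0, [3.1, 2.8, 3.6, 1.2, 3.3])
```

The relative bias error is then s/m, with m the mean precipitation, which is the quantity mapped and area-averaged in the study.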

  6. Parameter estimation with Sandage-Loeb test

    International Nuclear Information System (INIS)

    The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range of 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid the potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ωm and H0 in all the three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of SL test could improve the constraints on Ωm and H0 by more than 60% for all the three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w0 and wa by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of SL test would help improve the measurement precision of Ωm, H0, and wa by more than 70%, 20%, and 60%, respectively, for the w0waCDM model

  7. Parameter estimation with Sandage-Loeb test

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin, E-mail: gengjiajia163@163.com, E-mail: jfzhang@mail.neu.edu.cn, E-mail: zhangxin@mail.neu.edu.cn [Department of Physics, College of Sciences, Northeastern University, Shenyang 110004 (China)

    2014-12-01

    The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range of 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid the potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ωm and H0 in all the three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of SL test could improve the constraints on Ωm and H0 by more than 60% for all the three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w0 and wa by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of SL test would help improve the measurement precision of Ωm, H0, and wa by more than 70%, 20%, and 60%, respectively, for the w0waCDM model.

  8. Estimation of high altitude Martian dust parameters

    Science.gov (United States)

    Pabari, Jayesh; Bhalodi, Pinali

    2016-07-01

    Dust devils are known to occur near the Martian surface, mostly during the middle of the Southern hemisphere summer, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high-altitude Martian dust could be secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high-altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism existing between the ring or tori and the sources, the dust rings or tori can be sustained over a period of time. Recently, very high altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust has reached such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high-altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate responsible primarily for the formation of the dust ring or tori, the lifetime of dust particles around Mars, the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars and resolve the mystery. Further work is underway.

  9. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    International Nuclear Information System (INIS)

    In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method which uses the rotation of an inertial measurement unit, independent of the rigid body motion. By actively changing the orientation of the inertial measurement unit (IMU), the proposed method generates sufficient relations between the gyro bias and the tilt angle (roll and pitch) error via rigid body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. The rotating inertial measurement unit method makes the gravity vector measured from the IMU change continuously in the body-fixed frame. By theoretically analyzing the mathematical model, the convergence of the attitude and gyro bias to the true values is proven. The proposed method provides a good attitude estimate using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions and the results demonstrate an improvement in the accuracy of the attitude estimation. (paper)

  10. Attitude and gyro bias estimation by the rotation of an inertial measurement unit

    Science.gov (United States)

    Wu, Zheming; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang

    2015-12-01

    In navigation applications, the presence of an unknown bias in rate-gyro measurements is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method that uses the rotation of an inertial measurement unit (IMU), independent of the rigid-body motion. By actively changing the orientation of the IMU, the proposed method generates sufficient relations between the gyro bias and the tilt-angle (roll and pitch) error via rigid-body dynamics, so the gyro bias, including the component that causes the heading error, can be estimated and compensated. The rotating-IMU method makes the gravity vector measured by the IMU change continuously in the body-fixed frame. By theoretical analysis of the mathematical model, the convergence of the attitude and gyro-bias estimates to the true values is proven. The proposed method provides a good attitude estimate using only IMU measurements, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions, and the results demonstrate an improvement in the accuracy of the attitude estimation.
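A simplified single-axis illustration of why gravity referencing during rotation exposes a gyro bias: the integrated gyro angle drifts away from the accelerometer-derived tilt at the bias rate, so a least-squares slope fit recovers the bias. All signal parameters below are illustrative assumptions, and this is a sketch of the principle, not the paper's full attitude observer:

```python
import math, random

random.seed(0)
dt, n = 0.01, 5000          # 50 s of data at 100 Hz
true_bias = 0.02            # rad/s, unknown to the estimator

# True tilt angle: the IMU is actively rotated (slow sinusoidal sweep)
theta = [0.5 * math.sin(0.2 * t * dt) for t in range(n)]
rate  = [(theta[i + 1] - theta[i]) / dt for i in range(n - 1)]

# Gyro measures rate + bias + noise; accelerometer tilt is drift-free but noisy
gyro  = [w + true_bias + random.gauss(0, 0.005) for w in rate]
accel = [th + random.gauss(0, 0.01) for th in theta]

# The integrated gyro angle drifts linearly at the bias rate relative to the
# gravity-referenced (accelerometer) angle; fit that slope by least squares.
g_angle, drift = theta[0], []
for i in range(n - 1):
    g_angle += gyro[i] * dt
    drift.append(g_angle - accel[i + 1])

t = [(i + 1) * dt for i in range(n - 1)]
tbar = sum(t) / len(t)
dbar = sum(drift) / len(drift)
est_bias = (sum((ti - tbar) * (di - dbar) for ti, di in zip(t, drift))
            / sum((ti - tbar) ** 2 for ti in t))
```

The fitted slope recovers the 0.02 rad/s bias despite sensor noise; in the full 3-D problem, rotating the IMU plays the role of making all three bias components observable this way.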

  11. Effect of percent non-detects on estimation bias in censored distributions

    Science.gov (United States)

    Zhang, Z.; Lennox, W. C.; Panu, U. S.

    2004-09-01

    The unique problems surrounding non-detects have concerned researchers and statisticians alike when computing summary statistics for censored data. To incorporate non-detects in the estimation process, simple substitution by the MDL (method detection limit) and the maximum likelihood estimation method are routinely implemented as standard methods by US-EPA laboratories. In situations where regulatory agencies set numerical standards at or near the MDL, it is prudent and important to closely investigate both the variability in test measurements and the estimation bias, because an inference based on biased estimates could entail significant liabilities. Variability is not only inevitable but also an inherent and integral part of any chemical analysis or test. Where regulatory agencies fail to account for this inherent variability of test measurements, regulated facilities may be forced to seek remedial action merely as a consequence of an inadequate statistical procedure. This paper uses a mathematical approach to derive the bias functions, and the resulting bias curves are developed to investigate censored samples from a variety of probability distributions, namely the normal, log-normal, gamma, and Gumbel distributions. Finally, the bias functions and bias curves are compared to results obtained using Monte Carlo simulations.
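The bias of a substitution rule (here, replacing non-detects by MDL/2) can be probed by Monte Carlo in the spirit of the paper's comparison; the lognormal parameters, detection limit, and sample sizes below are illustrative assumptions:

```python
import math, random, statistics

random.seed(1)

def censored_mean_bias(mu, sigma, mdl, n=200, trials=2000):
    """Monte Carlo bias of the sample mean when lognormal values below the
    MDL are replaced by MDL/2 (a common substitution rule)."""
    true_mean = math.exp(mu + sigma**2 / 2)
    est = []
    for _ in range(trials):
        x = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
        xs = [v if v >= mdl else mdl / 2 for v in x]
        est.append(statistics.fmean(xs))
    return statistics.fmean(est) - true_mean

bias = censored_mean_bias(mu=0.0, sigma=1.0, mdl=2.0)
```

For this lognormal and a detection limit at roughly three-quarters censoring, the MDL/2 substitution overestimates the mean; with other distributions or limits the sign can flip, which is exactly why the paper derives bias functions per distribution.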

  12. Understanding the physics driving the values of Lyman-alpha forest bias parameters

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anze

    2016-01-01

    With the advancement of Lyman-alpha forest power spectrum measurements to larger scales and greater precision, it is crucial that we also improve our understanding of the bias between the measured flux and the underlying matter power spectrum, especially for future percent-level cosmology constraints. In order to develop an intuition for the physics driving the values of the density and velocity bias parameters of the Lyman-alpha forest, we have run a series of hydrodynamic SPH simulations to test existing approximations found in the literature. Through a series of progressively more realistic scenarios, we first compute the flux from the Fluctuating Gunn-Peterson Approximation using the density fields alone, then introduce redshift-space distortions as well as thermal broadening, and finally analyze the fully hydrodynamic part of the simulations. We find surprisingly good agreement between the analytical approximations developed by Seljak (2012) and the numerical methods in the limit of linear redshift-space distortions and no thermal broadening. Specifically, we find that the prediction of the analytical velocity-bias expression is exact in the limit of no thermal broadening, and we speculate that a measurement of this bias, along with a small-scale measurement of the flux PDF, could yield a probe of the thermal state of the IGM. A deeper understanding of large-scale Lyman-alpha biasing will also help us use the large-scale clustering of the forest as a cosmological probe beyond baryon acoustic oscillations.
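The density-bias definition used in this line of work can be sketched numerically: under the Fluctuating Gunn-Peterson Approximation, F = exp(-A(1+δ)^β) and the large-scale flux bias is b_δ = d ln⟨F⟩/dδ_L. The lognormal small-scale field and the values of A and β below are illustrative assumptions, not simulation-calibrated:

```python
import math, random

A, BETA = 0.3, 1.6          # illustrative FGPA parameters, not calibrated
N = 200_000
SIGMA = 0.8                 # lognormal small-scale density scatter

def mean_flux(delta_L):
    # Same small-scale field for every evaluation (common random numbers),
    # modulated multiplicatively by the long-wavelength mode delta_L.
    random.seed(2)
    total = 0.0
    for _ in range(N):
        dens = math.exp(random.gauss(-SIGMA**2 / 2, SIGMA)) * (1.0 + delta_L)
        total += math.exp(-A * dens**BETA)   # FGPA flux
    return total / N

# Central finite difference for b_delta = d ln<F> / d delta_L
eps = 0.01
b_delta = (math.log(mean_flux(eps)) - math.log(mean_flux(-eps))) / (2 * eps)
```

The bias comes out negative, as it must: an overdense large-scale region absorbs more and transmits less flux. Reusing the same random field for both evaluations keeps the finite difference free of Monte Carlo noise.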

  13. Impact of Road Vehicle Accelerations on SAR-GMTI Motion Parameter Estimation

    OpenAIRE

    Baumgartner, Stefan; Gabele, Martina; Krieger, Gerhard; Bethke, Karl-Heinz; Zuev, Sergey

    2006-01-01

    In recent years many powerful techniques and algorithms have been developed to detect moving targets and estimate their motion parameters from single- or multi-channel SAR data. In case of single- and two-channel systems, most of the developed algorithms rely on analysis of the Doppler history. Nowadays it is known, that even small unconsidered across-track accelerations can bias the along-track velocity estimation. Since we want to monitor real and more complex traffic scenarios with a f...

  14. Effects of network-average magnitude bias on yield estimates for underground nuclear explosions

    International Nuclear Information System (INIS)

    The ISC body-wave magnitude, msub(b ISC), of presumed underground nuclear explosions in Kazakhstan, USSR, is shown to be systematically biased, by comparison to that recorded at the array station EKA (msub(b EKA)). This is found to be due in part to anelastic attenuation effects on msub(b EKA), but several characteristics of the ISC data demonstrate that the bias is also due to network-averaging effects. For the smaller explosions, those stations with a positive msub(b) bias dominate the data set, but the remainder of the network fails to detect the event. Conversely, for larger explosions, additional stations, with negative msub(b) bias will detect. Use of published station corrections for EKA allows estimation of an msub(b EKA): Y relationship and hence, a magnitude: yield relationship which takes account of network-average bias. (author)

  15. Wage Premia in Employment Clusters: Does Worker Sorting Bias Estimates?

    OpenAIRE

    Shihe Fu; Stephen L. Ross

    2007-01-01

    This paper tests whether the correlation between wages and the spatial concentration of employment can be explained by unobserved worker productivity differences. Residential location is used as a proxy for a worker's unobserved productivity, and average workplace commute time is used to test whether location based productivity differences are compensated away by longer commutes. Analyses using confidential data from the 2000 Decennial Census Long Form find that the agglomeration estimates ar...

  16. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    Science.gov (United States)

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. ?? CSIRO 2008.
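The heterogeneous-detection bias that motivates this record can be illustrated with a minimal two-sample (Lincoln-Petersen) Monte Carlo; the population size, capture-probability range, and estimator choice are illustrative assumptions, not values from the study:

```python
import random, statistics

random.seed(3)

def lincoln_petersen(N=300, trials=1000, p_lo=0.05, p_hi=0.4):
    """Two-sample closed-population estimate N_hat = n1*n2/m when each
    animal has its own capture probability (heterogeneity)."""
    est = []
    for _ in range(trials):
        p = [random.uniform(p_lo, p_hi) for _ in range(N)]   # per-animal p
        s1 = [random.random() < pi for pi in p]              # occasion 1
        s2 = [random.random() < pi for pi in p]              # occasion 2
        n1, n2 = sum(s1), sum(s2)
        m = sum(a and b for a, b in zip(s1, s2))             # recaptures
        if m > 0:
            est.append(n1 * n2 / m)
    return statistics.fmean(est)

mean_est = lincoln_petersen()
```

Because catchable animals are over-represented among recaptures, the estimator systematically undershoots the true population of 300, mirroring the record's finding that closed CMR point estimates fell below the minimum known density.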

  17. Minimizing Intra-Campaign Biases in Airborne Laser Altimetry By Thorough Calibration of Lidar System Parameters

    Science.gov (United States)

    Sonntag, J. G.; Chibisov, A.; Krabill, K. A.; Linkswiler, M. A.; Swenson, C.; Yungel, J.

    2015-12-01

    Present-day airborne lidar surveys of polar ice, NASA's Operation IceBridge foremost among them, cover large geographical areas. They are often compared with previous surveys over the same flight lines to yield mass balance estimates. Systematic biases in the lidar system, especially those which vary from campaign to campaign, can introduce significant error into these mass balance estimates and must be minimized before the data is released by the instrument team to the larger scientific community. NASA's Airborne Topographic Mapper (ATM) team designed a thorough and novel approach in order to minimize these biases, and here we describe two major aspects of this approach. First, we conduct regular ground vehicle-based surveys of lidar calibration targets, and overfly these targets on a near-daily basis during field campaigns. We discuss our technique for conducting these surveys, in particular the measures we take specifically to minimize systematic height biases in the surveys, since these can in turn bias entire campaigns of lidar data and the mass balance estimates based on them. Second, we calibrate our GPS antennas specifically for each instrument installation in a remote-sensing aircraft. We do this because we recognize that the metallic fuselage of the aircraft can alter the electromagnetic properties of the GPS antenna mounted to it, potentially displacing its phase center by several centimeters and biasing lidar results accordingly. We describe our technique for measuring the phase centers of a GPS antenna installed atop an aircraft, and show results which demonstrate that different installations can indeed alter the phase centers significantly.

  18. Estimates of Armington parameters for a landlocked economy

    OpenAIRE

    Nganou, Jean-Pascal

    2005-01-01

    One of the most debated issues in the Computable General Equilibrium (CGE) literature concerns the validity of the key behavioral parameters used in the calibration process. CGE modelers seldom estimate those parameters, preferring to borrow from the handful of estimates available in the literature. The lack of data is often cited as a reason for this modus operandi. Estimating key parameters is crucial, since CGE results are quite sensitive to parameter specification....

  19. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    Science.gov (United States)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
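A minimal sketch of nonparametric quantile mapping, the core of the QM-GW framework described above (the Gaussian-weighting interpolation step is omitted); the toy calibration sample and the 25% multiplicative bias are assumptions for illustration:

```python
from bisect import bisect_right

def quantile_map(value, sat_sorted, gauge_sorted):
    """Map a satellite value to the gauge distribution by matching
    empirical CDF ranks (nonparametric quantile mapping)."""
    # empirical nonexceedance probability of `value` in the satellite CDF
    q = bisect_right(sat_sorted, value) / len(sat_sorted)
    # read off the gauge quantile at the same probability
    idx = min(int(q * len(gauge_sorted)), len(gauge_sorted) - 1)
    return gauge_sorted[idx]

# Toy calibration sample: satellite systematically overestimates by 25%
gauge = sorted(i * 1.0 for i in range(1, 101))        # 1..100 mm
sat   = sorted(v * 1.25 for v in gauge)               # biased satellite
adjusted = quantile_map(62.5, sat, gauge)             # 62.5 = 1.25 * 50
```

A satellite reading of 62.5 mm sits at the median of the satellite CDF, so the mapping returns approximately the gauge median of 50 mm, removing the systematic 25% bias. In the paper this calibration is done per 1°x1° box on the 2009-2013 record.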

  20. Estimating Population Parameters using the Structured Serial Coalescent with Bayesian MCMC Inference when some Demes are Hidden

    Directory of Open Access Journals (Sweden)

    Allen Rodrigo

    2006-01-01

    Full Text Available Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e., ghost demes. We find that even in the presence of a ghost deme, accurate inference is possible if the parameters are estimated under the true model. With an incorrect model, however, estimates are biased and can be positively misleading. We extend these results to the case where there are sequences from the ghost deme at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rates for this case are also shown to be well estimated when migration values are low.

  1. THEORETICAL ANALYSIS AND PRACTICE ON THE SELECTION OF KEY PARAMETERS FOR HORIZONTAL BIAS BURNER

    Institute of Scientific and Technical Information of China (English)

    刘泰生; 许晋源

    2003-01-01

    The air flow ratio and the pulverized-coal mass flux ratio between the rich and lean sides are the key parameters of a horizontal bias burner. In order to realize high combustion efficiency, excellent ignition stability, low NOx emission and safe operation, six principal demands on the selection of the key parameters are presented. An analytical model is established on the basis of these demands, the fundamentals of combustion and operational results. An improved horizontal bias burner is also presented and applied. Experimental and numerical simulation results show that the improved horizontal bias burner can realize proper key parameters, lower NOx emission, high combustion efficiency and excellent part-load operation without oil support. It can also reduce the recirculation and low-velocity zones existing downstream of the vanes, and avoid burnout of the lean primary-air nozzle and blockage of the lean primary-air channel. The operation and test results verify the reasonableness and feasibility of the analytical model.

  2. Closed-form kinetic parameter estimation solution to the truncated data problem

    International Nuclear Information System (INIS)

    In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.
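The idea of absorbing the truncation-induced background into an extra fitted parameter can be sketched with a toy linear model. The curve shape, noise level, and rate constants below are hypothetical, and the closed-form solution shown is ordinary 2x2 least squares, not the paper's full compartment-model solution:

```python
import math, random

random.seed(4)

# Toy time-activity curve: known input-shape term times a kinetic parameter,
# plus an additive background offset standing in for the truncation bias.
k_true, bias_true = 0.1, 0.5
t = [float(i) for i in range(1, 31)]
f = [math.exp(-0.05 * ti) for ti in t]          # known input-shape term
y = [k_true * fi + bias_true + random.gauss(0, 0.01) for fi in f]

# Closed-form least squares for y = k*f + b (2x2 normal equations)
n = len(t)
Sf, Sy = sum(f), sum(y)
Sff = sum(fi * fi for fi in f)
Sfy = sum(fi * yi for fi, yi in zip(f, y))
det = n * Sff - Sf * Sf
k_hat = (n * Sfy - Sf * Sy) / det
b_hat = (Sy * Sff - Sf * Sfy) / det

# Ignoring the background (forcing b = 0) badly biases the kinetic parameter:
k_nobias = Sfy / Sff
```

Fitting the bias jointly recovers both k and b almost exactly, while the constrained fit folds the unmodeled background into k, which is the failure mode the paper's estimator is designed to avoid.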

  3. Parameter estimation for estimation of bottom hole pressure during drilling.

    OpenAIRE

    Vea, Hans Kristian

    2009-01-01

    In this thesis we examine four bottom hole pressure estimators based on adaptive estimation of the friction pressure for the drill string and the annulus. Knowledge about the bottom hole pressure is crucial to achieve security and commercial objectives. Bottom hole pressure measurements transmitted by mud pulse telemetry have limited bandwidth and it is common to use additional models to estimate the bottom hole pressure when measurements are unavailable. The motivation for an adaptive approa...

  4. On drift parameter estimation in models with fractional Brownian motion

    CERN Document Server

    Kozachenko, Yuriy; Mishura, Yuliya

    2011-01-01

    We consider a stochastic differential equation involving standard and fractional Brownian motion with an unknown drift parameter to be estimated. We investigate the standard maximum likelihood estimate of the drift parameter, two non-standard estimates and three estimates for sequential estimation. Strong consistency and some other properties of these estimates are proved. The linear model and the Ornstein-Uhlenbeck model are studied in detail. As an auxiliary result, the asymptotic behavior of the fractional derivative of the fractional Brownian motion is established.
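For the Ornstein-Uhlenbeck special case with standard Brownian noise only (no fractional component), the discretized drift MLE has a simple closed form; step size, horizon, and the true drift below are illustrative:

```python
import math, random

random.seed(5)

# Euler simulation of an Ornstein-Uhlenbeck process dX = -theta*X dt + dW
theta_true, dt, n = 2.0, 0.001, 200_000
x = [0.0] * (n + 1)
for i in range(n):
    x[i + 1] = x[i] - theta_true * x[i] * dt + random.gauss(0, math.sqrt(dt))

# Discretized maximum likelihood / least-squares drift estimate:
#   theta_hat = - sum X_i (X_{i+1} - X_i) / (dt * sum X_i^2)
num = sum(x[i] * (x[i + 1] - x[i]) for i in range(n))
den = dt * sum(x[i] * x[i] for i in range(n))
theta_hat = -num / den
```

Over a horizon of T = 200 time units the estimate lands close to the true drift of 2.0; the asymptotic variance scales as 2θ/T, which is why long observation windows matter for drift estimation.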

  5. Parameter estimation in dynamic Casimir force measurements with known periodicity

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Song, E-mail: cuis@imre.a-star.edu.sg [Institute of Materials Research and Engineering, 3 Research Link, Singapore 117602 (Singapore); Soh, Yeng Chai, E-mail: eycsoh@ntu.edu.sg [School of Electrical and Electronics Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798 (Singapore)

    2011-12-05

    It is important to have an accurate estimate of the unknown parameters such as the separation distance between interacting materials in Casimir force measurements. Current methods tend to produce large estimation errors. In this Letter, we present a novel method based on an adaptive control approach to estimate the unknown parameters using large amplitude dynamic Casimir measurements at separation distances of below 1 μm where both electrostatic force and Casimir force are significant. The estimate is proved to be accurate and the effectiveness of our method is demonstrated via a numerical example. -- Highlights: ► Unknown parameters like separation gap are nonlinearly parameterized in Casimir force measurements ► A two-stage parameter estimation method is proposed to estimate unknown parameters accurately. ► Our method is proved to be effective by theoretical derivation and simulations. ► Our method can be applied to a broad range of nonlinear parameter estimation problems.

  6. Adaptive on-line estimation and control of overlay tool bias

    Science.gov (United States)

    Martinez, Victor M.; Finn, Karen; Edgar, Thomas F.

    2003-06-01

    Modern lithographic manufacturing processes rely on various types of exposure tools, used in a mix-and-match fashion. The motivation to use older tools alongside state-of-the-art tools is lower cost, and one of the tradeoffs is a degradation in overlay performance. While average prices of semiconductor products continue to fall, the cost of manufacturing equipment rises with every product generation. Lithography processing, including the cost of ownership for tools, accounts for roughly 30% of the wafer processing costs, thus the importance of mix-and-match strategies. Exponentially Weighted Moving Average (EWMA) run-by-run controllers are widely used in the semiconductor manufacturing industry. This type of controller has been implemented successfully in volume manufacturing, improving Cpk values dramatically in processes like photolithography and chemical mechanical planarization. This simple but powerful control scheme is well suited for adding corrections to compensate for Overlay Tool Bias (OTB). We have developed an adaptive estimation technique to compensate for overlay variability due to differences in the processing tools. The OTB can be dynamically calculated for each tool, based on the most recent measurements available, and used to correct the control variables. One approach to tracking the effect of different tools is adaptive modeling and control. The basic premise of an adaptive system is to change or adapt the controller as the operating conditions of the system change. Using closed-loop data, the adaptive control algorithm estimates the controller parameters using a recursive estimation technique. Once an updated model of the system is available, model-based control becomes feasible. In the simplest scenario, the control law can be reformulated to include the current state of the tool (or its estimate) to compensate dynamically for OTB. We have performed simulation studies to predict the impact of deploying this strategy in production. The results
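The per-tool EWMA run-by-run scheme described above can be sketched as follows; the tool biases, EWMA weight, and metrology noise level are hypothetical, and the update shown is the plain EWMA form rather than the paper's full adaptive estimator:

```python
import random

random.seed(6)

LAMBDA, TARGET = 0.3, 0.0
tool_bias = {"A": 1.2, "B": -0.8}        # hypothetical per-tool overlay bias
estimate = {"A": 0.0, "B": 0.0}          # EWMA state, one per tool

history = []
for run in range(200):
    tool = random.choice(["A", "B"])
    correction = TARGET - estimate[tool]           # feed-forward correction
    # post-correction overlay error as measured after exposure
    measured = tool_bias[tool] + correction + random.gauss(0, 0.1)
    # EWMA update of the per-tool bias estimate from the residual error
    estimate[tool] += LAMBDA * (measured - TARGET)
    history.append(abs(measured))
```

Each tool's estimate converges to its own bias within a few visits, after which the residual overlay error is dominated by metrology noise; keeping separate EWMA states per tool is what makes the scheme a mix-and-match OTB compensator.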

  7. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens;

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...

  8. An observation on the bias in clinic-based estimates of malnutrition rates

    OpenAIRE

    Margaret E. Grosh; Fox, Kristin; Jackson, Maria

    1991-01-01

    Clinic-based data on malnutrition are the most readily available for following malnutrition levels and trends in most countries, but there is a bias inherent in clinic-based estimates of malnutrition rates. The authors compare annual clinic-based malnutrition data and those from four household surveys in Jamaica. The clinic data give lower estimates of malnutrition than the survey data in all four cases - significantly so in three. The size of the bias was variable over time, so the clinic da...

  9. On the shear estimation bias induced by the spatial variation of colour across galaxy profiles

    CERN Document Server

    Semboloni, Elisabetta; Huang, Zhuoyi; Cardone, Vincenzo; Cropper, Mark; Joachimi, Benjamin; Kitching, Thomas; Kuijken, Konrad; Lombardi, Marco; Maoli, Roberto; Mellier, Yannick; Miller, Lance; Rhodes, Jason; Scaramella, Roberto; Schrabback, Tim; Velander, Malin

    2012-01-01

    The spatial variation of the colour of a galaxy may introduce a bias in the measurement of its shape if the PSF profile depends on wavelength. We study how this bias depends on the properties of the PSF and the galaxies themselves. The bias depends on the scales used to estimate the shape, which may be used to optimise methods to reduce the bias. Here we develop a general approach to quantify the bias. Although applicable to any weak lensing survey, we focus on the implications for the ESA Euclid mission. Based on our study of synthetic galaxies we find that the bias is a few times 10^-3 for a typical galaxy observed by Euclid. Consequently, it cannot be neglected and needs to be accounted for. We demonstrate how one can do so using spatially resolved observations of galaxies in two filters. We show that HST observations in the F606W and F814W filters allow us to model and reduce the bias by an order of magnitude, sufficient to meet Euclid's scientific requirements. The precision of the correction is ultimate...

  10. METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    刘宇红; 曾衍钧; 许景锋; 张梅

    2004-01-01

    Transdermal drug delivery system (TDDS) is a new method for drug delivery. Analysis of a large number of in vitro experiments can lead to a suitable mathematical model describing the process of drug penetration through the skin, together with the important parameters related to the characteristics of the drugs. After studying the experimental data, a suitable nonlinear regression model was selected. Using this model, the most important parameter, the penetration coefficient, was computed for 20 drugs. The results support the theory that the skin can be regarded as a single membrane.

  11. Estimation of motility parameters from trajectory data

    DEFF Research Database (Denmark)

    Vestergaard, Christian L.; Pedersen, Jonas Nyvold; Mortensen, Kim I.;

    2015-01-01

    Given a theoretical model for a self-propelled particle or micro-organism, how does one optimally determine the parameters of the model from experimental data in the form of a time-lapse recorded trajectory? For very long trajectories, one has very good statistics, and optimality may matter little...... which similar results may be obtained also for self-propelled particles....

  12. M-Testing Using Finite and Infinite Dimensional Parameter Estimators

    OpenAIRE

    White, Halbert; Hong, Yongmiao

    1999-01-01

    The m-testing approach provides a general and convenient framework in which to view and construct specification tests for econometric models. Previous m-testing frameworks only consider test statistics that involve finite dimensional parameter estimators and infinite dimensional parameter estimators affecting the limit distribution of the m-test statistics. In this paper we propose a new m-testing framework using both finite and infinite dimensional parameter estimators, where the latter may ...

  13. On-line parameter estimation of a magnetic bearing

    OpenAIRE

    Delpoux, Romain; Floquet, Thierry

    2011-01-01

    This article presents a parameter estimation algorithm for a magnetic bearing. Such processes have strongly nonlinear dynamics and are inherently unstable. A simplified model of the magnetic bearing is developed in order to be able to estimate certain parameters. These parameters are difficult to measure and may vary slightly over time. The expression of the estimates is written as a function of integrals of the inputs and outputs of the system. The experiments show a fast and robust on...
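The record's integral-based estimator is specific to the magnetic-bearing model; as a generic stand-in for on-line estimation of slowly varying parameters, a recursive least-squares sketch for a two-parameter linear-in-parameters model is shown below (plant values and noise level are hypothetical):

```python
import random

random.seed(7)

# Recursive least squares for y = a*u + b, updated one sample at a time
a_true, b_true = 2.5, -1.0
theta = [0.0, 0.0]                       # parameter estimates [a, b]
P = [[1000.0, 0.0], [0.0, 1000.0]]       # covariance, large initial uncertainty

for k in range(500):
    u = random.uniform(-1, 1)
    y = a_true * u + b_true + random.gauss(0, 0.05)
    phi = [u, 1.0]                       # regressor vector
    # gain K = P phi / (1 + phi^T P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    # covariance update P <- P - K (P phi)^T  (valid since P stays symmetric)
    P = [[P[0][0] - K[0] * Pphi[0], P[0][1] - K[0] * Pphi[1]],
         [P[1][0] - K[1] * Pphi[0], P[1][1] - K[1] * Pphi[1]]]
```

A forgetting factor slightly below one would let the same recursion track the slow parameter drift mentioned in the abstract.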

  14. Towards physics responsible for large-scale Lyman-$\\alpha$ forest bias parameters

    CERN Document Server

    Cieplak, Agnieszka M

    2015-01-01

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density ($b_\\delta$) and velocity gradient ($b_\\eta$) biases of the Lyman-$\\alpha$ forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamicaly simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his $b_\\eta$ formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of $b_\\eta$ ...

  15. Systematic Angle Random Walk Estimation of the Constant Rate Biased Ring Laser Gyro

    Directory of Open Access Journals (Sweden)

    Guohu Feng

    2013-02-01

    Full Text Available An accurate account of the angle random walk (ARW) coefficients of gyros in the constant-rate-biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with characterizing the ARW of the constant-rate-biased RLG in the INS. To avoid the need for high-cost precision calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant-rate-biased RLG INS. In such a system, turntable dynamics and other external noises inevitably contaminate the measured RLG data, raising the question of how to isolate such disturbances. A practical observation model of the gyros in the constant-rate-biased RLG INS is discussed, and an experimental method based on the fast orthogonal search (FOS), applied to this observation model to separate the ARW error from the measured RLG data, is proposed. The validity of the FOS-based method was checked by estimating the ARW coefficients of a mechanically dithered RLG under stationary and turntable-rotation conditions. Using the FOS-based method, the average ARW coefficient of the constant-rate-biased RLG in the postulated system is estimated. The experimental results show that the FOS-based method achieves a high denoising ability and estimates the ARW coefficients of the constant-rate-biased RLG accurately. The FOS-based method does not need a high-cost precision calibration table or a complex measuring set-up, and the statistical results of the tests provide a reference for engineering applications of the constant-rate-biased RLG INS.
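The record separates ARW from disturbances with fast orthogonal search; as a self-contained illustration of what an ARW coefficient is, the sketch below instead uses the standard Allan-deviation read-off on simulated gyro data (the noise density, sample rate, and constant rate bias are assumptions):

```python
import math, random

random.seed(8)

# Simulated gyro output: white rate noise (ARW) plus a constant rate bias.
fs = 100.0                       # sample rate [Hz]
arw_true = 0.01                  # rad/s/sqrt(Hz) white-noise density
rate = [0.05 + random.gauss(0, arw_true * math.sqrt(fs))
        for _ in range(100_000)]

def allan_deviation(data, fs, m):
    """Non-overlapping Allan deviation at cluster size m samples."""
    tau = m / fs
    k = len(data) // m
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(k)]
    avar = (sum((means[i + 1] - means[i]) ** 2 for i in range(k - 1))
            / (2 * (k - 1)))
    return tau, math.sqrt(avar)

# For white rate noise, sigma(tau) = N / sqrt(tau): read N off at tau = 1 s.
tau, adev = allan_deviation(rate, fs, m=100)
arw_est = adev * math.sqrt(tau)
```

The constant rate bias drops out of the Allan variance entirely, and the estimate recovers the simulated noise density of 0.01; the harder problem the paper addresses is that turntable dynamics, unlike a constant bias, do not drop out and must be separated explicitly.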

  16. Cosmological parameter extraction and biases from type Ia supernova magnitude evolution

    Science.gov (United States)

    Linden, S.; Virey, J.-M.; Tilquin, A.

    2009-11-01

    We study different one-parameter models of type Ia supernova magnitude evolution on cosmic time scales. Constraints on cosmological and supernova evolution parameters are obtained by combined fits on the current data coming from supernovae, the cosmic microwave background, and baryonic acoustic oscillations. We find that the best-fit values imply supernova magnitude evolution such that high-redshift supernovae appear some percent brighter than would be expected in a standard cosmos with a dark energy component. However, the errors on the evolution parameters are of the same order, and the data are consistent with nonevolving magnitudes at the 1σ level, except for special cases. We simulate a future data scenario in which SN magnitude evolution is allowed for, and neglect the possibility of such an evolution in the fit. We find the fiducial models for which the wrong assumption of nonevolving SN magnitudes is not detectable, and for which biases on the fitted cosmological parameters are introduced at the same time. Of the cosmological parameters, the overall mass density ΩM has the strongest chance of being biased by the wrong model assumption. Whereas early-epoch models with a magnitude offset Δm ∼ z² turn out not to be too dangerous when neglected in the fitting procedure, late-epoch models with Δm ∼ √z have a high chance of undetectably biasing the fit results. Centre de Physique Théorique is UMR 6207 - “Unité Mixte de Recherche” of CNRS and of the Universities “de Provence”, “de la Mediterranée”, and “du Sud Toulon-Var” - Laboratory affiliated with FRUMAM (FR2291).

  17. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    M Sinha; P K Kalra; K Kumar

    2000-04-01

    Proposed here is a new neuron model, a basis for the Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also the total computing time for training. The suggested model has the properties of both the basic neuron model and the higher-order neuron model (multiplicative aggregation function), and can adapt to a standard neuron, a higher-order neuron, or a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than the Kalman Filter (KF) and the Feedforward Multilayer Neural Network (FMNN, also simply referred to as an Artificial Neural Network, ANN) with lambda-gamma learning. Typical simulation runs also bring out the superiority of the proposed scheme over the Kalman filter in terms of computation time and the amount of data needed for a desired degree of estimation accuracy in the specific problem of orbit determination.

  18. Estimation of temperature impact on gamma-induced degradation parameters of N-channel MOS transistor

    International Nuclear Information System (INIS)

    The physical parameters of MOS transistors can be impaired by ionizing radiation, which leads to circuit degradation and failure. These effects require analyzing the basic mechanisms behind the buildup of induced defects in radiation environments. Reliable estimation also needs to consider external factors, particularly temperature fluctuations. The I–V characteristic of the device was obtained using a temperature-dependent adapted form of the charge-sheet model under a heating cycle during irradiation, for several ionizing dose levels at different gate biases. In this work, an analytical calculation for estimating the impact of irradiation temperature on gamma-induced degradation parameters of N-channel MOS transistors at different gate biases was investigated. Experimental measurements were performed in order to verify and parameterize the analytical model calculations. The results indicated that including the irradiation temperature in the calculations caused a significant variation in radiation-induced MOS transistor parameters such as the threshold voltage shift and the off-state leakage current. According to the results, these variations were about 10.1% and 23.4% for voltage shifts and leakage currents, respectively, during the investigated heating cycle for a total dose of 20 krad at 9 V gate bias. - Highlights: • Reliable radiation effect estimations require considering external factors. • Irradiation temperature impact on degradation parameters of N-MOS was investigated. • An analytical model was utilized based on time-dependent buildup of defect charges. • Oxide and interface trapped charges varied with irradiation temperature

  19. Muscle parameters estimation based on biplanar radiography.

    Science.gov (United States)

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposes an original two-step method to obtain a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shapes of the reference model to the X-ray model. Twelve musculoskeletal models were reconstructed and compared to their references. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimation of the muscle lines of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography. PMID:27082150

  20. Estimation of bias errors in angle-of-arrival measurements using platform motion

    Science.gov (United States)

    Grindlay, A.

    1981-08-01

    An algorithm has been developed to estimate the bias errors in angle-of-arrival measurements made by electromagnetic detection devices on-board a pitching and rolling platform. The algorithm assumes that continuous exact measurements of the platform's roll and pitch conditions are available. When the roll and pitch conditions are used to transform deck-plane angular measurements of a nearly fixed target's position to a stabilized coordinate system, the resulting stabilized coordinates (azimuth and elevation) should not vary with changes in the roll and pitch conditions. If changes do occur they are a result of bias errors in the measurement system and the algorithm which has been developed uses these changes to estimate the sense and magnitude of angular bias errors.
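The stabilization logic the algorithm exploits can be sketched in a few lines. The axis conventions and rotation order below are assumptions for illustration (the abstract does not specify them); the point is that a deck-plane bias makes the stabilized coordinates of a fixed target vary with roll and pitch, while unbiased measurements stabilize to a constant:

```python
import numpy as np

def rot_roll(roll):
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_pitch(pitch):
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def ang_to_vec(az, el):
    # Line-of-sight unit vector from azimuth/elevation
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def vec_to_ang(v):
    return np.arctan2(v[1], v[0]), np.arcsin(np.clip(v[2], -1.0, 1.0))

def deck_to_stabilized(az, el, roll, pitch):
    """Rotate a deck-plane line of sight into the level (stabilized) frame."""
    return vec_to_ang(rot_pitch(pitch) @ rot_roll(roll) @ ang_to_vec(az, el))

def stabilized_to_deck(az, el, roll, pitch):
    """Inverse transform: what the deck-plane sensor would measure."""
    return vec_to_ang(rot_roll(roll).T @ rot_pitch(pitch).T @ ang_to_vec(az, el))

# Fixed target at stabilized azimuth 0.8 rad, elevation 0.2 rad,
# observed over several platform attitudes (roll, pitch in rad).
attitudes = [(0.0, 0.0), (0.1, -0.05), (-0.15, 0.08), (0.2, 0.12)]
bias = 0.01  # 10 mrad azimuth bias in the deck plane

unbiased, biased = [], []
for roll, pitch in attitudes:
    az_d, el_d = stabilized_to_deck(0.8, 0.2, roll, pitch)
    unbiased.append(deck_to_stabilized(az_d, el_d, roll, pitch)[0])
    biased.append(deck_to_stabilized(az_d + bias, el_d, roll, pitch)[0])

print(np.ptp(unbiased))  # ~0: stabilized azimuth is attitude-independent
print(np.ptp(biased))    # > 0: the bias leaks platform motion into the output
```

The spread of the stabilized coordinates over the attitude history is the observable signal from which the sense and magnitude of the bias can then be estimated.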

  1. Estimating non-response bias in a survey on alcohol consumption: comparison of response waves

    NARCIS (Netherlands)

    V.M. Lahaut; H.A.M. Jansen (Harrie); H. van de Mheen (Dike); H.F.L. Garretsen (Henk); J.E. Verdurmen; A. van Dijk (Bram)

    2003-01-01

    textabstractAIMS: According to 'the continuum of resistance model' late respondents can be used as a proxy for non-respondents in estimating non-response bias. In the present study, the validity of this model was explored and tested in three surveys on alcohol consumption. METHODS:

  2. A robust approach for space based sensor bias estimation in the presence of data association uncertainty

    Science.gov (United States)

    Belfadel, Djedjiga; Osborne, Richard; Bar-Shalom, Yaakov

    2015-06-01

    In this paper, an approach to bias estimation in the presence of measurement association uncertainty using common targets of opportunity is developed. Data association is carried out before the estimation of sensor angle measurement biases. Consequently, the quality of data association is critical to the overall tracking performance. Data association becomes especially challenging if the sensors are passive. Mathematically, the problem can be formulated as a multidimensional optimization problem, where the objective is to maximize the generalized likelihood that the associated measurements correspond to common targets, based on target locations and sensor bias estimates. Applying gating techniques significantly reduces the size of this problem. The association likelihoods are evaluated using an exhaustive search, after which an acceptance test is applied to each solution in order to obtain the optimal (correct) solution. We demonstrate the merits of this approach by applying it to a simulated tracking system, which consists of two satellites tracking a ballistic target. We assume the sensors are synchronized and their locations are known, and we estimate their orientation biases together with the unknown target locations.

  3. The effect of beam intensity on the estimation bias of beam position

    International Nuclear Information System (INIS)

    For the signals of the beam position monitor (BPM), the signal-to-noise ratio (SNR) is directly related to the beam intensity. Low beam intensity results in poor SNR. The random noise has a modulation effect on both the amplitude and phase of the BPM signals. Therefore, the beam position measurement has a certain random error. In the currently used BPM, time-averaging and waveform clipping are used to improve the measurement. This nonlinear signal processing results in a biased estimate of the beam position. A statistical analysis was made to examine the effect of the SNR, which is determined by the beam intensity, on the estimation bias. The results of the analysis suggest that the estimation bias depends not only on the beam position but also on the beam intensity. Specifically, the dependence becomes stronger as the beam intensity decreases. This property sets a lower limit on the beam intensity range that the BPMs can handle. When the beam intensity is below that limit, the estimation bias starts to vary dramatically, resulting in BPM failure. According to the analysis, the lowest usable beam intensity is that at which the SNR of the generated BPM signal is about 15 dB. The limit for the NSEP BPM, for instance, is about 6E11. The analysis may provide BPM designers with some idea about the potential of the current BPMs.
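The dependence of the estimation bias on SNR can be illustrated with a toy Monte Carlo of a difference-over-sum position estimate. This is an assumed simplified model of BPM processing, not the actual NSEP signal chain:

```python
import numpy as np

def diff_over_sum_bias(position, snr_db, n_trials=500_000, seed=1):
    """Monte Carlo bias of the difference-over-sum position estimate
    x_hat = (A - B)/(A + B) under additive Gaussian noise.  Toy model:
    two pickup signals whose imbalance encodes the beam position."""
    rng = np.random.default_rng(seed)
    a0, b0 = 1.0 + position, 1.0 - position          # noise-free amplitudes
    sigma = np.hypot(a0, b0) / 10 ** (snr_db / 20)   # noise level set by SNR
    a = a0 + sigma * rng.standard_normal(n_trials)
    b = b0 + sigma * rng.standard_normal(n_trials)
    return np.mean((a - b) / (a + b)) - position     # estimator bias

# The bias of the nonlinear ratio estimate grows as the SNR
# (i.e. the beam intensity) drops, and it depends on the true position.
print(diff_over_sum_bias(0.3, 40))  # high SNR: tiny bias
print(diff_over_sum_bias(0.3, 20))  # low SNR: much larger bias
```

The ratio estimate is nonlinear in the noisy signals, so even zero-mean noise produces a systematic offset, in line with the analysis described in the abstract.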

  4. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e. not all parameters can be uniquely identified). Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. In particular, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is presented. Conclusions The proposed incremental estimation method is able to tackle the issue of incomplete parameter identifiability and to significantly reduce the computational effort in estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
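The core idea, fitting parameters to the concentration time-slopes instead of repeatedly integrating the ODE inside the optimizer, can be sketched on a toy one-species model (an assumed example, not one of the paper's GMA systems):

```python
import numpy as np

# Toy model dC/dt = -k*C with true k = 0.8; "measured" concentrations.
k_true, c0 = 0.8, 2.0
t = np.linspace(0.0, 2.0, 21)
conc = c0 * np.exp(-k_true * t)

# Step 1: estimate time-slopes directly from the concentration data
# (central differences), avoiding repeated ODE integration.
slopes = np.gradient(conc, t)

# Step 2: fit the rate law to the slopes.  For dC/dt = -k*C this is a
# one-parameter linear least squares problem: k = -sum(s*C) / sum(C^2).
k_est = -np.sum(slopes * conc) / np.sum(conc**2)
print(k_est)  # close to 0.8
```

Because the slope-fitting step is algebraic, the expensive numerical integration is removed from the inner loop of the estimation, which is the source of the speedup the paper reports.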

  5. Cosmological Parameter Extraction and Biases from Type Ia Supernova Magnitude Evolution

    CERN Document Server

    Linden, Sebastian; Tilquin, Andre

    2009-01-01

    We study different one-parametric models of type Ia Supernova magnitude evolution on cosmic time scales. Constraints on cosmological and Supernova evolution parameters are obtained by combined fits on the actual data coming from Supernovae, the cosmic microwave background, and baryonic acoustic oscillations. We find that data prefer a magnitude evolution such that high-redshift Supernovae are brighter than would be expected in a standard cosmos with a dark energy component. Data however are consistent with non-evolving magnitudes at the one-sigma level, except in special cases. We simulate a future data scenario where SN magnitude evolution is allowed for, and neglect the possibility of such an evolution in the fit. We find the fiducial models for which the wrong model assumption of non-evolving SN magnitude is not detectable, and for which at the same time biases on the fitted cosmological parameters are introduced. Of the cosmological parameters, the overall mass density has the strongest chances to be biased du...

  6. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error source is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important for achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station for a one-month period, and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
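A minimal sketch of the separability argument behind such least squares bias estimation, with made-up numbers: because the obliquity (mapping) factor varies with elevation, a constant instrumental bias is distinguishable from the constant term of the TEC polynomial in a least squares fit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ground truth: vertical TEC as a polynomial in a normalized
# coordinate x (e.g. local time / latitude), plus a combined
# satellite-plus-receiver instrumental bias on the slant measurement.
coeffs_true = np.array([10.0, 3.0, -1.5])       # c0 + c1*x + c2*x^2 (TECu)
bias_true = 2.4                                  # combined bias (TECu)

n = 200
x = rng.uniform(-1.0, 1.0, n)                    # pierce-point coordinate
zenith = rng.uniform(0.0, np.radians(60.0), n)   # varying geometry
mapping = 1.0 / np.cos(zenith)                   # simple obliquity factor

vtec = coeffs_true[0] + coeffs_true[1] * x + coeffs_true[2] * x**2
stec = mapping * vtec + bias_true                # observed slant TEC

# Design matrix: polynomial terms scaled by the mapping function, plus a
# constant column for the bias.  The varying mapping function is what
# makes the bias separable from the polynomial's constant term.
A = np.column_stack([mapping, mapping * x, mapping * x**2, np.ones(n)])
sol, *_ = np.linalg.lstsq(A, stec, rcond=None)
print(sol)  # recovers [10.0, 3.0, -1.5, 2.4] on noise-free data
```

The actual GAGAN algorithm uses a full 4th-order polynomial over a month of data, but the rank argument is the same: without the elevation-dependent mapping, the bias column and the polynomial's constant term would be degenerate.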

  7. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... was applied.Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with a deterministic capture zones derived from a reference model. The results showed...

  8. Control and Estimation of Distributed Parameter Systems

    CERN Document Server

    Kappel, F; Kunisch, K

    1998-01-01

    Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.

  9. Towards physics responsible for large-scale Lyman-α forest bias parameters

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2016-03-01

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density (bδ) and velocity gradient (bη) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his bη formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of bη and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. We find that large-scale biases derived from the smoothed total matter field are within 10-20% of those based on hydrodynamical quantities, in line with other measurements in the literature.

  10. FUZZY SUPERNOVA TEMPLATES. II. PARAMETER ESTIMATION

    International Nuclear Information System (INIS)

    Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the redshift residuals of 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.

  11. Parameter Estimation of the T-Book

    International Nuclear Information System (INIS)

    This paper summarizes the statistical assumptions and methods that have been used in the work on the T-book, a reliability data handbook which is used in safety analyses of nuclear power plants in Sweden and in the Swedish design plants in Finland. The author discusses the conceptual framework for the description and handling of uncertainty. He briefly outlines the two-stage 'Bayes empirical Bayes' method. To express the inherent tail-uncertainty in the distribution of failure rate, a class of contaminated distributions with three (hyper) parameters is proposed. Attention is focused on the properties of this T-book approach with regard to how it can be used to describe the parametric uncertainties, how uncertainty distributions can be used for predictive purposes, and how distributions can be updated

  12. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    Science.gov (United States)

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  13. Rapid gravitational wave parameter estimation with a single spin: Systematic uncertainties in parameter estimation with the SpinTaylorF2 approximation

    Science.gov (United States)

    Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.

    2015-08-01

    Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.

  14. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
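As a worked example of the kind of fit described above (with synthetic data, not the article's), the integrated rate law of a second-order reaction is linear in time, so the rate constant drops out of a straight-line fit:

```python
import numpy as np

# Second-order decay dC/dt = -k*C^2 has the integrated form
# 1/C(t) = 1/C0 + k*t, so plotting 1/C against t gives slope k.
k_true, c0 = 0.5, 1.0
t = np.linspace(0.0, 4.0, 9)
conc = 1.0 / (1.0 / c0 + k_true * t)    # synthetic "measured" data

# Linear least squares fit of 1/C versus t.
k_est, inv_c0_est = np.polyfit(t, 1.0 / conc, 1)
print(k_est, 1.0 / inv_c0_est)  # slope gives k ~0.5, intercept gives C0 ~1.0
```

The zeroth- and first-order cases linearize analogously (C vs. t, and ln C vs. t, respectively), which is the classical route students can take before attempting a nonlinear fit of the raw data.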

  15. A neural network applied to estimate Burr XII distribution parameters

    International Nuclear Information System (INIS)

    The Burr XII distribution can closely approximate many other well-known probability density functions such as the normal, gamma, lognormal, exponential distributions as well as Pearson type I, II, V, VII, IX, X, XII families of distributions. Considering a wide range of shape and scale parameters of the Burr XII distribution, it can have an important role in reliability modeling, risk analysis and process capability estimation. However, estimating parameters of the Burr XII distribution can be a complicated task and the use of conventional methods such as maximum likelihood estimation (MLE) and moment method (MM) is not straightforward. Some tables to estimate Burr XII parameters have been provided by Burr (1942) but they are not adequate for many purposes or data sets. Burr tables contain specific values of skewness and kurtosis and their corresponding Burr XII parameters. Using interpolation or extrapolation to estimate other values may provide inappropriate estimations. In this paper, we present a neural network to estimate Burr XII parameters for different values of skewness and kurtosis as inputs. A trained network is presented, and one can use it without previous knowledge about neural networks to estimate Burr XII distribution parameters. Accurate estimation of the Burr parameters is an extension of simulation studies.
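The forward map the network inverts can be written in closed form: the r-th raw moment of the unit-scale Burr XII distribution is E[X^r] = k·B(k − r/c, 1 + r/c), finite when ck > r. The sketch below computes skewness and kurtosis from (c, k); the inverse (network) step is not shown:

```python
import math

def burr_moment(c, k, r):
    """r-th raw moment of the unit-scale Burr XII distribution:
    E[X^r] = k * B(k - r/c, 1 + r/c), finite when c*k > r."""
    if c * k <= r:
        raise ValueError("moment does not exist (need c*k > r)")
    return k * math.gamma(k - r / c) * math.gamma(1 + r / c) / math.gamma(k + 1)

def burr_skew_kurt(c, k):
    """Skewness and kurtosis of Burr XII(c, k) from its raw moments.
    These are the two quantities a trained network would map back to (c, k)."""
    m1, m2, m3, m4 = (burr_moment(c, k, r) for r in (1, 2, 3, 4))
    var = m2 - m1 ** 2
    skew = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / var ** 1.5
    kurt = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / var ** 2
    return skew, kurt

print(burr_skew_kurt(3.0, 4.0))
```

Because this forward map is cheap to evaluate, arbitrarily dense (skewness, kurtosis) → (c, k) training pairs can be generated, which is what lets a network avoid the coarse interpolation errors of the original Burr tables.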

  16. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied to estimating a CGE model of Mozambique.

  17. Multiplicative intrinsic component optimization (MICO) for MRI bias field estimation and tissue segmentation.

    Science.gov (United States)

    Li, Chunming; Gore, John C; Davatzikos, Christos

    2014-09-01

    This paper proposes a new energy minimization method called multiplicative intrinsic component optimization (MICO) for joint bias field estimation and segmentation of magnetic resonance (MR) images. The proposed method takes full advantage of the decomposition of MR images into two multiplicative components, namely, the true image that characterizes a physical property of the tissues and the bias field that accounts for the intensity inhomogeneity, and their respective spatial properties. Bias field estimation and tissue segmentation are simultaneously achieved by an energy minimization process aimed to optimize the estimates of the two multiplicative components of an MR image. The bias field is iteratively optimized by using efficient matrix computations, which are verified to be numerically stable by matrix analysis. More importantly, the energy in our formulation is convex in each of its variables, which leads to the robustness of the proposed energy minimization algorithm. The MICO formulation can be naturally extended to 3D/4D tissue segmentation with spatial/spatiotemporal regularization. Quantitative evaluations and comparisons with some popular software packages have demonstrated the superior performance of MICO in terms of robustness and accuracy. PMID:24928302

  18. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
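A first-order toy version of the elimination step (an assumed example, not the paper's canonical form): substituting the state out of the model leaves a difference equation that is linear in the parameters, so least squares applies directly:

```python
import numpy as np

rng = np.random.default_rng(7)

# First-order state-space model x(t+1) = a*x(t) + b*u(t), y(t) = x(t).
# Eliminating the state gives the input-output form
# y(t+1) = a*y(t) + b*u(t): linear in the unknown parameters (a, b).
a_true, b_true = 0.5, 1.2
n = 100
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(n - 1):
    y[t + 1] = a_true * y[t] + b_true * u[t]

# Least squares over the stacked regressors [y(t), u(t)].
phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
print(theta)  # recovers [0.5, 1.2] exactly on noise-free data

# The state sequence is then recovered from the estimated parameters
# and the input-output data; in this toy model the state is y itself.
```

In the general canonical case the regressor vector contains several lagged outputs and inputs, but the structure, and the least squares solution, are the same.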

  19. Estimating parameters of chaotic systems under noise-induced synchronization

    International Nuclear Information System (INIS)

    Kim et al. introduced in 2002 [Kim CM, Rim S, Kye WH. Sequential synchronization of chaotic systems with an application to communication. Phys Rev Lett 2002;88:014103] a hierarchically structured communication scheme based on sequential synchronization, a modification of noise-induced synchronization (NIS). We propose in this paper an approach that can estimate the parameters of chaotic systems under NIS. In this approach, a dimensionally-expanded parameter estimating system is first constructed according to the original chaotic system. By feeding chaotic transmitted signal and external driving signal, the parameter estimating system can be synchronized with the original chaotic system. Consequently, parameters would be estimated. Numerical simulation shows that this approach can estimate all the parameters of chaotic systems under two feeding modes, which implies the potential weakness of the chaotic communication scheme under NIS or sequential synchronization.

  20. Estimating parameters of chaotic systems under noise-induced synchronization

    Energy Technology Data Exchange (ETDEWEB)

    Wu Xiaogang [Institute of PR and AI, Huazhong University of Science and Technology, Wuhan 430074 (China)], E-mail: seanwoo@mail.hust.edu.cn; Wang Zuxi [Institute of PR and AI, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2009-01-30

    Kim et al. introduced in 2002 [Kim CM, Rim S, Kye WH. Sequential synchronization of chaotic systems with an application to communication. Phys Rev Lett 2002;88:014103] a hierarchically structured communication scheme based on sequential synchronization, a modification of noise-induced synchronization (NIS). We propose in this paper an approach that can estimate the parameters of chaotic systems under NIS. In this approach, a dimensionally-expanded parameter estimating system is first constructed according to the original chaotic system. By feeding chaotic transmitted signal and external driving signal, the parameter estimating system can be synchronized with the original chaotic system. Consequently, parameters would be estimated. Numerical simulation shows that this approach can estimate all the parameters of chaotic systems under two feeding modes, which implies the potential weakness of the chaotic communication scheme under NIS or sequential synchronization.

  1. The effect of heart motion on parameter bias in dynamic cardiac SPECT

    International Nuclear Information System (INIS)

    Dynamic cardiac SPECT can be used to estimate kinetic rate parameters which describe the wash-in and wash-out of tracer activity between the blood and the myocardial tissue. These kinetic parameters can in turn be correlated to myocardial perfusion. There are, however, many physical aspects associated with dynamic SPECT which can introduce errors into the estimates. This paper describes a study which investigates the effect of heart motion on kinetic parameter estimates. Dynamic SPECT simulations are performed using a beating version of the MCAT phantom. The results demonstrate that cardiac motion has a significant effect on the blood, tissue, and background content of regions of interest. This in turn affects estimates of wash-in, while it has very little effect on estimates of wash-out. The effect of cardiac motion on parameter estimates appears not to be as great as effects introduced by photon noise and geometric collimator response. It is also shown that cardiac motion results in little extravascular contamination of the left ventricle blood region of interest
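The wash-in/wash-out kinetics referred to here are commonly described by a one-compartment model dCt/dt = K1·Cb − k2·Ct. The following is a hedged toy (assumed blood input function; no SPECT physics, noise, or cardiac motion) showing the rate parameters recovered from clean curves:

```python
import numpy as np

# One-compartment kinetics: dCt/dt = K1*Cb(t) - k2*Ct(t).
# K1 governs wash-in from blood to tissue, k2 the wash-out.
K1_true, k2_true = 0.8, 0.4
dt = 0.001
t = np.arange(0.0, 10.0, dt)
cb = t * np.exp(-t)                 # assumed gamma-variate blood input

ct = np.zeros_like(t)               # Euler integration of the tissue curve
for i in range(len(t) - 1):
    ct[i + 1] = ct[i] + dt * (K1_true * cb[i] - k2_true * ct[i])

# Recover (K1, k2) by linear least squares: regress the tissue slope
# on [Cb, -Ct].  No noise, motion, or collimator effects are modeled.
slopes = np.gradient(ct, dt)
A = np.column_stack([cb, -ct])
(K1_est, k2_est), *_ = np.linalg.lstsq(A, slopes, rcond=None)
print(K1_est, k2_est)  # close to 0.8 and 0.4
```

The study's point is that in real dynamic SPECT the region-of-interest curves are contaminated (here, by cardiac motion mixing blood, tissue, and background), which perturbs K1 far more than k2.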

  2. Accurate Parameter Estimation for Unbalanced Three-Phase System

    OpenAIRE

    Yuan Chen; Hing Cheung So

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
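The αβ-transformation step can be sketched as follows; for a balanced, noise-free set it already yields amplitude and instantaneous phase directly (the paper's NLS estimator is what handles the unbalanced, noisy case):

```python
import numpy as np

def clarke(va, vb, vc):
    """Amplitude-invariant alpha-beta (Clarke) transformation."""
    valpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    vbeta = (np.sqrt(3.0) / 3.0) * (vb - vc)
    return valpha, vbeta

# Balanced three-phase set: V = 10, f = 50 Hz, sampled at 1 kHz.
V, f, fs, phase0 = 10.0, 50.0, 1000.0, 0.3
t = np.arange(0, 0.1, 1.0 / fs)
theta = 2 * np.pi * f * t + phase0
va = V * np.cos(theta)
vb = V * np.cos(theta - 2 * np.pi / 3)
vc = V * np.cos(theta + 2 * np.pi / 3)

# The orthogonal pair is (V*cos(theta), V*sin(theta)), so amplitude and
# instantaneous phase follow from the Cartesian-to-polar conversion.
valpha, vbeta = clarke(va, vb, vc)
amplitude = np.hypot(valpha, vbeta)                # constant, equal to V
inst_phase = np.unwrap(np.arctan2(vbeta, valpha))  # theta recovered
freq_est = np.polyfit(t, inst_phase, 1)[0] / (2 * np.pi)
print(amplitude[0], freq_est)  # ~10.0 and ~50.0
```

Under imbalance the αβ pair is no longer a clean cosine/sine couple, which is why the frequency, phase, and per-phase voltage parameters must then be refined by the nonlinear least squares iteration.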

  3. Efficient Estimation of Nonlinear Finite Population Parameters Using Nonparametrics

    OpenAIRE

    Goga, Camelia; Ruiz-Gazen, Anne

    2012-01-01

    Currently, the high-precision estimation of nonlinear parameters such as Gini indices, low-income proportions or other measures of inequality is particularly crucial. In the present paper, we propose a general class of estimators for such parameters that take into account univariate auxiliary information assumed to be known for every unit in the population. Through a nonparametric model-assisted approach, we construct a unique system of survey weights that can be used to estimate any nonlinea...

  4. Quantum estimation of coupled parameters and the role of entanglement

    OpenAIRE

    Kok, Pieter; Dunningham, Jacob; Ralph, Jason F.

    2015-01-01

    The quantum Cramer-Rao bound places a limit on the mean square error of a parameter estimation procedure, and its numerical value is determined by the quantum Fisher information. For single parameters, this leads to the well-known Heisenberg limit that surpasses the classical shot-noise limit. When estimating multiple parameters, the situation is more complicated and the quantum Cramer-Rao bound is generally not attainable. In such cases, the use of entanglement typically still offers an enh...

  5. Parameter Estimation of Photovoltaic Models via Cuckoo Search

    OpenAIRE

    Jieming Ma; Ting, T. O.; Ka Lok Man; Nan Zhang; Sheng-Uei Guan; Wong, Prudence W. H.

    2013-01-01

    Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is invented based on the inspiration of brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. S...
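
    A minimal Cuckoo Search sketch with Mantegna-style Lévy flights. The objective below is a stand-in sphere function, not the single-diode PV model from the record; the nest count, step scale, and abandonment fraction are assumed values.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def levy_step(shape, beta=1.5):
    # Mantegna's algorithm for heavy-tailed Levy-flight step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=15, iters=300, pa=0.25):
    dim = lo.size
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    best_x, best_f = nests[np.argmin(fit)].copy(), float(fit.min())
    for _ in range(iters):
        # new candidate solutions via Levy flights scaled by distance to best
        step = 0.01 * levy_step((n_nests, dim)) * (nests - best_x)
        new = np.clip(nests + step, lo, hi)
        new_fit = np.array([f(n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon a fraction pa of nests and rebuild them at random positions
        drop = rng.random(n_nests) < pa
        nests[drop] = rng.uniform(lo, hi, (int(drop.sum()), dim))
        fit[drop] = np.array([f(n) for n in nests[drop]])
        if fit.min() < best_f:
            best_f, best_x = float(fit.min()), nests[np.argmin(fit)].copy()
    return best_x, best_f

# Stand-in objective (the real use case would be the single-diode model fit)
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = cuckoo_search(sphere, np.full(2, -5.0), np.full(2, 5.0))
print("best point:", x_best, "objective:", f_best)
```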

  6. MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters

    Directory of Open Access Journals (Sweden)

    Sugaya Yuki

    2012-08-01

    Full Text Available Abstract Background Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are assigned arbitrary values according to the researcher’s intuition or as estimated by the maximum likelihood principle. Several methods exist by which to evaluate the maximum likelihood estimates of penetrance, although not all of these are supported by software packages and some are biased by marker genotype information, even when disease development is due solely to the genotype of a single allele. Findings Programs for exploring the maximum likelihood estimates of penetrance parameters were developed using the R statistical programming language supplemented by external C functions. The software returns a vector of polynomial coefficients of penetrance parameters, representing the likelihood of pedigree data. From the likelihood polynomial supplied by the proposed method, the likelihood value and its gradient can be precisely computed. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be introduced into maximum likelihood estimates, thus enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot allowing visual validation of the model’s convergence. The functions are collectively available as the MLEP R package. Conclusions Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results superior to the analysis using intuitive penetrance values for disease with low allele frequencies. MLEP is

  7. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.

  8. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its suitability for lifetime event data relative to other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability functions R(t)=P(X>t) and P=P(X>Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, a comparison is made of the performance of these estimators with respect to bias, mean square error (MSE), 95% confidence length and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and 'P' for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and 'P'.
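
    For the special case of a known scale parameter λ, both estimators of R(t) have simple forms (Y_i = -log(1 - exp(-λX_i²)) is exponential with rate α), which allows a small Monte Carlo comparison of their bias. All numeric values below are assumed; this is a sketch of the comparison, not the paper's own technique for general parametric functions.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, alpha, n, t0 = 1.0, 1.5, 15, 0.8      # assumed parameter values

# Exponentiated Rayleigh CDF: F(x) = (1 - exp(-lam*x^2))**alpha, so with
# u = -log(1 - exp(-lam*t0^2)) the reliability is R(t0) = 1 - exp(-alpha*u).
u = -np.log1p(-np.exp(-lam * t0 ** 2))
R_true = 1.0 - np.exp(-alpha * u)

def sample(n):
    p = rng.random(n)                       # inverse-CDF sampling
    return np.sqrt(-np.log1p(-p ** (1.0 / alpha)) / lam)

reps, mle, umvue = 4000, [], []
for _ in range(reps):
    x = sample(n)
    # T = sum of Y_i, where Y_i = -log(1 - exp(-lam*x_i^2)) ~ Exp(rate alpha)
    T = np.sum(-np.log1p(-np.exp(-lam * x ** 2)))
    mle.append(1.0 - np.exp(-(n / T) * u))                  # plug-in MLE
    umvue.append(1.0 - (1.0 - u / T) ** (n - 1) if T > u else 1.0)

print("true R(t0):     ", R_true)
print("MLE mean bias:  ", np.mean(mle) - R_true)
print("UMVUE mean bias:", np.mean(umvue) - R_true)
```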

  9. Bias-corrected Pearson estimating functions for Taylor's power law applied to benthic macrofauna data

    OpenAIRE

    Jørgensen, Bent; Clarice G.B. Demétrio; Kristensen, Erik; Banta, Gary T; Petersen, Hans Christian; Delefosse, Matthieu

    2011-01-01

    Abstract Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm...
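
    The classical (bias-prone) baseline the record refers to, log-log regression of empirical variances on empirical means, can be sketched as below with simulated negative-binomial counts; the bias-corrected Pearson estimating function itself is beyond this sketch, and all simulation settings are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Negative-binomial counts follow Taylor's power law: var = m + m^2/k,
# so the log-variance vs log-mean slope lies between 1 and 2.
k, n_per_site = 2, 40                        # assumed dispersion / sample size
means = np.geomspace(2.0, 100.0, 12)         # site-level mean abundances
log_m, log_v = [], []
for m in means:
    counts = rng.negative_binomial(k, k / (k + m), n_per_site)
    log_m.append(np.log(counts.mean()))
    log_v.append(np.log(counts.var(ddof=1)))

# the classical (bias-prone for sparse data) estimate: OLS on the log scale
slope, intercept = np.polyfit(log_m, log_v, 1)
print(f"estimated Taylor exponent b = {slope:.2f}")
```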

  10. Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments

    CERN Document Server

    Fendick, Kerry

    2011-01-01

    This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements, and we introduce methods for estimating each of these types of parameters using multidimensional times series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...
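
    The autoregressive-increment structure can be illustrated with a toy estimation: simulate a path whose increments are AR(1) around a drift and recover the coefficients by ordinary least squares (a generic sketch, not the paper's estimators; all values assumed).

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulate a path whose increments follow an AR(1) around a drift mu:
#   d_k = mu + phi * (d_{k-1} - mu) + sigma * e_k,   X_k = X_{k-1} + d_k
mu, phi, sigma, n = 0.05, 0.6, 0.2, 5000
d = np.empty(n)
d[0] = mu
for k in range(1, n):
    d[k] = mu + phi * (d[k - 1] - mu) + sigma * rng.standard_normal()
X = np.cumsum(d)                            # the observed path

# OLS on d_k = c + phi * d_{k-1} recovers phi; then mu = c / (1 - phi),
# and sigma comes from the residual standard deviation.
dk, dk1 = d[1:], d[:-1]
A = np.column_stack([np.ones(n - 1), dk1])
(c_hat, phi_hat), *_ = np.linalg.lstsq(A, dk, rcond=None)
mu_hat = c_hat / (1 - phi_hat)
sigma_hat = np.std(dk - (c_hat + phi_hat * dk1), ddof=2)
print(phi_hat, mu_hat, sigma_hat)
```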

  11. Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters

    Science.gov (United States)

    Shi, L.

    2015-12-01

    This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurements errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
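
    A toy joint state-parameter EnKF update, a much simplified analogue of the assimilation described above: a scalar "soil moisture" state is observed, and an unobserved "hydraulic" parameter is corrected through the ensemble cross-covariance. The drainage model, ensemble size, and error levels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Augmented ensemble z = [theta, K]: soil moisture theta is observed,
# the parameter K is corrected via its sampled correlation with theta.
N, true_K = 200, 2.0
theta_ens = 0.30 + 0.05 * rng.standard_normal(N)
K_ens = 1.00 + 0.50 * rng.standard_normal(N)   # deliberately biased prior

def forecast(theta, K, dt=0.1):
    # placeholder drainage model (an assumption): d(theta)/dt = -theta / K
    return theta * np.exp(-dt / np.maximum(K, 0.1))

R = 0.005 ** 2                                 # observation error variance
H = np.array([1.0, 0.0])                       # only theta is observed

theta_true = 0.30
for step in range(40):
    theta_true = forecast(theta_true, true_K)
    theta_ens = forecast(theta_ens, K_ens)
    y = theta_true + np.sqrt(R) * rng.standard_normal()
    Z = np.vstack([theta_ens, K_ens])          # 2 x N augmented ensemble
    A = Z - Z.mean(axis=1, keepdims=True)
    P = A @ A.T / (N - 1)
    gain = P @ H / (H @ P @ H + R)             # Kalman gain, shape (2,)
    innov = y + np.sqrt(R) * rng.standard_normal(N) - theta_ens  # perturbed obs
    Z = Z + np.outer(gain, innov)
    theta_ens, K_ens = Z[0], Z[1]

print("posterior K mean:", K_ens.mean(), "(truth:", true_K, ")")
```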

  12. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    Science.gov (United States)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ~2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  13. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    Science.gov (United States)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  14. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
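
    A sketch of recovering patch parameters from a simulated noisy concentration field. Note this uses a simple moment-based batch estimate rather than the paper's least-squares batch processor, and all scene and noise values are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "remote-sensed" field: 2-D Gaussian concentration patch + noise
nx = ny = 64
x, y = np.meshgrid(np.arange(nx, dtype=float), np.arange(ny, dtype=float))
x0, y0, sx, sy, A = 30.0, 22.0, 5.0, 8.0, 10.0
C = A * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2)))
obs = C + 0.2 * rng.standard_normal(C.shape)   # Gaussian sensor noise

# Moment-based batch estimates; thresholding at 3x the noise level suppresses
# the background at the cost of a small, predictable shrinkage of the widths.
w = np.where(obs > 0.6, obs, 0.0)
W = w.sum()
x0_hat = (w * x).sum() / W
y0_hat = (w * y).sum() / W
sx_hat = np.sqrt((w * (x - x0_hat) ** 2).sum() / W)
sy_hat = np.sqrt((w * (y - y0_hat) ** 2).sum() / W)
print(x0_hat, y0_hat, sx_hat, sy_hat)
```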

  15. Simultaneous Estimation of Model State Variables and Observation and Forecast Biases Using a Two-Stage Hybrid Kalman Filter

    Science.gov (United States)

    Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.

    2013-01-01

    In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
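
    The idea of estimating an observation bias alongside the state can be illustrated with a toy augmented-state Kalman filter (a simplification: the paper uses a two-stage scheme with separate discrete and ensemble filters). All model values below are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# Mean-reverting scalar state x with a constant observation bias b:
#   x_k = 0.9 x_{k-1} + w_k,   y_k = x_k + b + v_k
q, r, b_true = 0.01, 0.04, 0.5
F = np.diag([0.9, 1.0])            # transition for the augmented vector [x, b]
H = np.array([1.0, 1.0])           # the observation sees state plus bias
Q = np.diag([q, 0.0])              # the bias itself is constant (no noise)

z = np.zeros(2)                    # initial estimate: bias unknown
P = np.diag([1.0, 1.0])
x_true = 0.0
for k in range(300):
    x_true = 0.9 * x_true + np.sqrt(q) * rng.standard_normal()
    y = x_true + b_true + np.sqrt(r) * rng.standard_normal()
    z = F @ z                      # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H + r              # update
    K = P @ H / S
    z = z + K * (y - H @ z)
    P = P - np.outer(K, H @ P)
print("estimated observation bias:", z[1])
```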

  18. Evaluation of biases for inserted reactivity estimation of JCO criticality accident

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Toshihiro; Nakamura, Takemi; Miyoshi, Yoshinori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-02-01

    Biases in criticality calculation methods used in JCO criticality accident analyses were estimated to make accurate predictions of an inserted reactivity in the accident. MCNP 4B and pointwise cross section libraries based on JENDL-3.1, JENDL-3.2 and ENDF/B-VI were used for the criticality calculations. With these calculation methods, neutron effective multiplication factors were obtained for STACY critical experiments, which used 10 wt.% enriched aqueous uranium solutions, and for critical experiments performed at the Rocky Flats Plant, which used 93.2 wt.% enriched aqueous uranium solutions. As a result, biases in keff's for 18.8 wt.% enriched uranium solution of the JCO accident were estimated to be 0.0%, +1.2%, and 0.1% when using JENDL-3.1, JENDL-3.2 and ENDF/B-VI, respectively. (author)

  19. Approaches to radar reflectivity bias correction to improve rainfall estimation in Korea

    Science.gov (United States)

    You, Cheol-Hwan; Kang, Mi-Young; Lee, Dong-In; Lee, Jung-Tae

    2016-05-01

    Three methods for determining the reflectivity bias of single polarization radar using dual polarization radar reflectivity and disdrometer data (i.e., the equidistance line, overlapping area, and disdrometer methods) are proposed and evaluated for two low-pressure rainfall events that occurred over the Korean Peninsula on 25 August 2014 and 8 September 2012. Single polarization radar reflectivity was underestimated by more than 12 and 7 dB in the two rain events, respectively. All methods improved the accuracy of rainfall estimation, except for one case where drop size distributions were not observed, as the precipitation system did not pass through the disdrometer location. The use of these bias correction methods reduced the RMSE by as much as 50 %. Overall, the most accurate rainfall estimates were obtained using the overlapping area method to correct radar reflectivity.
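
    The disdrometer-style correction reduces to estimating a mean dB offset over matched samples and adding it back before applying a Z-R relation. The sketch below uses synthetic reflectivities and a Marshall-Palmer-type relation Z = 200 R^1.6; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed synthetic data: reflectivity seen by a dual-pol radar and by a
# single-pol radar that underestimates it by a constant offset in dB.
Z_dual = rng.uniform(20.0, 45.0, 500)            # dBZ
true_offset = 7.0
Z_single = Z_dual - true_offset + 1.0 * rng.standard_normal(500)

# Bias estimate: mean dB difference over matched samples, then correction
bias = np.mean(Z_dual - Z_single)
Z_corrected = Z_single + bias

# Rain rate from a Marshall-Palmer style Z-R relation, Z = 200 * R^1.6
def rain_rate(dbz):
    z_lin = 10.0 ** (dbz / 10.0)
    return (z_lin / 200.0) ** (1.0 / 1.6)        # mm/h

print("estimated bias [dB]:", bias)
print("mean rain rate before/after correction:",
      rain_rate(Z_single).mean(), rain_rate(Z_corrected).mean())
```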

  20. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The sea state parameters can be estimated by Bayesian modelling, which uses complex-valued frequency response functions (FRF) to estimate the wave spectrum on the basis of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by the FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence on the...

  1. Parameter estimation during a transient - application to BWR stability

    Energy Technology Data Exchange (ETDEWEB)

    Tambouratzis, T. [Institute of Nuclear Technology - Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Athens 153 10 (Greece)]. E-mail: tatiana@ipta.demokritos.gr; Antonopoulos-Domis, M. [Institute of Nuclear Technology - Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Athens 153 10 (Greece)

    2004-12-01

    The estimation of system parameters is of obvious practical interest. During transient operation, these parameters are expected to change, whereby the system is rendered time-varying and classical signal processing techniques are not applicable. A novel methodology is proposed here, which combines wavelet multi-resolution analysis and selective wavelet coefficient removal with classical signal processing techniques in order to provide short-term estimates of the system parameters of interest. The use of highly overlapping time-windows further monitors the gradual changes in system parameter values. The potential of the proposed methodology is demonstrated with numerical experiments for the problem of stability evaluation of boiling water reactors during a transient.
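
    One simple variant of short-term stability monitoring, fitting an AR(2) model in highly overlapping windows and converting the dominant pole to a decay ratio, can be sketched as follows (illustrative only: the record's wavelet multi-resolution step is omitted, and the signal is a synthetic AR(2) process with assumed pole values).

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated flux-like signal: AR(2) with complex poles r*exp(+-i*w), so the
# true decay ratio is r ** (2*pi/w), about 0.34 for the values assumed here.
r_pole, w_pole = 0.95, 0.3
a1, a2 = 2 * r_pole * np.cos(w_pole), -r_pole ** 2
x = np.zeros(4000)
for k in range(2, x.size):
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + rng.standard_normal()

def decay_ratio(seg):
    # Yule-Walker AR(2) fit, then DR = |pole| ** (2*pi / |angle(pole)|)
    c0 = np.dot(seg, seg) / seg.size
    c1 = np.dot(seg[1:], seg[:-1]) / seg.size
    c2 = np.dot(seg[2:], seg[:-2]) / seg.size
    phi1, phi2 = np.linalg.solve([[c0, c1], [c1, c0]], [c1, c2])
    p = np.roots([1.0, -phi1, -phi2])[0]        # one of the conjugate poles
    return float(np.abs(p) ** (2 * np.pi / np.abs(np.angle(p))))

# highly overlapping windows track gradual changes in the decay ratio
win, hop = 1000, 100
drs = np.array([decay_ratio(x[s:s + win])
                for s in range(0, x.size - win + 1, hop)])
print("mean decay-ratio estimate:", round(float(drs.mean()), 3))
```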

  3. Reducing the bias of estimates of genotype by environment interactions in random regression sire models

    OpenAIRE

    Meuwissen Theo HE; Ødegård Jørgen; Lillehammer Marie

    2009-01-01

    Abstract The combination of a sire model and a random regression term describing genotype by environment interactions may lead to biased estimates of genetic variance components because of heterogeneous residual variance. In order to test different models, simulated data with genotype by environment interactions, and dairy cattle data assumed to contain such interactions, were analyzed. Two animal models were compared to four sire models. Models differed in their ability to handle heterogeneo...

  4. Estimating the 3D Pore Size Distribution of Biopolymer Networks from Directionally Biased Data

    OpenAIRE

    Lang, Nadine R.; Münster, Stefan; Metzner, Claus; Krauss, Patrick; Schürmann, Sebastian; Lange, Janina; Aifantis, Katerina E.; Friedrich, Oliver; Fabry, Ben

    2013-01-01

    The pore size of biopolymer networks governs their mechanical properties and strongly impacts the behavior of embedded cells. Confocal reflection microscopy and second harmonic generation microscopy are widely used to image biopolymer networks; however, both techniques fail to resolve vertically oriented fibers. Here, we describe how such directionally biased data can be used to estimate the network pore size. We first determine the distribution of distances from random points in the fluid ph...

  5. How cognitive biases can distort environmental statistics: introducing the rough estimation task.

    Science.gov (United States)

    Wilcockson, Thomas D W; Pothos, Emmanuel M

    2016-04-01

    The purpose of this study was to develop a novel behavioural method to explore cognitive biases. The task, called the Rough Estimation Task, simply involves presenting participants with a list of words that can be in one of three categories: appetitive words (e.g. alcohol, food, etc.), neutral related words (e.g. musical instruments) and neutral unrelated words. Participants read the words and are then asked to state estimates for the percentage of words in each category. Individual differences in the propensity to overestimate the proportion of appetitive stimuli (alcohol-related or food-related words) in a word list were associated with behavioural measures (i.e. alcohol consumption, hazardous drinking, BMI, external eating and restrained eating, respectively), thereby providing evidence for the validity of the task. The task was also found to be associated with an eye-tracking attentional bias measure. The Rough Estimation Task is motivated in relation to intuitions with regard to both the behaviour of interest and the theory of cognitive biases in substance use. PMID:26866972

  6. An active contour model for the segmentation of images with intensity inhomogeneities and bias field estimation.

    Science.gov (United States)

    Huang, Chencheng; Zeng, Li

    2015-01-01

    Intensity inhomogeneity causes many difficulties in image segmentation and the understanding of magnetic resonance (MR) images. Bias correction is an important method for addressing the intensity inhomogeneity of MR images before quantitative analysis. In this paper, a modified model is developed for segmenting images with intensity inhomogeneity and estimating the bias field simultaneously. In the modified model, a clustering criterion energy function is defined by considering the difference between the measured image and the estimated image in a local region. By using this difference in the local region, the modified method can obtain accurate segmentation results and an accurate estimation of the bias field. The energy function is incorporated into a level set formulation with a level set regularization term, and the energy minimization is conducted by a level set evolution process. The proposed model was first formulated as a two-phase model and then extended to a multi-phase one. The experimental results demonstrate the advantages of our model in terms of accuracy and insensitivity to the location of the initial contours. In particular, our method has been applied to various synthetic and real images with desirable results. PMID:25837416

  7. Thermophysical Property Estimation by Transient Experiments: The Effect of a Biased Initial Temperature Distribution

    Directory of Open Access Journals (Sweden)

    Federico Scarpa

    2015-01-01

    Full Text Available The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP). The inverse technique demands knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all) are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, will propagate through the estimation process, and the accuracy of the reconstructed thermophysical property values could deteriorate. In this work the effect on the estimated thermophysical properties due to errors in the initial temperature distribution is investigated, along with a practical method to quantify this effect. Furthermore, a technique for compensating for this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is highly improved.
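
    The effect of a biased initial temperature distribution on an estimated property can be reproduced in a toy IHCP: fit the diffusivity of a 1-D slab by matching a forward model to sensor data, once with the correct initial field and once with a 10% amplitude error. The grid-search fit and all physical values are assumptions, not the paper's method.

```python
import numpy as np

# Toy 1-D transient-conduction experiment (all values assumed): explicit
# finite differences, zero-temperature boundaries, one sensor mid-slab.
nx, nt, dx, dt = 41, 400, 0.025, 0.002
alpha_true = 0.10                       # "true" thermal diffusivity

def simulate(alpha, T0):
    T = T0.copy()
    lam = alpha * dt / dx ** 2          # explicit scheme stable for lam <= 0.5
    out = np.empty(nt)
    for k in range(nt):
        T[1:-1] += lam * (T[2:] - 2 * T[1:-1] + T[:-2])
        out[k] = T[nx // 2]             # mid-slab temperature sensor
    return out

xg = np.linspace(0.0, 1.0, nx)
T0_true = 100.0 * np.sin(np.pi * xg)    # actual initial temperature field
y_meas = simulate(alpha_true, T0_true)  # noiseless synthetic "measurement"

def estimate(T0_guess):
    # least-squares fit of alpha by a simple grid search over forward runs
    grid = np.linspace(0.05, 0.15, 201)
    sse = [np.sum((simulate(a, T0_guess) - y_meas) ** 2) for a in grid]
    return grid[int(np.argmin(sse))]

a_hat_ok = estimate(T0_true)            # correct initial field
a_hat_bad = estimate(1.1 * T0_true)     # 10 % bias in the assumed field
print("alpha estimate, exact T0: ", a_hat_ok)
print("alpha estimate, biased T0:", a_hat_bad)
```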

  8. Bias Estimations for Ill-posed Problem of Celestial Positioning Using the Sun and Precision Analysis

    Directory of Open Access Journals (Sweden)

    ZHAN Yinhu

    2016-08-01

    Full Text Available Lunar/Mars rovers carry sun sensors for navigation; however, long-duration tracking of the sun degrades the real-time performance of navigation. This paper investigates an absolute positioning method that observes the sun over a very short tracking period, such as 1 or 2 minutes. A linear least squares model of the altitude positioning method is derived, and the ill-posed problem of celestial positioning using the sun is brought out for the first time. The singular value decomposition method is used to diagnose the ill-posed problem, and different bias estimations are employed and compared in simulated calculations. The results indicate the superiority of bias estimations, which can effectively improve the initial values. However, bias estimations are strongly affected by the initial values, because the initial values converge to a line that passes through the real value and is perpendicular to the direction of the sun. The research in this paper is of practical value.

  9. Bias and robustness of uncertainty components estimates in transient climate projections

    Science.gov (United States)

    Hingray, Benoit; Blanchet, Juliette; Vidal, Jean-Philippe

    2016-04-01

    A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources faces actually different problems. For instance and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be actually difficult if not impossible to separate. The estimate of scenario uncertainty, model uncertainty and internal variability components are thus likely to be not really robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with MonteCarlo simulations. 
For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
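    The bias described above can be illustrated numerically: with M chains of n members each, the empirical variance of the chain means overestimates model uncertainty by roughly the internal variability divided by n, and subtracting the pooled within-chain variance removes the bias. A minimal sketch (all values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 10, 3                       # 10 modeling chains, 3 members each (illustrative)
sig_model, sig_iv = 1.0, 2.0       # model uncertainty and internal variability (std. dev.)

naive, corrected = [], []
for _ in range(20_000):
    response = rng.normal(0.0, sig_model, M)                # true climate response per chain
    members = response[:, None] + rng.normal(0.0, sig_iv, (M, n))
    chain_means = members.mean(axis=1)
    v_naive = chain_means.var(ddof=1)                       # empirical variance of responses
    within = members.var(axis=1, ddof=1).mean()             # pooled internal variability
    naive.append(v_naive)
    corrected.append(v_naive - within / n)                  # bias-corrected estimator

print(round(float(np.mean(naive)), 2))      # inflated toward sig_model**2 + sig_iv**2 / n
print(round(float(np.mean(corrected)), 2))  # close to sig_model**2
```

With few members per chain the naive estimate is inflated by sig_iv**2 / n, which is exactly the effect the unbiased STANOVA/QEANOVA estimators are designed to remove.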

  10. Another Look at the EWMA Control Chart with Estimated Parameters

    NARCIS (Netherlands)

    N.A. Saleh; M.A. Mahmoud; L.A. Jones-Farmer; I. Zwetsloot; W.H. Woodall

    2015-01-01

    The authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the SDARL and percentiles of the ARL distribution when the process parameters are estimated.
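    A minimal sketch of the EWMA recursion with parameters estimated from a Phase I sample (constants and sample sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase I: estimate the process parameters from an in-control reference sample
phase1 = rng.normal(10.0, 2.0, 200)
mu_hat, sigma_hat = phase1.mean(), phase1.std(ddof=1)

# Phase II: EWMA recursion z_i = lam*x_i + (1-lam)*z_{i-1}, with L-sigma limits
lam, L = 0.1, 2.7
phase2 = rng.normal(10.0, 2.0, 100)
z = mu_hat
signals = 0
for i, x in enumerate(phase2, start=1):
    z = lam * x + (1 - lam) * z
    # exact (time-varying) standard error of the EWMA statistic
    se = sigma_hat * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    if abs(z - mu_hat) > L * se:
        signals += 1
print(signals)   # out-of-control signals raised on in-control data
```

Because mu_hat and sigma_hat vary from one Phase I sample to the next, the realized in-control ARL varies across practitioners, which is why the paper studies the SDARL and percentiles of the ARL distribution rather than its mean alone.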

  11. Kalman filter data assimilation: Targeting observations and parameter estimation

    International Nuclear Information System (INIS)

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
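    The benefit of variance-based targeting can be illustrated with a toy linear analog (not the LETKF or the chaotic model of the paper): two random-walk states, one scalar observation per step, and the targeted filter observing whichever state currently has the largest filter variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear system: two independent random-walk states, one scalar observation per step
F, Q, R = np.eye(2), np.diag([0.5, 0.05]), 0.1   # state 0 is the more volatile one

def run(target, steps=2000):
    x = np.zeros(2)                    # truth
    m, P = np.zeros(2), np.eye(2)      # filter mean and covariance
    errs = []
    for _ in range(steps):
        x = F @ x + rng.normal(0.0, np.sqrt(np.diag(Q)))
        m, P = F @ m, F @ P @ F.T + Q
        # targeting: observe the state with the largest filter variance
        i = int(np.argmax(np.diag(P))) if target else int(rng.integers(2))
        H = np.zeros((1, 2)); H[0, i] = 1.0
        y = H @ x + rng.normal(0.0, np.sqrt(R))
        S = H @ P @ H.T + R
        K = P @ H.T / S
        m = m + (K @ (y - H @ m)).ravel()
        P = P - K @ H @ P
        errs.append(float(np.sum((m - x) ** 2)))
    return float(np.mean(errs))

e_targeted, e_random = run(True), run(False)
print(e_targeted < e_random)   # targeting typically reduces state estimation error
```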

  12. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

    - identifiability has been treated in theory and practice in connection with parameter and signal estimation in induction motors. - a non-recursive prediction error method has successfully been used to estimate physically related parameters in a continuous-time model of the induction motor. The speed of the rotor has been used as a third input. - the rotor speed and the driving torque of the induction motor have successfully been estimated based on measurements of the terminal quantities only. The following methods have been applied: a recursive prediction error method and a method based on a steady-state model of... robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

  13. Kalman filter application for distributed parameter estimation in reactor systems

    International Nuclear Information System (INIS)

    An application of the Kalman filter has been developed for the real-time identification of a distributed parameter in a nuclear power plant. This technique can be used to improve numerical method-based best-estimate simulation of complex systems such as nuclear power plants. The application to a reactor system involves a unique modal model that approximates physical components, such as the reactor, as a coupled oscillator, i.e., a modal model with coupled modes. In this model both states and parameters are described by an orthogonal expansion. The Kalman filter with the sequential least-squares parameter estimation algorithm was used to estimate the modal coefficients of all states and one parameter. Results show that this state feedback algorithm is an effective way to parametrically identify a distributed parameter system in the presence of uncertainties

  14. Dynamic noise, chaos and parameter estimation in population biology

    OpenAIRE

    Stollenwerk, N.; Aguiar, M; Ballesteros, S.; Boto, J.; Kooi, B. W.; Mateus, L.

    2012-01-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models such as multi-strain dynamics to describe the virus–host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the fir...

  15. Estimation of Clarinet Reed Parameters by Inverse Modelling

    OpenAIRE

    CHATZIIOANNOU, Vasileios; van Walstijn, Maarten

    2012-01-01

    Analysis of the acoustical functioning of musical instruments invariably involves the estimation of model parameters. The broad aim of this paper is to develop methods for estimation of clarinet reed parameters that are representative of actual playing conditions. This presents various challenges because of the difficulties of measuring the directly relevant variables without interfering with the control of the instrument. An inverse modelling approach is therefore proposed, in which the equati...

  16. Estimation of differential code biases for Beidou navigation system using multi-GNSS observations: How stable are the differential satellite and receiver code biases?

    Science.gov (United States)

    Xue, Junchen; Song, Shuli; Zhu, Wenyao

    2016-04-01

    Differential code biases (DCBs) are important parameters that must be estimated accurately and reliably for high-precision GNSS applications. For optimal operational service performance of the Beidou navigation system (BDS), continuous monitoring and constant quality assessment of the BDS satellite DCBs are crucial. In this study, a global ionospheric model was constructed based on a dual system BDS/GPS combination. Daily BDS DCBs were estimated together with the total electron content from 23 months' multi-GNSS observations. The stability of the resulting BDS DCB estimates was analyzed in detail. It was found that over a long period, the standard deviations (STDs) for all satellite B1-B2 DCBs were within 0.3 ns (average: 0.19 ns) and for all satellite B1-B3 DCBs, the STDs were within 0.36 ns (average: 0.22 ns). For BDS receivers, the STDs were greater than for the satellites, with most values <2 ns. The DCBs of different receiver families are different. Comparison of the statistics of the short-term stability of satellite DCBs over different time intervals revealed that the difference in STD between 28- and 7-day intervals was small, with a maximum not exceeding 0.06 ns. In almost all cases, the difference in BDS satellite DCBs between two consecutive days was <0.8 ns. The main conclusion is that because of the stability of the BDS DCBs, they only require occasional estimation or calibration. Furthermore, the 30-day averaged satellite DCBs can be used reliably for the most demanding BDS applications.
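    The stability statistics reported above can be reproduced in miniature on a synthetic DCB series (the satellite value and noise level below are illustrative, loosely matched to the reported 0.19 ns average STD):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily B1-B2 DCB series for one satellite (ns), stable about a constant
days = 700
dcb = -4.1 + rng.normal(0.0, 0.19, days)    # 0.19 ns: the average satellite STD reported

std = dcb.std(ddof=1)                       # long-term stability measure
day_to_day = np.abs(np.diff(dcb))           # consecutive-day DCB differences

print(round(float(std), 2))                               # close to 0.19 ns
print(round(float(np.mean(day_to_day < 0.8)) * 100, 1))   # percent of days below 0.8 ns
```

A series this stable supports the paper's conclusion that occasional estimation, or a multi-day averaged DCB, suffices for demanding applications.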

  17. The impact of spurious shear on cosmological parameter estimates from weak lensing observables

    CERN Document Server

    Petri, Andrea; Haiman, Zoltan; Kratochvil, Jan M

    2014-01-01

    Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear with a power spectrum estimated for the LSST instrument. This allows us to quantify the errors and biases of the triplet $(\\Omega_m,w,\\sigma_8)$ derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MF), low--order moments (LM), and peak counts (PK). Our main results are: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of $\\sigma^2_{sys}\\approx 10^{-7}$, biases from the PS and LM would be ...

  18. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    Science.gov (United States)

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    Simultaneous optimization of in vitro ligand binding studies was performed using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters, and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088

  19. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    Science.gov (United States)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-05-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  20. Performances of Different Algorithms for Tracer Kinetics Parameters Estimation in Breast DCE-MRI

    Directory of Open Access Journals (Sweden)

    Roberta Fusco

    2014-07-01

    Full Text Available The objective of this study was to evaluate the performances of different algorithms for tracer kinetics parameter estimation in breast Dynamic Contrast Enhanced-MRI. We considered four algorithms: two non-iterative algorithms based on impulsive and linear approximations of the Arterial Input Function, respectively; and two iterative algorithms widely used for non-linear regression (Levenberg-Marquardt, LM, and VARiable PROjection, VARPRO). For each value of the kinetic parameters within a physiological range, we simulated 100 noisy curves and estimated the parameters with all algorithms. Sampling time, total duration and noise level were chosen as in a typical breast examination. We compared the performances with respect to the Cramer-Rao Lower Bound (CRLB). Moreover, in order to gain further insight we applied the algorithms to a real breast examination. The accuracy of all the methods depends on the specific value of the parameters. The methods are in general biased; however, VARPRO showed small bias in a larger region of the parameter space than the other methods. Moreover, VARPRO approached the CRLB, and its number of iterations was smaller than for LM. In the specific conditions analyzed, VARPRO showed better performances with respect to LM and to the non-iterative algorithms.
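    As a hedged illustration of the non-iterative idea (not the paper's exact algorithms), a Tofts-type response to an impulsive arterial input is log-linear in its parameters, so a simple linear fit recovers them:

```python
import numpy as np

rng = np.random.default_rng(4)

# Tofts-type response to an impulsive arterial input: C(t) = Ktrans * exp(-kep * t)
t = np.arange(5.0, 300.0, 5.0)           # ~5 s sampling over 5 min, as in a breast exam
ktrans, kep = 0.25, 0.01                 # illustrative parameter values
c = ktrans * np.exp(-kep * t) + rng.normal(0.0, 0.002, t.size)

# Non-iterative estimate: the model is linear in (ln Ktrans, -kep) after a log transform
slope, intercept = np.polyfit(t, np.log(np.clip(c, 1e-6, None)), 1)
print(round(float(np.exp(intercept)), 2), round(float(-slope), 3))   # near (0.25, 0.01)
```

An iterative Levenberg-Marquardt fit of the same curve could be obtained with any standard non-linear least-squares routine; the log-linear estimate above is the kind of fast, non-iterative approximation the study compares against.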

  1. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  2. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
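    For reference, a single-response Emax relation E(d) = E0 + Emax*d/(ED50 + d) can be estimated by profiling ED50, since the model is linear in (E0, Emax) once ED50 is fixed (values below are illustrative, not the diabetes data of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Emax dose-response: E(d) = E0 + Emax * d / (ED50 + d)
dose = np.array([0, 5, 10, 25, 50, 100, 200], float)
e0, emax, ed50 = 1.0, 8.0, 30.0        # illustrative true values
y = e0 + emax * dose / (ed50 + dose) + rng.normal(0.0, 0.2, dose.size)

# Profile ED50: for each fixed ED50 the model is linear in (E0, Emax)
best = None
for ed50_try in np.linspace(1, 200, 2000):
    X = np.column_stack([np.ones_like(dose), dose / (ed50_try + dose)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    if best is None or rss < best[0]:
        best = (rss, ed50_try, beta)
print(round(best[1], 1), np.round(best[2], 2))   # ED50, then (E0, Emax)
```

The bivariate case studied in the paper fits two such relations jointly, which is where system estimation gains over equation-by-equation fitting when the residuals are correlated.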

  3. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    The two-dimensional hidden periodic model is an important model in random fields, used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed, and the strong consistency of the estimators is proved.
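    A common estimation device for such models is the peak of the two-dimensional periodogram; a minimal sketch on a synthetic field (this illustrates the model class, not necessarily the estimator of the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hidden periodic model: X(s,t) = A*cos(w1*s + w2*t + phi) + noise on an N x N grid
N = 64
w1, w2 = 2 * np.pi * 10 / N, 2 * np.pi * 7 / N     # frequencies on the Fourier grid
s, t = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
x = 2.0 * np.cos(w1 * s + w2 * t + 0.5) + rng.normal(0.0, 1.0, (N, N))

# Estimate (w1, w2) from the peak of the 2D periodogram
P = np.abs(np.fft.fft2(x)) ** 2
P[0, 0] = 0.0                                      # ignore the mean
k1, k2 = np.unravel_index(np.argmax(P), P.shape)
k1, k2 = min(k1, N - k1), min(k2, N - k2)          # fold the conjugate-symmetric peak
print(k1, k2)                                      # 10 7
```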

  4. A Fully Conditional Estimation Procedure for Rasch Model Parameters.

    Science.gov (United States)

    Choppin, Bruce

    A strategy for overcoming problems with the Rasch model's inability to handle missing data involves a pairwise algorithm which manipulates the data matrix to separate out the information needed for the estimation of item difficulty parameters in a test. The method of estimation compares two or three items at a time, separating out the ability…

  5. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...

  6. Variance gamma process simulation and its parameter estimation

    OpenAIRE

    Kuzmina, A. V.

    2010-01-01

    The variance gamma process is a three-parameter process. It is simulated as a gamma time-changed Brownian motion and as a difference of two independent gamma processes. Estimates of the parameters of the simulated variance gamma process are presented in this paper.
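    Both simulation routes mentioned in the abstract can be sketched directly; the difference-of-gammas rates below follow the standard Madan-Carr-Chang parameterization, and both representations share the same mean and variance:

```python
import numpy as np

rng = np.random.default_rng(7)

# Variance gamma parameters: drift theta, volatility sigma, gamma variance rate nu
theta, sigma, nu = 0.1, 0.3, 0.2
T, n = 1.0, 200_000

# 1) Gamma time-changed Brownian motion: X = theta*G + sigma*W(G)
G = rng.gamma(shape=T / nu, scale=nu, size=n)          # gamma subordinator at time T
x1 = theta * G + sigma * np.sqrt(G) * rng.normal(size=n)

# 2) Difference of two independent gamma variates (Madan-Carr-Chang rates)
mu_p = 0.5 * np.sqrt(theta**2 + 2 * sigma**2 / nu) + theta / 2
mu_m = mu_p - theta
x2 = rng.gamma(T / nu, nu * mu_p, n) - rng.gamma(T / nu, nu * mu_m, n)

# Both share mean theta*T and variance (sigma**2 + nu*theta**2)*T
print(round(float(x1.mean()), 2), round(float(x2.mean()), 2))   # both ~ 0.10
print(round(float(x1.var()), 2), round(float(x2.var()), 2))     # both ~ 0.09
```

Matching the sample mean, variance (and, in practice, skewness and kurtosis) against these closed-form moments is the moment-based route to estimating (theta, sigma, nu) from simulated or observed increments.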

  7. Computational methods for estimation of parameters in hyperbolic systems

    Science.gov (United States)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.

  8. Estimation of Parameters of the Beta-Extreme Value Distribution

    Directory of Open Access Journals (Sweden)

    Zafar Iqbal

    2008-09-01

    Full Text Available In this research paper, the Beta-Extreme Value (Type III) distribution developed by Zafar and Aleem (2007) is considered, and its parameters are estimated using the moments of the distribution, both when the parameters 'm' and 'n' are real and when they are integers; the rth moments about the origin are then compared for the two cases. Finally, the method of maximum likelihood is used to estimate the unknown parameters of the Beta-Extreme Value (Type III) distribution.

  9. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  10. Systematic errors in low latency gravitational wave parameter estimation impact electromagnetic follow-up observations

    CERN Document Server

    Littenberg, Tyson B; Coughlin, Scott; Kalogera, Vicky

    2016-01-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by $>5\\sigma$ using simple-precession waveforms and in excess of $20\\sigma$ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find searched areas are up to a ...

  11. Parameter Estimation and Experimental Design in Groundwater Modeling

    Institute of Scientific and Technical Information of China (English)

    SUN Ne-zheng

    2004-01-01

    This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.

  12. Global parameter estimation methods for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Poovathingal Suresh

    2010-08-01

    Full Text Available Abstract Background The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g., robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality.
Conclusions The parameter

  13. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Directory of Open Access Journals (Sweden)

    Colin Southwell

    Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
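    The re-construction step can be sketched as dividing a raw count by an attendance (availability) proportion and propagating the camera-derived uncertainty in that proportion by simulation (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(8)

# Re-construct an abundance estimate: raw count / availability at the survey date,
# propagating camera-derived uncertainty in the adjustment factor by simulation.
raw_count = 12_000                        # hypothetical historical nest count
avail = rng.beta(80, 20, 50_000)          # camera-based attendance proportion (~0.8)

revised = raw_count / avail
lo, mid, hi = np.percentile(revised, [2.5, 50, 97.5])
print(int(lo), int(mid), int(hi))         # revised estimate exceeds the raw count
```

Because the adjustment factor is below one and uncertain, the revised estimates are, as the paper found, both higher and more uncertain than the published counts.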

  14. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.

  15. Sources of bias in peoples' social-comparative estimates of food consumption.

    Science.gov (United States)

    Scherer, Aaron M; Bruchmann, Kathryn; Windschitl, Paul D; Rose, Jason P; Smith, Andrew R; Koestner, Bryan; Snetselaar, Linda; Suls, Jerry

    2016-06-01

    Understanding how healthfully people think they eat compared to others has implications for their motivation to engage in dietary change and the adoption of health recommendations. Our goal was to investigate the scope, sources, and measurements of bias in comparative food consumption beliefs. Across 4 experiments, participants made direct comparisons of how their consumption compared to their peers' consumption and/or estimated their personal consumption of various foods/nutrients and the consumption by peers, allowing the measurement of indirect comparisons. Critically, the healthiness and commonness of the foods varied. When the commonness and healthiness of foods both varied, indirect comparative estimates were more affected by the healthiness of the food, suggesting a role for self-serving motivations, while direct comparisons were more affected by the commonness of the food, suggesting egocentrism as a nonmotivated source of comparative bias. When commonness did not vary, the healthiness of the foods impacted both direct and indirect comparisons, with a greater influence on indirect comparisons. These results suggest that both motivated and nonmotivated sources of bias should be taken into account when creating interventions aimed at improving eating habits and highlights the need for researchers to be sensitive to how they measure perceptions of comparative eating habits. (PsycINFO Database Record) PMID:27054551

  16. Image-driven parameter estimation for low grade gliomas

    CERN Document Server

    Gholami, Amir; Biros, George

    2014-01-01

    We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate tumor infiltration into the brain parenchyma for a reaction-diffusion tumor growth model. We use a constrained optimization formulation that results in a system of nonlinear partial differential equations (PDEs). In our formulation, we estimate the parameters using the data from segmented images at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging (DTI). The parameters we seek to estimate are the spatial tumor concentration and the extent of anisotropic tumor diffusion. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation, outline the numerical algorithms and conclude with numerical experiments on synthetic datasets. Our results show the feasibility of the proposed methodology.

  17. The Robustness Optimization of Parameter Estimation in Chaotic Control Systems

    Directory of Open Access Journals (Sweden)

    Zhen Xu

    2014-10-01

    Full Text Available The standard particle swarm optimization algorithm suffers from poor adaptability and weak robustness in the parameter estimation of chaotic control systems. In light of this, this paper puts forward a new estimation model based on an improved particle swarm optimization algorithm. It first constrains the search space of the population with Tent and Logistic double mapping to regulate the initialized population size, optimizes the fitness value with an evolutionary-state identification strategy to avoid premature convergence, optimizes the inertia weight with a nonlinear decrease strategy to reach better global and local optima, and then optimizes the iteration of the particle swarm optimization algorithm with the hybridization concept from genetic algorithms. Finally, the model is applied to parameter estimation in chaotic control systems. Simulation results show that the proposed parameter estimation model achieves higher accuracy, anti-noise ability and robustness than the model based on the standard particle swarm optimization algorithm.
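    A minimal global-best PSO for parameter estimation can be sketched as follows; this uses plain uniform initialization and fixed coefficients rather than the paper's Tent/Logistic initialization and adaptive strategies, and fits a simple two-parameter signal instead of a chaotic system:

```python
import numpy as np

rng = np.random.default_rng(9)

# Noisy observations of a two-parameter signal y = a * sin(b * t)
t = np.linspace(0.0, 10.0, 50)
y = 2.0 * np.sin(1.5 * t) + rng.normal(0.0, 0.1, t.size)   # true (a, b) = (2.0, 1.5)

def cost(p):
    return float(np.mean((p[0] * np.sin(p[1] * t) - y) ** 2))

# Minimal global-best PSO with a fixed inertia weight
lo, hi = np.array([0.0, 0.5]), np.array([5.0, 3.0])
pos = rng.uniform(lo, hi, (30, 2))
vel = np.zeros((30, 2))
pbest = pos.copy()
pcost = np.array([cost(p) for p in pos])
for _ in range(100):
    g = pbest[pcost.argmin()]                               # global best particle
    r1, r2 = rng.random((30, 2)), rng.random((30, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]

a_hat, b_hat = pbest[pcost.argmin()]
print(round(float(a_hat), 1), round(float(b_hat), 1))       # near (2.0, 1.5)
```

The improvements the paper proposes (chaotic-map initialization, evolutionary-state identification, nonlinear inertia decrease, genetic hybridization) all modify pieces of this basic loop.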

  18. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
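
The inverse-problem setup described above can be sketched in miniature. The following is an illustrative output-least-squares example, not code from the abstract: a scalar diffusivity of a 1D heat equation is recovered from synthetic data by scanning candidate values (the function names and grid sizes are invented for this sketch; real large-scale problems use iterative solvers rather than a grid scan).

```python
import numpy as np

def solve_heat(kappa, u0, dt=1e-4, dx=0.05, steps=500):
    """Explicit finite-difference forward solve of u_t = kappa * u_xx
    with fixed boundary values (stable when kappa*dt/dx**2 <= 0.5)."""
    u = u0.copy()
    r = kappa * dt / dx**2
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

x = np.linspace(0.0, 1.0, 21)
u0 = np.sin(np.pi * x)                 # initial temperature profile
kappa_true = 0.8
data = solve_heat(kappa_true, u0)      # synthetic "observed" data

# Output least squares: scan candidate diffusivities, keep the best fit.
candidates = np.linspace(0.1, 1.5, 141)
misfits = [np.sum((solve_heat(k, u0) - data) ** 2) for k in candidates]
kappa_hat = candidates[int(np.argmin(misfits))]
print(f"estimated kappa = {kappa_hat:.2f}")
```

With noise-free synthetic data the misfit is minimized at the true diffusivity, so the scan recovers it to within the grid spacing.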

  19. Assessment of exploration bias in data-driven predictive models and the estimation of undiscovered resources

    Science.gov (United States)

    Coolbaugh, M.F.; Raines, G.L.; Zehner, R.E.

    2007-01-01

    The spatial distribution of discovered resources may not fully mimic the distribution of all such resources, discovered and undiscovered, because the process of discovery is biased by accessibility factors (e.g., outcrops, roads, and lakes) and by exploration criteria. In data-driven predictive models, the use of training sites (resource occurrences) biased by exploration criteria and accessibility does not necessarily translate to a biased predictive map. However, problems occur when evidence layers correlate with these same exploration factors. These biases then can produce a data-driven model that predicts known occurrences well, but poorly predicts undiscovered resources. Statistical assessment of correlation between evidence layers and map-based exploration factors is difficult because it is difficult to quantify the "degree of exploration." However, if such a degree-of-exploration map can be produced, the benefits can be enormous. Not only does it become possible to assess this correlation, but it becomes possible to predict undiscovered, instead of discovered, resources. Using geothermal systems in Nevada, USA, as an example, a degree-of-exploration model is created, which then is resolved into purely explored and unexplored equivalents, each occurring within coextensive study areas. A weights-of-evidence (WofE) model is built first without regard to the degree of exploration, and then a revised WofE model is calculated for the "explored fraction" only. Differences in the weights between the two models provide a correlation measure between the evidence and the degree of exploration. The data used to build the geothermal evidence layers are perceived to be independent of degree of exploration. Nevertheless, the evidence layers correlate with exploration because exploration has preferred the same favorable areas identified by the evidence patterns. In this circumstance, however, the weights for the "explored" WofE model minimize this bias. 
Using these revised
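
The weights-of-evidence calculation underlying the record above can be sketched with hypothetical counts (the numbers below are invented for illustration; they are not from the Nevada geothermal study). W+ and W- are log-ratios of the conditional probabilities of the evidence pattern given presence versus absence of an occurrence:

```python
import math

def wofe_weights(n_dep_on, n_dep_off, n_area_on, n_area_off):
    """Weights of evidence for a binary evidence layer.
    W+ = ln[P(on|deposit)/P(on|no deposit)]; W- analogously for 'off'."""
    nondep_on = n_area_on - n_dep_on       # non-deposit cells on the pattern
    nondep_off = n_area_off - n_dep_off    # non-deposit cells off the pattern
    p_on_d = n_dep_on / (n_dep_on + n_dep_off)
    p_on_nd = nondep_on / (nondep_on + nondep_off)
    w_plus = math.log(p_on_d / p_on_nd)
    w_minus = math.log((1.0 - p_on_d) / (1.0 - p_on_nd))
    return w_plus, w_minus

# Hypothetical counts: 40 of 50 occurrences fall on the evidence pattern,
# which covers 2000 of 10000 unit cells (8000 cells are off the pattern).
wp, wm = wofe_weights(40, 10, 2000, 8000)
print(f"W+ = {wp:.2f}, W- = {wm:.2f}, contrast C = {wp - wm:.2f}")
```

Comparing such weights computed over the full study area versus over the explored fraction only is, per the abstract, how correlation between evidence and degree of exploration is measured.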

  20. Estimation of bias and variance of measurements made from tomography scans

    Science.gov (United States)

    Bradley, Robert S.

    2016-09-01

    Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
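
The simulation-extrapolation (SIMEX) idea mentioned above originates in measurement-error regression. As a hedged illustration in that original setting (not the tomography adaptation of the paper): extra noise of known variance is deliberately added, the estimate is tracked as a function of the added-noise multiplier ζ, and a quadratic fit is extrapolated back to ζ = -1, i.e. zero measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sigma_u = 20000, 2.0, 0.8

x = rng.normal(0.0, 1.0, n)                  # true covariate
w = x + rng.normal(0.0, sigma_u, n)          # error-contaminated measurement
y = beta_true * x + rng.normal(0.0, 0.5, n)

def slope(pred, resp):
    return np.polyfit(pred, resp, 1)[0]

naive = slope(w, y)                          # attenuated towards zero

# SIMEX: add noise of variance zeta*sigma_u**2, average the slope over
# replicates, then extrapolate the trend back to zeta = -1.
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for z in zetas:
    reps = [slope(w + rng.normal(0.0, np.sqrt(z) * sigma_u, n), y)
            for _ in range(40)]
    slopes.append(np.mean(reps))
coef = np.polyfit(zetas, slopes, 2)          # quadratic extrapolant
simex = np.polyval(coef, -1.0)
print(f"naive {naive:.3f}  simex {simex:.3f}  true {beta_true}")
```

The naive slope is biased low by the attenuation factor; the SIMEX extrapolation removes most, though not all, of that bias.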

  1. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    CERN Document Server

    Albers, DJ

    2011-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
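
The core idea above, that the mutual information between widely separated points of the same series approximates the estimator's bias, can be sketched as follows (an illustrative histogram estimator on an AR(1) series; the bin count and lags are arbitrary choices, not from the paper):

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Plug-in histogram estimate of mutual information (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
n = 50000
x = np.empty(n)
x[0] = 0.0
eps = rng.normal(0.0, 1.0, n)
for t in range(1, n):                  # AR(1): dependence decays with lag
    x[t] = 0.9 * x[t - 1] + eps[t]

mi_short = mutual_info(x[:-1], x[1:])      # lag 1: strong dependence
mi_far = mutual_info(x[:-500], x[500:])    # lag 500: dependence ~ gone
print(f"MI(lag 1) = {mi_short:.3f}, MI(lag 500) = {mi_far:.4f} (~ bias)")
```

At large lag the true mutual information is essentially zero, so whatever the estimator reports there is, to first order, its finite-sample bias.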

  2. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages are not easy to use. A user-interactive parameter estimation program was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to solve such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach provides good agreement between predicted and observed data with relatively little computing time and few iterations.

  3. The Minimax Estimator of Stochastic Regression Coefficients and Parameters in the Class of All Estimators

    Institute of Scientific and Technical Information of China (English)

    Li Wen XU; Song Gui WANG

    2007-01-01

    In this paper, the authors address the problem of the minimax estimator of linear combinations of stochastic regression coefficients and parameters in the general normal linear model with random effects. Under a quadratic loss function, the minimax property of linear estimators is investigated. In the class of all estimators, the minimax estimator of estimable functions, which is unique with probability 1, is obtained under a multivariate normal distribution.

  4. Evaluating parasite densities and estimation of parameters in transmission systems

    Directory of Open Access Journals (Sweden)

    Heinzmann D.

    2008-09-01

    Full Text Available Mathematical modelling of parasite transmission systems can provide useful information about host-parasite interactions, parasite biology and parasite population dynamics. In addition, good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models must then be confronted with data to obtain parameter estimates, and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by the highly aggregated distribution of parasites within hosts, which poses additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them, and the use of these estimates in a transmission model.
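
The density descriptors named above (prevalence, abundance, aggregation) can be illustrated with simulated worm burdens. A common convention, assumed here rather than taken from this record, models aggregated burdens as negative binomial with mean m and aggregation parameter k, estimated by moments as k = m²/(s² - m):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated worm burdens: negative binomial with mean m and aggregation k
# (numpy's parameterisation: n = k, p = k / (k + m)).
k_true, m_true, hosts = 1.5, 8.0, 200000
counts = rng.negative_binomial(k_true, k_true / (k_true + m_true), hosts)

prevalence = np.mean(counts > 0)       # fraction of infected hosts
abundance = counts.mean()              # mean parasites per host

# Moment estimator of the aggregation parameter: k = m^2 / (s^2 - m).
s2 = counts.var()
k_hat = abundance**2 / (s2 - abundance)
print(f"prevalence {prevalence:.3f}, abundance {abundance:.1f}, k {k_hat:.2f}")
```

Small k (well below 1 in many field systems) means strong aggregation: most hosts carry few parasites while a few carry many, which is the complication for transmission-parameter estimation that the abstract mentions.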

  5. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.

  6. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    Science.gov (United States)

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  7. Estimation of dynamical model parameters taking into account undetectable marker values

    Directory of Open Access Journals (Sweden)

    Trimoulet Pascale

    2006-08-01

    Full Text Available Abstract Background Mathematical models are widely used for studying the dynamics of infectious agents such as hepatitis C virus (HCV). Most often, model parameters are estimated using standard least-squares procedures for each individual. Hierarchical models have been proposed in such applications. However, another issue is the left-censoring (undetectable values) of plasma viral load due to the lack of sensitivity of the assays used for quantification. A method is proposed to take left-censored values into account when estimating the parameters of nonlinear mixed models, and its impact is demonstrated through a simulation study and an actual clinical trial of anti-HCV drugs. Methods The method consists of a full likelihood approach distinguishing the contribution of observed and left-censored measurements, assuming a lognormal distribution of the outcome. Parameters of the analytical solution of the system of differential equations, taking left-censoring into account, are estimated using standard software. Results A simulation study with only 14% of measurements left-censored showed that model parameters were largely biased (from -55% to +133% depending on the parameter), with the exception of the estimate of the initial outcome value, when left-censored viral load values are replaced by the value of the threshold. When left-censoring was taken into account, the relative bias on fixed effects was 2% or less. Parameters were then estimated using the 100 measurements of HCV RNA available (with 12% left-censored values) during the first 4 weeks following treatment initiation in the 17 patients included in the trial. Differences between estimates according to the method used were clinically significant, particularly for the death rate of infected cells. With the crude approach the estimate was 0.13 day⁻¹ (95% confidence interval [CI]: 0.11; 0.17), compared to 0.19 day⁻¹ (CI: 0.14; 0.26) when taking left-censoring into account. The relative differences between
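
The censored-likelihood principle above, density terms for detected values and a cumulative-probability term for values below the detection limit, can be sketched in a simplified one-parameter setting (normal log-values with known variance and a grid-search MLE; the paper's actual models are nonlinear mixed-effects ODE models, which this does not reproduce):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
mu_true, sigma, lod = 2.0, 1.0, 1.5       # lod = limit of detection (log scale)
logs = rng.normal(mu_true, sigma, 3000)    # true log viral loads
observed = np.maximum(logs, lod)           # values below lod are censored
censored = logs < lod

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def neg_loglik(mu):
    """Density for detected values, CDF mass for left-censored ones."""
    ll = 0.0
    for v, c in zip(observed, censored):
        if c:
            ll += math.log(norm_cdf((lod - mu) / sigma))
        else:
            ll += -0.5 * ((v - mu) / sigma) ** 2 - 0.5 * math.log(2.0 * math.pi)
    return -ll

# Crude approach: substitute the detection limit for censored values.
crude = observed.mean()
grid = np.linspace(0.0, 4.0, 201)
mle = grid[int(np.argmin([neg_loglik(m) for m in grid]))]
print(f"crude {crude:.2f}  censored-likelihood MLE {mle:.2f}  true {mu_true}")
```

Substituting the threshold biases the mean upward, while the full-likelihood estimate is close to the truth, mirroring the bias pattern reported in the simulation study.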

  8. Misleading population estimates: biases and consistency of visual surveys and matrix modelling in the endangered bearded vulture.

    Directory of Open Access Journals (Sweden)

    Antoni Margalida

    Full Text Available Conservation strategies for long-lived vertebrates require accurate estimates of parameters relating to population size, the number of non-breeding individuals (the "cryptic" fraction of the population) and the age structure. Frequently, visual survey techniques are used to make these estimates, but the accuracy of these approaches is questionable, mainly because of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (Gypaetus barbatus) population from visual surveys performed at supplementary feeding stations with data derived from population matrix-modelling approximations. Our results suggest that visual surveys overestimate the number of immature (<6 y.o.) individuals, whereas adults (>6 y.o.) were underestimated in comparison with the predictions of a population model using a stable-age distribution. In addition, we found that visual surveys did not provide conclusive information on true variations in the size of the focal population. Our results suggest that although long-term studies (i.e. population matrix modelling based on capture-recapture procedures) are a more time-consuming method, they provide more reliable and robust estimates of the population parameters needed in designing and applying conservation strategies. The findings shown here are likely transferable to the management and conservation of other long-lived vertebrate populations that share similar life-history traits and ecological requirements.

  9. Error and bias in size estimates of whale sharks: implications for understanding demography.

    Science.gov (United States)

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species. PMID:27069656

  11. Cosmological Parameter Estimation and Window Function in Counts-in-Cell Analysis

    Science.gov (United States)

    Murata, Y.; Matsubara, T.

    2006-11-01

    We estimate the cosmological parameter bounds expected from a counts-in-cells analysis of the galaxy distributions of SDSS samples, namely the Main Galaxies (MGs) and the Luminous Red Galaxies (LRGs). We use the m-weight Epanechnikov kernel as the window function, with the expectation of improving the parameter bounds. We apply the Fisher information matrix analysis, which can estimate the minimum expected parameter bounds without any data. In this analysis, we derive a covariance matrix that accounts for the overlapping of cells. As a result, we found that the signal-to-noise ratio of the LRG sample is higher than that of the MG sample because only data on linear scales are used; therefore, the LRG sample is more suitable for parameter estimation. For the LRG sample, about six hundred data points are sufficient to obtain the maximum effect on the parameter bounds. A large parameter set results in poor bounds because of degeneracies; the matter density, the baryon fraction, the neutrino density and σ8, together with the amplitude of the power spectrum, the linear bias and the Kaiser effect, seems to be an appropriate set.
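
The Fisher-matrix forecast used above can be illustrated on a toy two-parameter model (a straight-line fit with Gaussian noise; this stands in for the counts-in-cells likelihood, which is far richer). The forecast needs only the model gradients and the noise level, no data, and the inverse Fisher matrix gives the minimum attainable parameter bounds and their degeneracy:

```python
import numpy as np

# Fisher forecast for d(x) = a*x + b with independent Gaussian noise sigma:
# F_ij = sum_k (dm/dtheta_i)(dm/dtheta_j) / sigma^2.
x = np.linspace(0.0, 1.0, 100)
sigma = 0.1
grad = np.stack([x, np.ones_like(x)])       # [dm/da, dm/db] at each point
F = grad @ grad.T / sigma**2
cov = np.linalg.inv(F)
bounds = np.sqrt(np.diag(cov))              # minimum 1-sigma parameter bounds
corr = cov[0, 1] / (bounds[0] * bounds[1])  # degeneracy between a and b
print(f"sigma_a {bounds[0]:.4f}  sigma_b {bounds[1]:.4f}  corr {corr:.2f}")
```

The negative correlation between slope and intercept is the toy analogue of the parameter degeneracies that, per the abstract, degrade bounds when too many parameters are fitted simultaneously.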

  12. Parameter estimation for chaotic systems by particle swarm optimization

    International Nuclear Information System (INIS)

    Parameter estimation for chaotic systems is an important issue in nonlinear science and has attracted increasing interest from various research fields; it can essentially be formulated as a multi-dimensional optimization problem. As a novel evolutionary computation technique, particle swarm optimization (PSO) has attracted much attention and wide application owing to its simple concept, easy implementation and quick convergence. However, to the best of our knowledge, there is no published work on PSO for estimating the parameters of chaotic systems. In this paper, a PSO approach is applied to estimate the parameters of the Lorenz system. Numerical simulation and comparisons demonstrate the effectiveness and robustness of PSO. Moreover, the effect of population size on optimization performance is investigated as well.
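
The approach above can be sketched end to end: integrate the Lorenz system at candidate parameters, score them by mean-squared misfit against a synthetic trajectory, and minimize with a plain gbest PSO. All settings (swarm size, coefficients, bounds, trajectory length) are illustrative choices, not the paper's:

```python
import numpy as np

def lorenz_traj(params, x0, dt=0.01, steps=200):
    """RK4 integration of the Lorenz system for (sigma, rho, beta)."""
    s, r, b = params
    def f(v):
        return np.array([s * (v[1] - v[0]),
                         v[0] * (r - v[2]) - v[1],
                         v[0] * v[1] - b * v[2]])
    v, out = np.array(x0, float), []
    for _ in range(steps):
        k1 = f(v); k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
        v = v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(v.copy())
    return np.array(out)

true_params = np.array([10.0, 28.0, 8.0 / 3.0])
data = lorenz_traj(true_params, [1.0, 1.0, 1.0])
cost = lambda p: np.mean((lorenz_traj(p, [1.0, 1.0, 1.0]) - data) ** 2)

# Plain (gbest) PSO over the 3-dimensional parameter box.
rng = np.random.default_rng(4)
lo, hi = np.array([5.0, 20.0, 1.0]), np.array([15.0, 35.0, 5.0])
pos = rng.uniform(lo, hi, (20, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.array([cost(p) for p in pos])
init_best = pcost.min()
gbest = pbest[np.argmin(pcost)]
for _ in range(40):
    r1, r2 = rng.random((20, 3)), rng.random((20, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pcost
    pbest[better], pcost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pcost)]
print("estimated (sigma, rho, beta):", np.round(gbest, 2))
```

Because the global best is only ever replaced by a lower-cost position, the misfit is non-increasing over iterations; a short trajectory window is used because chaotic divergence makes long-window misfits hard to navigate.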

  13. Adaptive distributed parameter and input estimation in linear parabolic PDEs

    KAUST Repository

    Mechhoud, Sarra

    2016-01-01

    In this paper, we discuss the on-line estimation of the distributed source term, diffusion and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions for the simultaneous estimation of the input and the parameters are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on a plant signal-richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on a tokamak plasma heat transport model using simulated data.

  14. Parameter estimation with an iterative version of the adaptive Gaussian mixture filter

    Science.gov (United States)

    Stordal, A.; Lorentzen, R.

    2012-04-01

    The adaptive Gaussian mixture filter (AGM) was introduced in Stordal et al. (ECMOR 2010) as a robust filter technique for large-scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). It consists of two analysis steps: one linear update and one weighting/resampling step. The bias of AGM is determined by two parameters: an adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and a pre-determined bandwidth parameter which decides the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases with increasing sample size, the filter can achieve asymptotic optimality. For large-scale applications with a limited sample size, the filter solution may be far from optimal, as the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be selected significantly different from zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. In the iterative AGM we take advantage of the fact that the history-matching problem is usually estimation of parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter close to one; hence the bias of the filter solution is small. To obtain this scenario, we iteratively run the AGM through the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of
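
The weight-uniformization device mentioned above can be illustrated in isolation. The sketch below assumes a common AGM-style blend, w_adapt = alpha*w + (1-alpha)/N, and measures its effect through the effective sample size; the log-likelihood values are synthetic:

```python
import numpy as np

def effective_sample_size(w):
    """ESS of normalized importance weights: 1 / sum(w_i^2)."""
    return 1.0 / np.sum(w**2)

rng = np.random.default_rng(5)
loglik = rng.normal(-50.0, 5.0, 100)   # hypothetical per-sample log-likelihoods
w = np.exp(loglik - loglik.max())
w /= w.sum()                            # raw importance weights (near-collapsed)

# Blend towards uniform with strength alpha; alpha = 1 recovers raw weights.
alpha = 0.3
w_adapt = alpha * w + (1.0 - alpha) / len(w)
ess_raw = effective_sample_size(w)
ess_adapt = effective_sample_size(w_adapt)
print(f"ESS raw {ess_raw:.1f} -> adaptive {ess_adapt:.1f} of {len(w)}")
```

The blend provably increases ESS whenever the raw weights are non-uniform (sum of squares becomes alpha²·sum(w²) + (1-alpha²)/N), which is exactly the collapse-avoidance versus bias trade-off the abstract describes.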

  15. An own-age bias in age estimation of faces in children and adults.

    OpenAIRE

    Moyse, Evelyne; Brédart, Serge

    2010-01-01

    The aim of the present study was to assess the occurrence of an own-age bias in age estimation performance (better performance for faces from the same age range as that of the beholder) using an experimental design inspired by research on the own-race effect. The age of participants (10 to 14 year old children and 20 to 30 year old adults) was an independent factor that was crossed with the age of the stimuli (faces of 10 to 14 year old children and faces of 20 to 30 year old adults), the

  16. GLONASS fractional-cycle bias estimation across inhomogeneous receivers for PPP ambiguity resolution

    Science.gov (United States)

    Geng, Jianghui; Bock, Yehuda

    2016-04-01

    The key issue to enable precise point positioning with ambiguity resolution (PPP-AR) is to estimate fractional-cycle biases (FCBs), which mainly relate to receiver and satellite hardware biases, over a network of reference stations. While this has been well achieved for GPS, FCB estimation for GLONASS is difficult because (1) satellites do not share the same frequencies as a result of Frequency Division Multiple Access (FDMA) signals; (2) and even worse, pseudorange hardware biases of receivers vary in an irregular manner with manufacturers, antennas, domes, firmware, etc., which especially complicates GLONASS PPP-AR over inhomogeneous receivers. We propose a general approach where external ionosphere products are introduced into GLONASS PPP to estimate precise FCBs that are less impaired by pseudorange hardware biases of diverse receivers to enable PPP-AR. One month of GLONASS data at about 550 European stations were processed. From an exemplary network of 51 inhomogeneous receivers, including four receiver types with various antennas and spanning about 800 km in both longitudinal and latitudinal directions, we found that 92.4 % of all fractional parts of GLONASS wide-lane ambiguities agree well within ± 0.15 cycles with a standard deviation of 0.09 cycles if global ionosphere maps (GIMs) are introduced, compared to only 51.7 % within ± 0.15 cycles and a larger standard deviation of 0.22 cycles otherwise. Hourly static GLONASS PPP-AR at 40 test stations can reach position estimates of about 1 and 2 cm in RMS from ground truth for the horizontal and vertical components, respectively, which is comparable to hourly GPS PPP-AR. Integrated GLONASS and GPS PPP-AR can further achieve an RMS of about 0.5 cm in horizontal and 1-2 cm in vertical components. We stress that the performance of GLONASS PPP-AR across inhomogeneous receivers depends on the accuracy of ionosphere products. GIMs have a modest accuracy of only 2-8 TECU (Total Electron Content Unit) in vertical

  17. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the need for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
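
The bootstrap error assessment mentioned above can be illustrated generically: resample the per-tile estimates with replacement, recompute the statistic each time, and take the spread of the replicates as the standard error. The defocus numbers below are synthetic placeholders, not CTER output:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical per-tile defocus estimates (in micrometres) from one micrograph.
defocus = rng.normal(2.4, 0.15, 64)

# Bootstrap: resample tiles with replacement, re-estimate the mean,
# and take the standard deviation across replicates as the standard error.
boot = np.array([rng.choice(defocus, size=defocus.size, replace=True).mean()
                 for _ in range(2000)])
estimate, std_err = defocus.mean(), boot.std(ddof=1)
print(f"defocus = {estimate:.3f} +/- {std_err:.3f} um")
```

For the sample mean the bootstrap standard error closely matches the analytic s/sqrt(n); its value is that the same recipe works for estimators (like fitted astigmatism angles) with no closed-form error formula.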

  19. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen; Ourselin, Sébastien; Ersbøll, Bjarne Kjær

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01, 0.02] from the true mean, with a standard deviation...
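The Gauss-Newton idea named in this record can be sketched on a scalar problem (a one-dimensional exponential model rather than 3D distance maps; all data and names here are illustrative): linearize the residuals, solve the normal equations for a parameter update, and iterate.

```python
import math

def gauss_newton_exp(xs, ys, a, b, iters=30):
    """Gauss-Newton fit of y = a*exp(b*x): linearize the residuals and
    solve the 2x2 normal equations (J^T J) d = -J^T r at each step."""
    for _ in range(iters):
        J, r = [], []
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            J.append((e, a * x * e))   # partial derivatives w.r.t. a, b
            r.append(a * e - y)        # residual
        s11 = sum(j1 * j1 for j1, _ in J)
        s12 = sum(j1 * j2 for j1, j2 in J)
        s22 = sum(j2 * j2 for _, j2 in J)
        g1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        g2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = s11 * s22 - s12 * s12
        a += (-g1 * s22 + g2 * s12) / det
        b += (-g2 * s11 + g1 * s12) / det
    return a, b

# Noise-free synthetic data with known parameters a=2, b=0.7
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]
a_hat, b_hat = gauss_newton_exp(xs, ys, a=1.8, b=0.6)
```

In the paper's setting the residuals come from distance maps rather than an explicit model, but the update structure is the same.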

  20. Accelerated gravitational-wave parameter estimation with reduced order modeling

    CERN Document Server

    Canizares, Priscilla; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2014-01-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current parameter estimation approaches for such scenarios can lead to computationally intractable problems in practice. Therefore, there is a pressing need for new, fast and accurate Bayesian inference techniques. In this letter we demonstrate that a reduced order modeling approach enables rapid parameter estimation studies. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of non-spinning binary neutron star inspirals can be sped up by a factor of 30 for the early advanced detectors' configurations. This speed-up will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which would otherwise take months to complete. Although thes...

  1. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    S C Raisinghani; A K Ghosh

    2000-04-01

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic variables or their time derivatives, is proposed. Furthermore, it is shown that such a neural model can be used to extract equivalent stability and control derivatives of a flexible aircraft. Results are presented for aircraft with different levels of flexibility to demonstrate the utility of the neural approach for both modelling and estimation of parameters.

  2. Estimates of Genetic Parameters for Racing Times of Thoroughbred Horses

    OpenAIRE

    EKİZ, Bülent; KOÇAK, Ömür

    2007-01-01

    The aim of this study was to estimate the genetic parameters for racing times, which are needed for a selection program of Thoroughbred horses in Turkey. The racing records used in the study were obtained from the Turkish Jockey Club. The trait used in the study was racing time for racing distances of 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, and 2400 m. The data from each racing distance were analyzed separately. Genetic parameters were estimated by REML procedure usi...

  3. On Modal Parameter Estimates from Ambient Vibration Tests

    DEFF Research Database (Denmark)

    Agneni, A.; Brincker, Rune; Coppotelli, B.

    2004-01-01

    Modal parameter estimates from ambient vibration testing are turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately excited by human activities, wind, gusts, etc. In this paper, the comparison between two different vibration testing techniques is presented. The first approach takes advantage of the Frequency Domain Decomposition, FDD, of the response cross power spectral densities to estimate both the natural ... parameters of two simple structures (a beam and a plate), excited by an acoustical random signal.
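A single-channel caricature of the peak-picking behind such frequency-domain methods can be sketched as follows. FDD proper works on the singular values of the full cross-spectral matrix; this sketch merely locates the spectral peak of one response via a naive periodogram, with synthetic data standing in for a measured response:

```python
import math

def psd_peak_frequency(signal, fs):
    """Peak of a naive periodogram (DFT magnitude squared) -- a crude
    single-channel stand-in for frequency-domain peak picking."""
    n = len(signal)
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n   # convert bin index to Hz

fs = 100.0
t = [i / fs for i in range(256)]
resp = [math.sin(2 * math.pi * 12.5 * ti) for ti in t]  # one 12.5 Hz "mode"
f_est = psd_peak_frequency(resp, fs)
```

In the multi-channel FDD case the periodogram is replaced by the first singular value of the cross-spectral density matrix at each frequency, which also yields mode-shape estimates.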

  4. Estimation of the elastic Earth parameters from the SLR technique

    Science.gov (United States)

    Rutkowska, Milena

    The global elastic parameters (Love and Shida numbers) associated with the tide variations for satellites and stations are estimated from Satellite Laser Ranging (SLR) data. The study is based on satellite observations taken by the global network of ground stations during the period from January 1, 2005 until January 1, 2007, for monthly orbital arcs of the Lageos 1 satellite. The observation equations contain unknowns for the orbital arcs, some constants, and the elastic Earth parameters which describe the tide variations. The adjusted values are discussed and compared with geophysical estimates of the Love numbers. All computations were performed employing the NASA software GEODYN II (Eddy et al. 1990).

  5. Usefulness of single nucleotide polymorphism data for estimating population parameters.

    OpenAIRE

    Kuhner, M K; Beerli, P; Yamato, J; Felsenstein, J

    2000-01-01

    Single nucleotide polymorphism (SNP) data can be used for parameter estimation via maximum likelihood methods as long as the way in which the SNPs were determined is known, so that an appropriate likelihood formula can be constructed. We present such likelihoods for several sampling methods. As a test of these approaches, we consider use of SNPs to estimate the parameter Θ = 4Neμ (the scaled product of effective population size and per-site mutation rate), which is related to the br...

  6. Parameter Estimation in Stochastic Differential Equations; An Overview

    DEFF Research Database (Denmark)

    Nielsen, Jan Nygaard; Madsen, Henrik; Young, P. C.

    2000-01-01

    This paper presents an overview of the progress of research on parameter estimation methods for stochastic differential equations (mostly in the sense of Ito calculus) over the period 1981-1999. These are considered both without measurement noise and with measurement noise, where the discretely observed stochastic differential equations are embedded in a continuous-discrete time state space model. Every attempt has been made to include results from other scientific disciplines. Maximum likelihood estimation of parameters in nonlinear stochastic differential equations is in general not possible...
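For intuition, one of the simplest schemes in this family can be sketched: discretize the SDE (Euler-Maruyama) and estimate the drift parameters of an Ornstein-Uhlenbeck process by linear regression of the increments on the current state. This is an illustrative toy, not a method the overview endorses, and the parameter values are invented:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=1):
    """Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    return xs

def estimate_ou_drift(xs, dt):
    """Least squares on the discretized drift:
    (X[k+1]-X[k])/dt ~ theta*mu - theta*X[k],
    i.e. an ordinary linear regression of the increments on the state."""
    x = xs[:-1]
    y = [(b - a) / dt for a, b in zip(xs[:-1], xs[1:])]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    theta = -slope
    mu = my / theta + mx
    return theta, mu

path = simulate_ou(theta=2.0, mu=1.0, sigma=0.3, x0=0.0, dt=0.01, n=20000)
theta_hat, mu_hat = estimate_ou_drift(path, dt=0.01)
```

The estimator is consistent only as dt shrinks and the observation window grows, which is exactly the kind of discretization bias the surveyed literature addresses.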

  7. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are known as general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative procedures that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms in estimating parameters in liquid chromatography is investigated.
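A toy real-coded genetic algorithm shows the mechanics (selection, crossover, mutation) on a one-parameter inverse problem. This is not the authors' implementation, and the single-exponential "measurement" model is invented for illustration:

```python
import math
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, seed=3):
    """Minimal real-coded genetic algorithm: keep the fitter half as elites,
    refill the population with blend crossover of elites plus Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=f)[: pop_size // 2]     # selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()
            child = w * p1 + (1 - w) * p2               # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))     # mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=f)

# Toy inverse problem: recover the decay constant k in y = exp(-k*t)
t_obs = [0.5, 1.0, 2.0, 4.0]
y_obs = [math.exp(-0.8 * t) for t in t_obs]   # synthetic data, true k = 0.8
sse = lambda k: sum((math.exp(-k * t) - y) ** 2 for t, y in zip(t_obs, y_obs))
k_hat = genetic_minimize(sse, (0.0, 5.0))
```

In the chromatography setting the objective would instead compare simulated and measured elution profiles, but the search loop is unchanged.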

  8. Estimation of effective hydrogeological parameters in heterogeneous and anisotropic aquifers

    Science.gov (United States)

    Lin, Hsien-Tsung; Tan, Yih-Chi; Chen, Chu-Hui; Yu, Hwa-Lung; Wu, Shih-Ching; Ke, Kai-Yuan

    2010-07-01

    Summary: Obtaining reasonable hydrological input parameters is a key challenge in groundwater modeling. Analysis of temporal evolution during pump-induced drawdown is one common approach used to estimate the effective transmissivity and storage coefficients in a heterogeneous aquifer. In this study, we propose a Modified Tabu search Method (MTM), an improvement drawn from an alliance between the Tabu Search (TS) and the Adjoint State Method (ASM) developed by Tan et al. (2008). The latter is employed to estimate effective parameters for anisotropic, heterogeneous aquifers. MTM is validated by several numerical pumping tests. Comparisons are made to other well-known techniques, such as the type-curve method (TCM) and the straight-line method (SLM), to provide insight into the challenge of determining the most effective parameter for an anisotropic, heterogeneous aquifer. The results reveal that MTM can efficiently obtain the best representative and effective aquifer parameters in terms of the least mean square errors of the drawdown estimations. The use of MTM may involve less artificial errors than occur with TCM and SLM, and lead to better solutions. Therefore, effective transmissivity is more likely to be comprised of the geometric mean of all transmissivities within the cone of depression based on a precise estimation of MTM. Further investigation into the applicability of MTM shows that a higher level of heterogeneity in an aquifer can induce an uncertainty in estimations, while the changes in correlation length will affect the accuracy of MTM only once the degree of heterogeneity has also risen.

  9. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only the short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
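A much reduced illustration of the single-diode idea: with series and shunt resistance neglected (an ideal diode only) and the ideality factor assumed known, the photocurrent and saturation current follow directly from the short-circuit and open-circuit points. The full method in this report fits many more parameters to full I-V curves; the numbers below are invented:

```python
import math

def ideal_diode_params(isc, voc, n=1.2, t_cell=298.15):
    """Back out Iph and I0 of the ideal single-diode model
    I = Iph - I0*(exp(V/(n*Vt)) - 1) from the short- and open-circuit
    points, assuming the ideality factor n is given."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t_cell / q                         # thermal voltage, ~25.7 mV at 25 C
    iph = isc                                   # at V = 0 the diode term vanishes
    i0 = isc / (math.exp(voc / (n * vt)) - 1)   # at I = 0, Iph = I0*(exp(...)-1)
    return iph, i0

# Hypothetical module measurements: Isc = 8.2 A, Voc = 0.62 V per cell
iph, i0 = ideal_diode_params(isc=8.2, voc=0.62)
```

Adding series and shunt resistance makes the model implicit in I, which is why practical fits over many I-V curves need the iterative estimation the report describes.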

  10. Estimation of diffusion parameters for discretely observed diffusion processes

    OpenAIRE

    Sørensen, Helle

    2002-01-01

    We study the estimation of diffusion parameters for one-dimensional, ergodic diffusion processes that are discretely observed. We discuss a method based on a functional relationship between the drift function, the diffusion function and the invariant density, and use empirical process theory to show that the estimator is √n-consistent and in certain cases weakly convergent. The Chan-Karolyi-Longstaff-Sanders (CKLS) model is used as an example and a numerical example i...

  11. Estimation of dynamic stability parameters from drop model flight tests

    Science.gov (United States)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    The overall remotely piloted drop model operation is discussed, including descriptions of the instrumentation, launch and recovery operations, the piloting concept, and parameter identification methods. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.

  12. Multi-criteria parameter estimation for the Unified Land Model

    OpenAIRE

    B. Livneh; Lettenmaier, D. P.

    2012-01-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water...

  13. Multi-criteria parameter estimation for the unified land model

    OpenAIRE

    B. Livneh; Lettenmaier, D. P.

    2012-01-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the Continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operation Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial wa...

  14. Comparison of Jump-Diffusion Parameters Using Passage Times Estimation

    Directory of Open Access Journals (Sweden)

    K. Khaldi

    2014-01-01

    The main purposes of this paper are two contributions: (1) it presents a new method, the first passage time generalized to all passage times (the PT method), in order to estimate the parameters of a stochastic jump-diffusion process; (2) it compares, in a time series model of the share price of gold, the empirical results of the estimation and forecasts obtained with the PT method and those obtained by the method of moments applied to the MJD model.

  15. Human ECG signal parameters estimation during controlled physical activity

    Science.gov (United States)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
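The Pan-Tompkins pipeline referenced here can be caricatured in a few lines: differentiate, square, integrate over a moving window, then threshold to count QRS complexes. The bandpass filtering stage is omitted and a synthetic spike train stands in for a real ECG; all names and numbers are illustrative:

```python
def qrs_count(ecg, fs, window=0.15, thresh_frac=0.5):
    """Core Pan-Tompkins stages in miniature: differentiate, square,
    moving-window integrate, and threshold-count upward crossings."""
    diff = [b - a for a, b in zip(ecg[:-1], ecg[1:])]          # derivative
    sq = [d * d for d in diff]                                 # squaring
    w = max(1, int(window * fs))
    integ = [sum(sq[max(0, i - w): i + 1]) / w for i in range(len(sq))]
    th = thresh_frac * max(integ)                              # adaptive-ish threshold
    beats, above = 0, False
    for v in integ:
        if v >= th and not above:                              # rising crossing
            beats += 1
        above = v >= th
    return beats

fs = 250                      # Hz, a typical ECG sampling rate
ecg = [0.0] * (fs * 5)        # 5 seconds of flat "signal"
for b in range(5):
    ecg[b * fs + 50] = 1.0    # crude R peaks, one per second
n_beats = qrs_count(ecg, fs)
```

On real exercise recordings the missing bandpass stage and an adaptive, history-dependent threshold are what keep the detector robust to the muscle artifacts the abstract mentions.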

  16. Parameter estimation in systems biology models using spline approximation

    OpenAIRE

    Yeung Lam F; Zhan Choujun

    2011-01-01

    Background: Mathematical models for revealing the dynamics and interaction properties of biological systems play an important role in computational systems biology. The inference of model parameter values from time-course data can be considered a "reverse engineering" process and is still one of the most challenging tasks. Many parameter estimation methods have been developed, but none of these methods is effective for all cases, nor can any single approach dominate all others. Instead, vari...

  17. Parameter Estimation of Superimposed Damped Sinusoids Using Exponential Windows

    OpenAIRE

    Al-Radhawi, Muhammad Ali; Abed-Meraim, Karim

    2014-01-01

    This paper presents a preprocessing technique based on exponential windowing (EW) for parameter estimation of superimposed exponentially damped sinusoids. It is shown that the EW technique significantly improves the robustness to noise over two other commonly used preprocessing techniques: subspace decomposition and higher order statistics. An ad-hoc but efficient approach for the EW parameter selection is provided and shown to provide close to CRB performance.

  18. Biases in the determination of dynamical parameters of star clusters: today and in the Gaia era

    CERN Document Server

    Sollima, A; Zocchi, A; Balbinot, E; Gieles, M; Henault-Brunet, V; Varri, A L

    2015-01-01

    The structural and dynamical properties of star clusters are generally derived by means of the comparison between steady-state analytic models and the available observables. With the aim of studying the biases of this approach, we fitted different analytic models to simulated observations obtained from a suite of direct N-body simulations of star clusters in different stages of their evolution and under different levels of tidal stress to derive mass, mass function and degree of anisotropy. We find that masses can be under- or over-estimated by up to 50% depending on the degree of relaxation reached by the cluster, the available range of observed masses and distances of radial-velocity measurements from the cluster center, and the strength of the tidal field. The mass function slope appears to be better constrainable and less sensitive to model inadequacies unless strongly dynamically evolved clusters and a non-optimal location of the measured luminosity function are considered. The degree and the characteristics of the ...

  19. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
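The MCMC machinery can be sketched in miniature with a random-walk Metropolis sampler on a toy one-parameter posterior. The actual study samples a multi-parameter hydrological model with structured error likelihoods; the "observations" and noise level below are invented:

```python
import math
import random

def metropolis(loglike, x0, n_samples, step=0.2, seed=7):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio), record the chain."""
    rng = random.Random(seed)
    x, ll = x0, loglike(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0, step)
        llp = loglike(prop)
        if math.log(rng.random()) < llp - ll:   # accept/reject
            x, ll = prop, llp
        chain.append(x)
    return chain

# Toy posterior: flat prior, Gaussian likelihood for invented "runoff errors"
obs = [1.1, 0.9, 1.3, 1.0, 0.7, 1.2]
ll = lambda mu: -0.5 * sum((o - mu) ** 2 for o in obs) / 0.2 ** 2
chain = metropolis(ll, x0=0.0, n_samples=5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

The posterior mean after burn-in should sit near the data mean; in the paper the same recursion explores a joint space of hydrological and error-model parameters.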

  20. Bias-corrected Pearson estimating functions for Taylor’s power law applied to benthic macrofauna data

    DEFF Research Database (Denmark)

    Jørgensen, Bent; Demétrio, Clarice G.B.; Kristensen, Erik;

    2011-01-01

    Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass...
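The naive log-log regression that the bias-corrected estimating functions improve upon can be sketched directly. The groups below are synthetic and noise-free, constructed so that the variance equals the mean squared (a = 1, b = 2); with real sparse counts this estimator is where the bias enters:

```python
import math

def taylor_loglog(groups):
    """Ordinary log-log regression of empirical variances on empirical means:
    log s^2 = log a + b log m (the biased baseline for sparse data)."""
    pts = []
    for g in groups:
        n = len(g)
        m = sum(g) / n
        v = sum((x - m) ** 2 for x in g) / (n - 1)
        if m > 0 and v > 0:                      # sparse groups silently drop out
            pts.append((math.log(m), math.log(v)))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    b = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    return math.exp(my - b * mx), b              # (a, b)

# Two-point "sites" built so that s^2 = m^2 exactly
groups = [[m - m / math.sqrt(2), m + m / math.sqrt(2)] for m in (1.0, 2.0, 4.0, 8.0)]
a_hat, b_hat = taylor_loglog(groups)
```

Note the silent dropping of zero-variance or zero-mean groups: with sparse abundance data that exclusion is one source of the bias the paper's Pearson estimating functions correct.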

  1. Optimal measurement locations for parameter estimation of nonlinear distributed parameter systems

    Directory of Open Access Journals (Sweden)

    J. E. Alaña

    2010-12-01

    A sensor placement approach for accurately estimating unknown parameters of a distributed parameter system is discussed. The idea is to convert the sensor location problem into a classical experimental design problem. The technique consists of analysing the extreme values of the sensitivity coefficients derived from the system and their corresponding spatial positions. This information is used to formulate an efficient computational optimum experimental design on discrete domains. The scheme studied is verified by a numerical example regarding the chemical reaction in a tubular reactor for two possible scenarios: stable and unstable operating conditions. The resulting approach is easy to implement, and good estimates of the system parameters are obtained. This study shows that the measurement location plays an essential role in the parameter estimation procedure.

  2. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to implement parametric estimation directly for the model and compare these two methods of parametric estimation by random simulation. Our results show that MM gives a better data fit, more reasonable parameter estimates, and smaller prediction error compared with TSM.

  3. Simultaneous estimation of ionospheric delays and receiver differential code bias by a single GPS station

    International Nuclear Information System (INIS)

    In this paper, we propose an efficient Kalman filter algorithm for simultaneous estimation of absolute ionospheric delay and receiver differential code bias. It is well known that the two quantities are mixed in global positioning system (GPS) measurements, causing a rank-deficiency problem. The proposed method requires only the broadcast navigation message and basic measurements provided by a single dual-frequency GPS receiver. Thus, it would be convenient to monitor local ionospheric activities autonomously in real time with minimum hardware. An observability analysis was performed to explain in detail why the simultaneous estimation is possible in spite of the rank deficiency occurring in epoch-by-epoch measurements. Experimental results based on real GPS measurements demonstrate the feasibility of the proposed method. (paper)
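The Kalman recursion at the heart of such estimators can be shown in its simplest scalar form: tracking a (nearly) constant quantity through noisy measurements. This is not the paper's filter, which carries a joint state of delays and bias; the process/measurement noise values and data below are invented:

```python
def kalman_constant(measurements, q=1e-6, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying state modelled as a
    random walk: predict (inflate variance by q), then update with the
    innovation weighted by the Kalman gain."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict: state is a random walk
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with innovation z - x
        p *= 1 - k             # posterior variance
        estimates.append(x)
    return estimates

# Hypothetical noisy observations of a bias-like constant near 2.0
zs = [2.1, 1.8, 2.3, 2.0, 1.9, 2.2, 2.05, 1.95]
est = kalman_constant(zs)
```

With q near zero the filter behaves like a recursive average, which is why a receiver bias (nearly constant) and a fast-varying delay can be separated once the joint system is observable.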

  4. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. © 2012 Zhang et al.; licensee BioMed Central Ltd.

  5. Bias in regression coefficient estimates when assumptions for handling missing data are violated: a simulation study

    Directory of Open Access Journals (Sweden)

    Sander MJ van Kuijk

    2016-03-01

    Background: The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions about missing data mechanisms are violated. Methods: The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) under different missing data mechanisms (missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR)). The study focused on the point estimation of regression coefficients and standard errors. Results: When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in the case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion: The authors demonstrated that MI was only superior to CC analysis in the case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions made about the mechanism that caused the missing data.
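The qualitative pattern (complete-case analysis is fine under MCAR but biased when missingness depends on the outcome) can be reproduced in a few lines. This toy uses a single-predictor linear model with invented numbers, not the study's scenarios:

```python
import random

def ols_slope(xs, ys):
    """Ordinary least squares slope for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

def complete_case_slope(mechanism, n=4000, seed=11):
    """Complete-case OLS slope of y = 1 + 2x + e under a chosen missingness
    mechanism; 'outcome' drops cases with large y, so missingness depends on Y."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = 1 + 2 * x + rng.gauss(0, 1)
        keep = rng.random() < 0.5 if mechanism == "MCAR" else y < 1.0
        if keep:
            xs.append(x)
            ys.append(y)
    return ols_slope(xs, ys)

slope_mcar = complete_case_slope("MCAR")     # close to the true slope of 2
slope_dep = complete_case_slope("outcome")   # attenuated toward zero
```

Dropping half the cases at random only widens the confidence interval, while dropping cases based on the outcome truncates the y-distribution and biases the slope, which is the distinction the study quantifies.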

  6. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  7. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however, in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transitions from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context, including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effects. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates...

  8. Online vegetation parameter estimation using passive microwave remote sensing observations

    Science.gov (United States)

    In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...

  9. Estimation of Parameters of Gravitational Waves from Pulsars

    OpenAIRE

    Krolak, A

    1997-01-01

    The problem of search for nearly periodic gravitational wave sources in the data from laser interferometric detectors is discussed using a simple model of the signal. Accuracies of estimation of the parameters and computational requirements to do the search are assessed.

  10. Stability of Parameter Estimates in the Split Population Exponential Distribution.

    Science.gov (United States)

    Miley, Alan D.

    1978-01-01

    The split-population exponential design suggested by Maltz and McCleary to predict parolee recidivism (TM 502 998) was applied to discharged psychiatric inpatients. Parameter estimates changed systematically as greater and greater observation time was allowed in the computation, thus limiting extrapolability. (Author/GDC)

  11. A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri; Rao, Bhaskar D.

    This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...

  12. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of recipients without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model; a hybrid algorithm was used for the minimization. The paper also describes a filter system used for filtering the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  13. Parameter Estimates in Differential Equation Models for Population Growth

    Science.gov (United States)

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

  14. A parameter estimation framework for patient-specific hemodynamic computations

    Science.gov (United States)

    Itu, Lucian; Sharma, Puneet; Passerini, Tiziano; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2015-01-01

    We propose a fully automated parameter estimation framework for performing patient-specific hemodynamic computations in arterial models. To determine the personalized parameter values of the windkessel models, which are used as part of the geometrical multiscale circulation model, a parameter estimation problem is formulated. Clinical measurements of pressure and/or flow-rate are imposed as constraints to formulate a nonlinear system of equations, whose fixed-point solution is sought. A key feature of the proposed method is a warm start to the optimization procedure: a better initial solution for the nonlinear system of equations reduces the number of iterations needed to calibrate the geometrical multiscale models. To this end, the initial solution, computed with a lumped parameter model, is adapted before solving the parameter estimation problem for the geometrical multiscale circulation model: the resistance and the compliance of the circulation model are estimated and compensated. The proposed framework is evaluated on a patient-specific aortic model, a full-body arterial model, and multiple idealized anatomical models representing different arterial segments. For each case, it leads to the best performance in terms of the number of iterations required for the computational model to come into close agreement with the clinical measurements.
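    The fixed-point calibration idea can be illustrated on a toy two-element windkessel in which the total resistance is tuned until the model's mean pressure matches a measured target. The mean-pressure relation and the multiplicative update below are illustrative assumptions, not the authors' framework.

```python
# Toy calibration: find total resistance R such that the model's mean arterial
# pressure matches a measured target.  Illustrative mean-pressure model:
#   P_model(R) = Q_mean * R + P_venous
q_mean, p_venous, p_target = 90.0, 5.0, 93.0   # ml/s, mmHg, mmHg

r = 0.5                                        # initial resistance guess (mmHg*s/ml)
for _ in range(50):
    p_model = q_mean * r + p_venous
    if abs(p_model - p_target) < 1e-6:
        break
    r *= p_target / p_model                    # fixed-point update toward the target
```

The update is a contraction here (its derivative at the fixed point is P_venous/P_target < 1), so the iteration converges quickly; in the paper's setting the "model evaluation" inside the loop is a full multiscale simulation, which is why a warm start matters.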

  15. Parameter estimation of an air-bearing suspended test table

    Science.gov (United States)

    Fu, Zhenxian; Lin, Yurong; Liu, Yang; Chen, Xinglin; Chen, Fang

    2015-02-01

    A parameter estimation approach is proposed for determining the parameters of a 3-axis air-bearing suspended test table. The table is to provide a balanced and frictionless environment for spacecraft ground tests. To balance the suspension, the mechanical parameters of the table, including its angular inertias and the deviation of its centroid from its rotating center, have to be determined first. Sliding masses on the table can then be adjusted by stepper motors to relocate the centroid of the table to its rotating center. Using the angular momentum theorem and the Coriolis theorem, dynamic equations are derived describing the rotation of the table under the influence of the gravity imbalance torque and the actuating torques. To generate the actuating torques, the use of momentum wheels is proposed, whose virtue is that no active control of the momentum wheels is required; they merely have to spin at constant rates. This avoids the singularity problem and the difficulty of precisely adjusting the output torques, issues associated with control moment gyros. The gyroscopic torques generated by the momentum wheels, as they are forced by the table to precess, are sufficient to actuate the table for parameter estimation. Least-squares estimation is then employed to calculate the desired parameters. The effectiveness of the method is validated by simulation.
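    The final least-squares step can be sketched generically: write the dynamic equations in a linear-in-parameters form A(motion)·θ = τ and solve with `numpy.linalg.lstsq`. The regressor below (one acceleration column, one gravity-lever column) is a synthetic stand-in for the table's actual dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown mechanical parameters: [angular inertia J, centroid-offset term m*r]
theta_true = np.array([4.2, 0.15])

# Synthetic regressor: each row couples an angular acceleration and a gravity
# lever term (an illustrative stand-in for the table's dynamic equations).
n = 200
alpha = rng.uniform(-2, 2, n)                       # angular accelerations (rad/s^2)
g_term = 9.81 * np.sin(rng.uniform(-0.3, 0.3, n))   # gravity lever terms
A = np.column_stack([alpha, g_term])
tau = A @ theta_true + rng.normal(0, 0.01, n)       # "measured" torques with noise

theta_hat, *_ = np.linalg.lstsq(A, tau, rcond=None)
print(theta_hat)   # close to theta_true
```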

  16. A parameter identifiability and estimation study in Yesilirmak River.

    Science.gov (United States)

    Berber, R; Yuceer, M; Karadurmus, E

    2009-01-01

    Water quality models have a relatively large number of parameters, which need to be estimated against observed data, a non-trivial task associated with substantial difficulties. This work involves a systematic model calibration and validation study for river water quality. The model considered was composed of dynamic mass balances for eleven pollution constituents, stemming from the QUAL2E water quality model, treating a river segment as a series of continuous stirred-tank reactors (CSTRs). Parameter identifiability was analyzed from the perspective of a sensitivity measure and the collinearity index, which indicated that 8 parameters would fall within the identifiability range. The model parameters were then estimated by an integration-based optimization algorithm coupled with sequential quadratic programming. Dynamic field data consisting of major pollutant concentrations were collected from sampling stations along the Yesilirmak River around the city of Amasya in Turkey, and compared with model predictions. The calibrated model responses were in good agreement with the observed river water quality data, indicating that the suggested procedure provides an effective means for reliable estimation of model parameters and dynamic simulation of river streams. PMID:19214006
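    The identifiability screening mentioned above can be sketched with a collinearity index of the common form γ = 1/√(λ_min of S̃ᵀS̃), computed on a column-normalized sensitivity matrix; a large γ flags a parameter subset whose sensitivities are nearly linearly dependent. The sensitivity matrices below are synthetic, not those of the QUAL2E-based model.

```python
import numpy as np

def collinearity_index(S):
    """gamma = 1/sqrt(min eigenvalue of S_norm^T S_norm), columns scaled to unit length."""
    S_norm = S / np.linalg.norm(S, axis=0)
    eigvals = np.linalg.eigvalsh(S_norm.T @ S_norm)
    return 1.0 / np.sqrt(eigvals.min())

rng = np.random.default_rng(2)
S_indep = rng.normal(size=(100, 3))                        # nearly independent sensitivities
S_dep = S_indep.copy()
S_dep[:, 2] = S_dep[:, 0] + 0.01 * rng.normal(size=100)    # near-collinear parameter pair

g1 = collinearity_index(S_indep)
g2 = collinearity_index(S_dep)
print(g1, g2)   # g2 >> g1: the dependent set is poorly identifiable
```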

  17. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea

    KAUST Repository

    Sawlan, Zaid A

    2012-12-01

    Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. The model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.

  18. PARAMETER ESTIMATION METHODOLOGY FOR NONLINEAR SYSTEMS: APPLICATION TO INDUCTION MOTOR

    Institute of Scientific and Technical Information of China (English)

    G.KENNE; F.FLORET; H.NKWAWO; F.LAMNABHI-LAGARRIGUE

    2005-01-01

    This paper deals with on-line state and parameter estimation of a reasonably large class of nonlinear continuous-time systems using a step-by-step sliding mode observer approach. The method proposed can also be used for adaptation to parameters that vary with time. The other interesting feature of the method is that it is easily implementable in real-time. The efficiency of this technique is demonstrated via the on-line estimation of the electrical parameters and rotor flux of an induction motor. This application is based on the standard model of the induction motor expressed in rotor coordinates with the stator current and voltage as well as the rotor speed assumed to be measurable. Real-time implementation results are then reported and the ability of the algorithm to rapidly estimate the motor parameters is demonstrated. These results show the robustness of this approach with respect to measurement noise, discretization effects, parameter uncertainties and modeling inaccuracies. Comparisons between the results obtained and those of the classical recursive least square algorithm are also presented. The real-time implementation results show that the proposed algorithm gives better performance than the recursive least square method in terms of the convergence rate and the robustness with respect to measurement noise.

  19. Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images

    Science.gov (United States)

    Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.

    2016-04-01

    Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all-weather conditions. Thus, SAR images are suitable for use in rice growth regions that are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of estimating rice biophysical parameters using multitemporal RADARSAT-2 images, and to develop the estimation models. Three RADARSAT-2 images were acquired during the rice critical growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ0) or their ratios and the rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the VV and VH σ0 ratio (σ0VV/σ0VH) throughout all growth stages. Regression models were then developed between the biophysical parameters and σ0VV/σ0VH. The results suggest that RADARSAT-2 data have great potential for rice biophysical parameter estimation and timely rice growth monitoring.

  20. Modal parameters estimation using ant colony optimisation algorithm

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.

  1. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Science.gov (United States)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
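    The core estimate can be sketched as follows: for exponentially distributed waiting times, the maximum likelihood rate at temperature T is n/Σt, and the Arrhenius parameters then follow from a linear fit of ln k against 1/T. The rates, temperatures and sample sizes below are synthetic, not taken from the paper's systems.

```python
import numpy as np

rng = np.random.default_rng(3)
R = 8.314                        # gas constant, J/(mol K)
A_true, Ea_true = 1e13, 80e3     # prefactor (1/s) and activation energy (J/mol), illustrative

temps = np.array([800.0, 900.0, 1000.0, 1100.0])
log_k_hat = []
for T in temps:
    k = A_true * np.exp(-Ea_true / (R * T))
    waits = rng.exponential(1.0 / k, size=5000)   # simulated waiting times at T
    k_hat = len(waits) / waits.sum()              # exponential MLE: n / sum(t)
    log_k_hat.append(np.log(k_hat))

# Arrhenius fit: ln k = ln A - (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / temps, log_k_hat, 1)
Ea_est, A_est = -slope * R, np.exp(intercept)
print(Ea_est, A_est)
```

With the fitted (A, Ea) in hand, rates at low temperatures, where direct simulation would see almost no transitions, follow from the Arrhenius expression.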

  2. Power Network Parameter Estimation Method Based on Data Mining Technology

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian

    2008-01-01

    Parameter values, which in reality change with circumstances, weather, load level, etc., strongly affect the results of state estimation. A new parameter estimation method based on data mining technology was proposed. A clustering method was used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing techniques were applied to treat outliers, missing data and noisy data in the samples of the classified groups. The measurement data belonging to each class were introduced into a linear regression equation, and the regression coefficients and actual parameters were obtained by the least squares method. A practical system demonstrates the high accuracy, reliability and strong practicability of the proposed method.
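    The classify-then-regress idea can be sketched with a toy example: historical samples are grouped by load level (a simple stand-in for the paper's clustering step), and a per-group line resistance is then obtained by least squares. All data and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic SCADA-like history: per-sample current I and voltage drop dV = R*I + noise,
# where the effective line resistance differs between low-load and high-load regimes.
n = 300
load = rng.uniform(0, 1, n)
R_low, R_high = 0.8, 1.1                        # illustrative per-regime parameters
R_sample = np.where(load < 0.5, R_low, R_high)
I = 50 + 200 * load
dV = R_sample * I + rng.normal(0, 1.0, n)

# "Clustering" reduced to thresholding the load level; real SCADA history would be
# clustered on weather, season, load level, etc.
estimates = {}
for name, mask in [("low", load < 0.5), ("high", load >= 0.5)]:
    # least squares through the origin: R = sum(I*dV) / sum(I^2)
    estimates[name] = (I[mask] * dV[mask]).sum() / (I[mask] ** 2).sum()

print(estimates)   # per-regime resistance estimates near R_low and R_high
```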

  3. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    Science.gov (United States)

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060
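    A minimal sketch of a TRiPS-style estimate, under the simplifying assumption of a single Poisson sampling rate shared by all species: species with zero finds go unobserved, the rate is recovered from the zero-truncated mean of the observed counts, and observed richness is then inflated by the detection probability. The numbers are synthetic, not the dinosaur data.

```python
import numpy as np

rng = np.random.default_rng(5)

true_richness, lam = 500, 1.2          # species count and per-species Poisson sampling rate
counts = rng.poisson(lam, true_richness)
observed = counts[counts > 0]          # species never sampled leave no fossil occurrences
n_obs, mean_obs = len(observed), observed.mean()

# Zero-truncated Poisson MLE: solve lam/(1 - exp(-lam)) = mean_obs by fixed-point iteration.
lam_hat = mean_obs
for _ in range(200):
    lam_hat = mean_obs * (1.0 - np.exp(-lam_hat))

# Inflate observed richness by the estimated detection probability 1 - exp(-lam).
richness_hat = n_obs / (1.0 - np.exp(-lam_hat))
print(lam_hat, richness_hat)   # close to (1.2, 500)
```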

  4. Being surveyed can change later behavior and related parameter estimates.

    Science.gov (United States)

    Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Karlan, Dean S; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit

    2011-02-01

    Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314

  5. A possible bias on the estimate of Lbol/Ledd in AGN as a function of luminosity and redshift

    CERN Document Server

    Lamastra, A; Perola, G C; Lamastra, Alessandra; Matt, Giorgio

    2006-01-01

    The BH mass (and the related Eddington ratio) in broad line AGN is usually evaluated by combining estimates (often indirect) of the BLR radius and of the FWHM of the broad lines, under the assumption that the BLR clouds are in Keplerian motion around the BH. Such an evaluation depends on the geometry of the BLR. There are two major options for the BLR configuration: spherically symmetric or ``flattened''. In the latter case the inclination to the line of sight becomes a relevant parameter. This paper is devoted to evaluating the bias on the estimate of the Eddington ratio when a spherical geometry is assumed (more generally, when inclination effects are ignored), while the actual configuration is ``flattened'', as some evidence suggests. This is done as a function of luminosity and redshift, on the basis of recent results which show the existence of a correlation between the fraction of obscured AGN and these two parameters up to at least z=2.5. The assumed BLR velocity field is akin to the ``generalized thick d...

  6. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    Directory of Open Access Journals (Sweden)

    Jonathan R Karr

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve the problem of whole-cell model parameter estimation.

  7. Estimating Sampling Biases and Measurement Uncertainties of AIRS-AMSU-A Temperature and Water Vapor Observations Using MERRA Reanalysis

    Science.gov (United States)

    Hearty, Thomas J.; Savtchenko, Andrey K.; Tian, Baijun; Fetzer, Eric; Yung, Yuk L.; Theobald, Michael; Vollmer, Bruce; Fishbein, Evan; Won, Young-In

    2014-01-01

    We use MERRA (Modern Era Retrospective-Analysis for Research Applications) temperature and water vapor data to estimate the sampling biases of climatologies derived from the AIRS/AMSU-A (Atmospheric Infrared Sounder/Advanced Microwave Sounding Unit-A) suite of instruments. We separate the total sampling bias into temporal and instrumental components. The temporal component is caused by the AIRS/AMSU-A orbit and swath that are not able to sample all of time and space. The instrumental component is caused by scenes that prevent successful retrievals. The temporal sampling biases are generally smaller than the instrumental sampling biases except in regions with large diurnal variations, such as the boundary layer, where the temporal sampling biases of temperature can be +/- 2 K and water vapor can be 10% wet. The instrumental sampling biases are the main contributor to the total sampling biases and are mainly caused by clouds. They are up to 2 K cold and greater than 30% dry over mid-latitude storm tracks and tropical deep convective cloudy regions and up to 20% wet over stratus regions. However, other factors such as surface emissivity and temperature can also influence the instrumental sampling bias over deserts where the biases can be up to 1 K cold and 10% wet. Some instrumental sampling biases can vary seasonally and/or diurnally. We also estimate the combined measurement uncertainties of temperature and water vapor from AIRS/AMSU-A and MERRA by comparing similarly sampled climatologies from both data sets. The measurement differences are often larger than the sampling biases and have longitudinal variations.

  8. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    Science.gov (United States)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

    Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration or spatially constant parameters resulting from the adoption of default or a priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scale to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
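    The MPR pattern (apply a transfer function to geophysical attributes at their native resolution, then upscale the resulting parameter field to the model grid, calibrating only the transfer-function coefficients) can be sketched as follows; the transfer-function form, the harmonic-mean scaling choice and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fine-resolution soil attribute (e.g. clay fraction) on a grid 4x finer than the model grid.
clay_fine = rng.uniform(0.1, 0.5, size=(8, 8))

def transfer(clay, a, b):
    """Illustrative transfer function: map a soil attribute to a conductivity-like parameter."""
    return np.exp(a + b * clay)

def upscale_harmonic(p_fine, factor):
    """Harmonic-mean scaling of the fine-resolution parameter field to model resolution."""
    h, w = p_fine.shape
    blocks = p_fine.reshape(h // factor, factor, w // factor, factor)
    return factor * factor / (1.0 / blocks).sum(axis=(1, 3))

# Calibration would adjust the global coefficients (a, b), not each grid cell's parameter,
# which keeps the parameter field spatially coherent and transferable to ungauged areas.
a, b = -2.0, 3.0
p_model = upscale_harmonic(transfer(clay_fine, a, b), factor=4)
print(p_model.shape)   # one parameter value per model grid box
```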

  9. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik; Arvin, Erik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values. The parameters were estimated using an iterative maximum likelihood method, and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% alpha level.
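    A crude stand-in for the iterative estimation described above: simulate the coupled Monod equations with a fixed-step RK4 integrator and recover the kinetic parameters by grid-search least squares on noisy substrate samples. The actual study used maximum likelihood, and the yield coefficient is set to 1 here for simplicity.

```python
import numpy as np

def simulate(mu_max, ks, s0=10.0, x0=0.1, t_end=20.0, dt=0.02):
    """Monod growth with yield 1: dS/dt = -r, dX/dt = +r, r = mu_max*S*X/(Ks+S)."""
    n = int(t_end / dt)
    rate = lambda s_, x_: mu_max * s_ * x_ / (ks + s_)
    s, x = s0, x0
    out = np.empty(n)
    for i in range(n):
        # RK4 step (both derivatives are -r and +r, so one rate evaluation per stage)
        k1 = rate(s, x)
        k2 = rate(s - dt/2*k1, x + dt/2*k1)
        k3 = rate(s - dt/2*k2, x + dt/2*k2)
        k4 = rate(s - dt*k3, x + dt*k3)
        r = (k1 + 2*k2 + 2*k3 + k4) / 6
        s, x = s - dt*r, x + dt*r
        out[i] = s
    return out

rng = np.random.default_rng(7)
# Noisy substrate observations every 1.0 time unit, generated with mu_max=0.5, Ks=2.0.
obs = simulate(0.5, 2.0)[::50] + rng.normal(0, 0.02, 20)

# Grid-search least squares over (mu_max, Ks): a crude stand-in for iterative ML.
grid = [(m, k) for m in np.linspace(0.3, 0.7, 9) for k in np.linspace(1.0, 3.0, 9)]
best = min(grid, key=lambda p: ((simulate(*p)[::50] - obs) ** 2).sum())
print(best)   # near (0.5, 2.0)
```

Repeating this fit on several replicate data sets and comparing the residual sums of squares with and without shared parameters gives the flavor of the likelihood ratio test in the abstract.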

  10. Concurrent learning for parameter estimation using dynamic state-derivative estimators

    OpenAIRE

    Kamalapurkar, Rushikesh; Reish, Ben; Chowdhary, Girish; Dixon, Warren E.

    2015-01-01

    A concurrent learning (CL)-based parameter estimator is developed to identify the unknown parameters in a linearly parameterized uncertain control-affine nonlinear system. Unlike state-of-the-art CL techniques that assume knowledge of the state-derivative or rely on numerical smoothing, CL is implemented using a dynamic state-derivative estimator. A novel purging algorithm is introduced to discard possibly erroneous data recorded during the transient phase for concurrent learning. Since purgi...

  11. Inter-system biases estimation in multi-GNSS relative positioning with GPS and Galileo

    Science.gov (United States)

    Deprez, Cecile; Warnant, Rene

    2016-04-01

    The recent increase in the number of Global Navigation Satellite Systems (GNSS) opens new perspectives in the field of high precision positioning. In particular, the European Galileo program has experienced major progress in 2015 with the launch of 6 satellites belonging to the new Full Operational Capability (FOC) generation. Together with the ongoing GPS modernization, many more frequencies and satellites are now available. Therefore, multi-GNSS relative positioning based on GPS and Galileo overlapping frequencies should entail better accuracy and reliability in position estimation. However, the differences between satellite systems induce inter-system biases (ISBs) in the multi-GNSS observation equations. Once these biases are estimated and removed from the model, a solution involving a unique pivot satellite for the two considered constellations can be obtained. Such an approach implies that the addition of even one single Galileo satellite to the GPS-only model will strengthen it. The combined use of L1 and L5 from GPS with E1 and E5a from Galileo in zero baseline double differences (ZB DD) based on a unique pivot satellite is employed to resolve ISBs. This model removes all the satellite- and receiver-dependent error sources by differencing, and the zero baseline configuration allows the elimination of atmospheric and multipath effects. An analysis of the long-term stability of ISBs is conducted on various pairs of receivers over large time spans. The possible influence of temperature variations inside the receivers on ISB values is also investigated. Our study is based on the 5 multi-GNSS receivers (2 Septentrio PolaRx4, 1 Septentrio PolaRxS and 2 Trimble NetR9) installed on the roof of our building in Liege. The estimated ISBs are then used as corrections in the multi-GNSS observation model and the resulting accuracy of multi-GNSS positioning is compared to GPS and Galileo standalone solutions.

  12. Seafloor elastic parameters estimation based on AVO inversion

    Science.gov (United States)

    Liu, Yangting; Liu, Xuewei

    2015-12-01

    Seafloor elastic parameters play an important role in many fields as diverse as marine construction, seabed resources exploration and seafloor acoustics. In order to estimate seafloor elastic parameters, we perform AVO inversion with seafloor reflected seismic data. As a particular reflection interface, the seafloor reflector does not support S-waves and the elastic parameters change dramatically across it. Conventional approximations to the Zoeppritz equations are not applicable for the seafloor situation. In this paper, we perform AVO inversion with the exact Zoeppritz equations through an unconstrained optimization method. Our synthetic study proves that the inversion method does not show strong dependence on the initial model for both unconsolidated and semi-consolidated seabed situations. The inversion uncertainty of the elastic parameters increases with the noise level, and decreases with the incidence angle range. Finally, we perform inversion of data from the South China Sea, and obtain satisfactory results, which are in good agreement with previous research.

  13. Cosmological parameter estimation using Particle Swarm Optimization (PSO)

    CERN Document Server

    Prasad, Jayanti

    2011-01-01

    Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows, at most, linearly instead of exponentially (as in grid based approaches) with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior) from which one- and two-dimensional probability distributions, best fit (average) values of parameters and then error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the field of engineering and artificial intelligence, for cosmo...
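    A minimal PSO sketch on a two-parameter toy log-likelihood (a Gaussian bump standing in for a cosmological posterior); the swarm size and the inertia/acceleration coefficients are conventional illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_like(p):
    """Toy log-likelihood: smooth bump peaked at (0.3, 0.7)."""
    return -((p[..., 0] - 0.3) ** 2 / 0.01 + (p[..., 1] - 0.7) ** 2 / 0.04)

n_part, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

pos = rng.uniform(0, 1, (n_part, 2))           # particles in the unit-square prior
vel = np.zeros((n_part, 2))
pbest, pbest_val = pos.copy(), log_like(pos)   # personal bests
gbest = pbest[pbest_val.argmax()].copy()       # global best

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_part, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = log_like(pos)
    improved = val > pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(gbest)   # converges near the maximum (0.3, 0.7)
```

Unlike MCMC, PSO only locates the maximum; error bars require a separate step (e.g. sampling or a Fisher-matrix estimate around the best fit).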

  14. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data basis for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures that derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis of linear relationships. In nature, most relationships are nonlinear and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)

  15. Informed spectral analysis: audio signal parameter estimation using side information

    Science.gov (United States)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound gives the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and may require a larger bitrate and sacrifice compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
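As an illustration of the pure estimation side (without the coded side information), the sketch below recovers sinusoidal parameters from a noisy signal by periodogram peak-picking followed by linear least squares; all signal parameters are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy sinusoid: x(t) = a * sin(2*pi*f0*t + phi) + noise (toy values).
fs, n = 1000.0, 4096
a_true, f_true, phi_true = 1.5, 137.0, 0.8
tt = np.arange(n) / fs
x = a_true * np.sin(2 * np.pi * f_true * tt + phi_true) + rng.normal(0, 0.3, n)

# Coarse frequency from the windowed periodogram peak ...
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
f_hat = freqs[np.argmax(spec)]

# ... then amplitude and phase by linear least squares at that frequency:
# a*sin(wt + phi) = (a cos phi) sin(wt) + (a sin phi) cos(wt)
A = np.column_stack([np.sin(2 * np.pi * f_hat * tt),
                     np.cos(2 * np.pi * f_hat * tt)])
c, *_ = np.linalg.lstsq(A, x, rcond=None)
a_hat, phi_hat = np.hypot(c[0], c[1]), np.arctan2(c[1], c[0])

print(f_hat, a_hat)
```

The residual frequency error here is limited by the FFT bin spacing (fs/n); the article's informed approach is precisely about beating such estimation limits with embedded side information.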

  16. Estimation of atmospheric parameters from time-lapse imagery

    Science.gov (United States)

    McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.

    2016-05-01

    A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure function constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter.) The weighting functions derived for pixel sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed that this approach is somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
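The pseudo-inverse combination step can be sketched as follows. The exponential weighting functions and the target profile below are illustrative shapes only, not the actual tilt-variance kernels, and the path discretisation is hypothetical:

```python
import numpy as np

# Hypothetical discretisation of the optical path into segments.
m = 50
z = np.linspace(0.0, 1.0, m)       # normalised position: 0 = camera, 1 = object

# Toy weighting functions for tilt-variance measurements over patches of
# increasing size: each peaks at the camera and decays toward the object,
# with larger patches decaying faster (shapes are illustrative only).
patch_decay = np.array([2.0, 4.0, 8.0, 16.0])
W = np.exp(-np.outer(patch_decay, z))          # rows: one per patch size

# Illustrative target weighting for the path integral we want (an r0-like
# quantity that down-weights turbulence near the object).
w_target = (1 - z) ** (5 / 3)

# Least-squares coefficients so the combined weighting best matches the
# target: solve W.T @ a ~= w_target via the Moore-Penrose pseudo-inverse.
a = np.linalg.pinv(W.T) @ w_target

# Apply the combination to synthetic noiseless measurements from a known
# Cn2 profile and compare with the directly weighted truth.
cn2 = 1e-15 * (1 + np.sin(3 * z) ** 2)
measurements = W @ cn2
estimate = a @ measurements
truth = w_target @ cn2
print(estimate, truth, a)
```

Printing `a` shows how the pseudo-inverse can assign combination weights well above one, which is the noise-amplification issue the abstract reports.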

  17. Visco-piezo-elastic parameter estimation in laminated plate structures

    DEFF Research Database (Denmark)

    Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.;

    2009-01-01

    A parameter estimation technique is presented in this article for the identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method uses experimental data in the form of a set of measured natural frequencies of free vibration and corresponding modal loss factors. An equivalent single layer higher order numerical model is used for the free vibration analysis of active laminated plate structures, and the response of the model is adjusted in order to match the experimental data. Results are presented for the estimation of elastic, piezoelectric and viscoelastic properties in laminated plates.

  18. J-A Hysteresis Model Parameters Estimation using GA

    Directory of Open Access Journals (Sweden)

    Bogomir Zidaric

    2005-01-01

    Full Text Available This paper presents the Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the Jiles and Atherton hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions. Finding the best solution of a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty, based on a genetic algorithm built into another genetic algorithm.
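A single-level real-coded GA of the kind used as a building block here might look like the sketch below. The saturating test curve and all GA settings are invented for illustration; the paper's nested-GA construction and the J-A model itself are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy identification target: recover (a, b) of y = a*x/(b + x) from noisy data.
x = np.linspace(0.1, 5.0, 40)
a_true, b_true = 2.0, 0.7
y = a_true * x / (b_true + x) + rng.normal(0, 0.02, x.size)

def fitness(pop):
    # Sum of squared errors for each candidate (lower is better).
    pred = pop[:, 0:1] * x / (pop[:, 1:2] + x)
    return np.sum((pred - y) ** 2, axis=1)

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.01, 5.0, (60, 2))
for _ in range(150):
    f = fitness(pop)
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])   # tournaments
    alpha = rng.random((len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0, 0.05, children.shape)              # mutation
    children = np.clip(children, 1e-3, 10.0)
    elite = pop[np.argmin(f)]       # elitism: never lose the best individual
    pop = children
    pop[0] = elite

best = pop[np.argmin(fitness(pop))]
print(best)
```

In the nested scheme described by the abstract, an outer GA of this form would tune the settings (population size, mutation scale, and so on) of an inner GA performing the actual J-A parameter search.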

  19. Estimation and Bias Correction of Aerosol Abundance using Data-driven Machine Learning and Remote Sensing

    Science.gov (United States)

    Malakar, Nabin K.; Lary, D. L.; Moore, A.; Gencaga, D.; Roscoe, B.; Albayrak, Arif; Petrenko, Maksym; Wei, Jennifer

    2012-01-01

    Air quality information is increasingly becoming a public health concern, since some aerosol particles have harmful effects on people's health. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. A comparison between the AOD measured from the ground-based Aerosol Robotic Network (AERONET) system and the satellite MODIS instruments at 550 nm shows that there is a bias between the two data products. We performed a comprehensive analysis exploring possible factors which may be contributing to the inter-instrumental bias between MODIS and AERONET. The analysis used several measured variables, including the MODIS AOD, as input in order to train a neural network in regression mode to predict the AERONET AOD values. This not only allowed us to obtain an estimate, but also allowed us to infer the optimal sets of variables that played an important role in the prediction. In addition, we applied machine learning to infer the global abundance of ground-level PM2.5 from the AOD data and other ancillary satellite and meteorology products. This research is part of our goal to provide air quality information, which can also be useful for global epidemiology studies.

  20. On an algebraic method for derivatives estimation and parameter estimation for partial derivatives systems

    OpenAIRE

    Ushirobira, Rosane; Korporal, Anja; PERRUQUETTI, Wilfrid

    2014-01-01

    International audience — In this communication, we discuss two estimation problems dealing with partial derivative systems: estimating the partial derivatives of a multivariate noisy signal, and identifying the parameters of partial differential equations. The multivariate noisy signal is expressed as a truncated Taylor expansion in a small time interval. An algebraic method can then be used to estimate its partial derivatives in the operational domain. The same approach applies for the ...

  1. Specification and estimation of sources of bias affecting neurological studies in PET/MR with an anatomical brain phantom

    International Nuclear Information System (INIS)

    Selection of reconstruction parameters has an effect on image quantification in PET, with an additional contribution from the scanner-specific attenuation correction method. To achieve comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR is performed using an anatomical brain phantom, to identify and measure the amount of bias caused by differences in reconstruction and attenuation correction methods, especially in PET/MR. Differences were estimated by visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis measuring the reproduction of anatomical structures and the contribution of the number of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, with the HRRT considered as the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although the image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the number of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of the activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map ignoring the bone. Thus, further improvements for the PET/MR reconstruction and attenuation correction could be achieved by optimization of RAMLA-specific reconstruction parameters and inclusion of bone in the attenuation template. -- Highlights: • Comparison between PET, PET/CT and PET/MR was performed with a novel brain phantom. • The performance of reconstruction and attenuation correction in PET/MR was studied. • A recently developed brain phantom was found feasible for PET/MR imaging. • Contrast reduction

  2. Are risk estimates biased in follow-up studies of psychosocial factors with low base-line participation?

    DEFF Research Database (Denmark)

    Kaerlev, Linda; Kolstad, Henrik A; Hansen, Ase Marie;

    2011-01-01

    Low participation in population-based follow-up studies addressing psychosocial risk factors may cause biased estimation of health risk but the issue has seldom been examined. We compared risk estimates for selected health outcomes among respondents and the entire source population....

  3. Analysis of neutron scattering data: Visualization and parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Beauchamp, J.J.; Fedorov, V.; Hamilton, W.A.; Yethiraj, M.

    1998-09-01

    Traditionally, small-angle neutron and x-ray scattering (SANS and SAXS) data analysis requires measurements of the signal and corrections due to the empty sample container, detector efficiency and time-dependent background. These corrections are then made on a pixel-by-pixel basis and estimates of relevant parameters (e.g., the radius of gyration) are made using the corrected data. This study was carried out in order to determine whether treatment of the detector efficiency and empty sample cell in a more statistically sound way would significantly reduce the uncertainties in the parameter estimators. Elements of experiment design are briefly discussed in this paper. For instance, we studied how the time for a measurement should be optimally divided between counting for signal, background and detector efficiency. In Section 2 we introduce the commonly accepted models for small-angle neutron and x-ray scattering and confine ourselves to the Guinier and Rayleigh models and their minor generalizations. The traditional approaches to data analysis are discussed only to the extent necessary to allow their comparison with the proposed techniques. Section 3 describes the main stages of the proposed method: visual data exploration, fitting the detector sensitivity function, and fitting a compound model. This model includes three additive terms describing scattering by the sample, scattering by the empty container, and background noise. We compare a few alternatives for the first term by applying various scatter plots and computing sums of standardized squared residuals. Possible corrections due to smearing effects and randomness of estimated parameters are also briefly discussed. In Section 4 the robustness of the estimators with respect to lower and upper bounds imposed on the momentum value is discussed. We show that for the available data set the most accurate and stable estimates are generated by models containing double terms either of Guinier's or Rayleigh

  4. CosmoSIS: A System for MC Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore, such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  5. PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Samir Kamel Ashour

    2010-12-01

    Full Text Available Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are the Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of the parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and can be used for constructing asymptotic confidence intervals.
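For the simpler complete-sample case with a known scale, the shape MLE of the Lomax distribution has a closed form, which the sketch below verifies on simulated data (no censoring is applied; hybrid censoring changes the likelihood and removes this closed form):

```python
import math
import random

random.seed(4)

# Lomax (Pareto type II) with shape alpha and scale lam:
#   F(x) = 1 - (1 + x/lam)^(-alpha),  x >= 0.
alpha_true, lam = 3.0, 2.0

# Inverse-CDF sampling: x = lam * (u^(-1/alpha) - 1) with u ~ Uniform(0, 1).
n = 20000
xs = [lam * (random.random() ** (-1.0 / alpha_true) - 1.0) for _ in range(n)]

# With lam known, setting d/d_alpha of the log-likelihood
#   n*log(alpha) - (alpha + 1) * sum(log(1 + x_i/lam))
# to zero gives the closed-form shape MLE:
alpha_hat = n / sum(math.log(1.0 + x / lam) for x in xs)
print(alpha_hat)   # ≈ 3.0 (the true shape)
```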

  6. Estimating parameters of chaotic systems synchronized by external driving signal

    Energy Technology Data Exchange (ETDEWEB)

    Wu Xiaogang [Institute of PR and AI, Huazhong University of Science and Technology, Wuhan 430074 (China)]. E-mail: seanwoo@mail.hust.edu.cn; Wang Zuxi [Institute of PR and AI, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2007-07-15

    Noise-induced synchronization (NIS) has evoked great research interest recently. Two uncoupled identical chaotic systems can achieve complete synchronization (CS) by feeding in a common noise with appropriate intensity. Actually, NIS belongs to the category of external feedback control (EFC). The significance of applying EFC in secure communication lies in the fact that the trajectory of chaotic systems is disturbed so strongly by the external driving signal that phase space reconstruction attack fails. In this paper, however, we propose an approach that can accurately estimate the parameters of chaotic systems synchronized by an external driving signal through the chaotic transmitted signal, the driving signal and their derivatives. Numerical simulation indicates that this approach can estimate the system parameters and external coupling strength under two driving modes in a very rapid manner, which implies that EFC is not superior to other methods in secure communication.

  7. Estimating parameters of chaotic systems synchronized by external driving signal

    International Nuclear Information System (INIS)

    Noise-induced synchronization (NIS) has evoked great research interest recently. Two uncoupled identical chaotic systems can achieve complete synchronization (CS) by feeding in a common noise with appropriate intensity. Actually, NIS belongs to the category of external feedback control (EFC). The significance of applying EFC in secure communication lies in the fact that the trajectory of chaotic systems is disturbed so strongly by the external driving signal that phase space reconstruction attack fails. In this paper, however, we propose an approach that can accurately estimate the parameters of chaotic systems synchronized by an external driving signal through the chaotic transmitted signal, the driving signal and their derivatives. Numerical simulation indicates that this approach can estimate the system parameters and external coupling strength under two driving modes in a very rapid manner, which implies that EFC is not superior to other methods in secure communication

  8. A Bayesian framework for parameter estimation in dynamical models.

    Directory of Open Access Journals (Sweden)

    Flávio Codeço Coelho

    Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
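The generic Bayesian machinery can be illustrated on a deliberately simple conjugate model, where a random-walk Metropolis sampler can be checked against the exact posterior; the data and prior below are invented for the demo (an SIR fit replaces the Poisson likelihood with the epidemic model's likelihood but keeps the same sampler):

```python
import math
import random

random.seed(5)

# Toy counting model: Poisson counts with unknown rate theta, Gamma(a0, b0)
# prior.  The conjugate posterior Gamma(a0 + sum(y), b0 + n) gives an exact
# answer to check the sampler against.
data = [3, 5, 4, 6, 2, 4, 5, 3, 4, 5]
a0, b0 = 2.0, 1.0

def log_post(theta):
    if theta <= 0:
        return -math.inf
    # log prior + log likelihood, up to additive constants
    lp = (a0 - 1) * math.log(theta) - b0 * theta
    lp += sum(y * math.log(theta) - theta for y in data)
    return lp

# Random-walk Metropolis: propose a Gaussian step, accept with the
# Metropolis ratio, discard a burn-in period.
theta, samples = 1.0, []
for it in range(30000):
    prop = theta + random.gauss(0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    if it >= 5000:
        samples.append(theta)

post_mean = sum(samples) / len(samples)
exact_mean = (a0 + sum(data)) / (b0 + len(data))
print(post_mean, exact_mean)
```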

  9. Limitations to the method of power spectrum analysis: Nonstationarity, biased estimators, and weak convergence to normality

    Energy Technology Data Exchange (ETDEWEB)

    Newman, W.I. (California Univ., Los Angeles, CA (USA) Los Alamos National Lab., NM (USA)); Haynes, M.P.; Terzian, Y. (Cornell Univ., Ithaca, NY (USA))

    1991-01-01

    The "Power Spectrum Analysis" method developed by Yu and Peebles has been widely employed as a technique for establishing the existence of periodicities. This method generates a sequence of random numbers from observational data which, it was claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process preserves a subtle imprint of the original distribution, rendering the derived process nonstationary and producing a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. 22 refs.

  10. On optimal detection and estimation of the FCN parameters

    Science.gov (United States)

    Yatskiv, Y.

    2009-09-01

    A statistical approach for the detection and estimation of parameters of short-term quasi-periodic processes was used in order to investigate the Free Core Nutation (FCN) signal in the Celestial Pole Offset (CPO). The results show that this signal is very unstable and that it disappeared in the year 2000. The amplitude of the oscillation with a period of about 435 days is larger for dX than for dY.

  11. Estimation of Secondary Meteorological Parameters Using Mining Data Techniques

    OpenAIRE

    Rosabel Zerquera Díaz; Ayleen Morales Montejo; Gil Cruz Lemus; Alejandro Rosete Suárez

    2010-01-01

    This work develops a process of Knowledge Discovery in Databases (KDD) at the Higher Polytechnic Institute José Antonio Echeverría for the group of Environmental Research in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA) in order to obtain a data model to estimate the behavior of secondary weather parameters from surface data. It describes some aspects of Data Mining and its application in the meteorological environment, also selects and describes...

  12. Bayesian estimation of parameters in a regional hydrological model

    OpenAIRE

    Engeland, K.; Gottschalk, L.

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood funct...

  13. Bayesian estimation of parameters in a regional hydrological model

    OpenAIRE

    Engeland, K.; Gottschalk, L.

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of ...

  14. Kinetic parameter estimation from TGA: Optimal design of TGA experiments

    OpenAIRE

    Dirion, Jean-Louis; Reverte, Cédric; Cabassud, Michel

    2008-01-01

    This work presents a general methodology to determine kinetic models of solid thermal decomposition with thermogravimetric analysis (TGA) instruments. The goal is to determine a simple and robust kinetic model for a given solid with the minimum of TGA experiments. From this last point of view, this work can be seen as an attempt to find the optimal design of TGA experiments for kinetic modelling. Two computation tools were developed. The first is a nonlinear parameter estimation procedure for...

  15. Estimation of parameters of interior permanent magnet synchronous motors

    CERN Document Server

    Hwang, C C; Pan, C T; Chang, T Y

    2002-01-01

    This paper presents a magnetic circuit model for the estimation of machine parameters of an interior permanent magnet synchronous machine. It extends the earlier work of Hwang and Cho that focused mainly on the magnetic aspects of motor design. The proposed model is used to calculate the EMF and the d- and q-axis reactances. These calculations are compared to those from finite element analysis and measurement, with good agreement.

  16. Parameter estimation for fractional birth and fractional death processes

    OpenAIRE

    Cahoy, Dexter O.; Polito, Federico

    2013-01-01

    The fractional birth and the fractional death processes are more desirable in practice than their classical counterparts as they naturally provide greater flexibility in modeling growing and decreasing systems. In this paper, we propose formal parameter estimation procedures for the fractional Yule, the fractional linear death, and the fractional sublinear death processes. The methods use all available data possible, are computationally simple and asymptotically unbiased. The procedures explo...

  17. Estimation of water diffusivity parameters on grape dynamic drying

    OpenAIRE

    Ramos, Inês N.; Miranda, João M.R.; Brandão, Teresa R. S.; Cristina L.M. Silva

    2010-01-01

    A computer program was developed, aiming at estimating water diffusivity parameters in a dynamic drying process with grapes, assessing the predictability of corresponding non-isothermal drying curves. It numerically solves Fick’s second law for a sphere, by explicit finite differences, in a shrinking system, with anisotropic properties and changing boundary conditions. Experiments were performed in a pilot convective dryer, with simulated air conditions observed in a solar dryer, for modellin...
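An explicit finite-difference scheme for Fick's second law in a sphere can be sketched as below, under much simpler assumptions than the paper's (fixed geometry, constant diffusivity, zero surface concentration; the shrinkage, anisotropy and changing boundary conditions are omitted, and all numbers are invented):

```python
import numpy as np

# Explicit finite differences for Fick's second law in a sphere,
#   dC/dt = D * (d2C/dr2 + (2/r) * dC/dr),
# with zero concentration held at the surface and symmetry at the centre.
D, R = 1e-9, 0.01                  # diffusivity (m^2/s) and radius (m), toy values
m = 50
dr = R / m
dt = 0.1 * dr**2 / D               # below the explicit stability limit dr^2/(6 D)
r = np.linspace(0.0, R, m + 1)

C = np.ones(m + 1)                 # uniform initial (normalised) concentration
C[-1] = 0.0                        # surface boundary condition
for _ in range(2000):
    Cn = C.copy()
    # interior nodes: central differences for both derivative terms
    Cn[1:-1] = C[1:-1] + D * dt * (
        (C[2:] - 2 * C[1:-1] + C[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (C[2:] - C[:-2]) / (2 * dr)
    )
    # centre node: symmetry (dC/dr = 0) gives dC/dt = 6 D (C[1] - C[0]) / dr^2
    Cn[0] = C[0] + 6 * D * dt * (C[1] - C[0]) / dr**2
    C = Cn

print(C[0])                        # the centre dries slowest
```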

  18. Iterative importance sampling algorithms for parameter estimation problems

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one approximates a posterior distribution over uncertain parameters defined jointly by a prior distribution, a numerical model, and noisy data. Typically, Markov Chain Monte Carlo (MCMC) is used for the numerical solution of such problems. An alternative to MCMC is importance sampling, where one draws samples from a proposal distribution, and attaches weights to each sample to account for the fact that the proposal distribution is not the posterior distribut...
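Self-normalised importance sampling can be sketched on a toy Gaussian posterior whose exact mean is known; the proposal width and observation below are arbitrary choices for the demo:

```python
import math
import random

random.seed(6)

# Posterior of a Gaussian mean: prior N(0, 2^2), one observation y = 1.5 with
# noise sd 1.  The conjugate posterior is N(1.2, 0.8), an exact check.
y, prior_sd, noise_sd = 1.5, 2.0, 1.0
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / noise_sd**2)    # = 0.8
post_mean = post_var * y / noise_sd**2                       # = 1.2

def log_target(mu):
    # log prior + log likelihood, up to additive constants
    return -mu**2 / (2 * prior_sd**2) - (y - mu)**2 / (2 * noise_sd**2)

# Self-normalised importance sampling with a broad Gaussian proposal:
# weight = target density / proposal density, evaluated in log space.
prop_sd = 3.0
n = 100000
samples = [random.gauss(0.0, prop_sd) for _ in range(n)]
log_w = [log_target(mu) + mu**2 / (2 * prop_sd**2) for mu in samples]
mx = max(log_w)                                # stabilise the exponentials
w = [math.exp(lw - mx) for lw in log_w]
est_mean = sum(wi * mi for wi, mi in zip(w, samples)) / sum(w)
print(est_mean, post_mean)
```

The iterative algorithms of the record refine the proposal between rounds; this sketch shows only the single-round weighting step they build on.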

  19. Estimation of Parameters in Mean-Reverting Stochastic Systems

    OpenAIRE

    2014-01-01

    A stochastic differential equation (SDE) is a very important mathematical tool to describe complex systems in which noise plays an important role. SDE models have been widely used to study the dynamic properties of various nonlinear systems in biology, engineering, finance, and economics, as well as the physical sciences. Since an SDE can generate an unlimited number of trajectories, it is difficult to estimate model parameters based on experimental observations which may represent only one trajectory...

  20. Marginalized Particle Filters for Bayesian Estimation of Gaussian Noise Parameters

    Czech Academy of Sciences Publication Activity Database

    Saha, S.; Okzan, E.; Gustafsson, F.; Šmídl, Václav

    Edinburgh : IET, 2010, s. 1-8. ISBN 978-0-9824438-1-1. [13th International Conference on Information Fusion. Edinburgh (GB), 26.07.2010-29.07.2010] Institutional research plan: CEZ:AV0Z10750506 Keywords : marginalized particle filter * unknown noise statistics * bayesian conjugate prior Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2010/AS/smidl-marginalized particle filters for bayesian estimation of gaussian noise parameters.pdf

  1. Tracking Biases: An Update to the Validity and Reliability of Alcohol Retail Sales Data for Estimating Population Consumption in Scotland

    OpenAIRE

    Henderson, Audrey; Robinson, Mark; McAdams, Rachel; McCartney, Gerry; Beeston, Clare

    2015-01-01

    Purchase of the sales data was funded by the Scottish Government as part of the wider Monitoring and Evaluating Scotland's Alcohol Strategy portfolio of studies. Funding to pay the Open Access publication charges for this article was provided by NHS Health Scotland. Aims: To highlight the importance of monitoring biases when using retail sales data to estimate population alcohol consumption. Methods: Previously, we identified and where possible quantified sources of bias that may lead to u...

  2. Multi-criteria parameter estimation for the unified land model

    Directory of Open Access Journals (Sweden)

    B. Livneh

    2012-04-01

    Full Text Available We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the Continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water-balance-based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting model, is the basis for these experiments. Calibrations were made using each of the criteria individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  3. Multi-criteria parameter estimation for the Unified Land Model

    Directory of Open Access Journals (Sweden)

    B. Livneh

    2012-08-01

    Full Text Available We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water-balance-based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the data sets individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit by supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  4. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithms (GAs) optimization procedure for the estimation of such parameters. The Genetic Algorithms' search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive with the set of best solutions found at each generation, and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and only later those which have little influence on the model outputs. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The
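
    The archive-statistics idea can be sketched as follows: a minimal GA on a two-parameter toy objective, tracking the spread of each parameter among the best solutions per generation. The operators, objective, and all numerical settings are illustrative assumptions, not the reactor model from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(p):
    a, b = p
    # parameter 'a' dominates the objective; 'b' barely matters
    return 10.0 * (a - 3.0) ** 2 + 0.01 * (b - 1.0) ** 2

def ga_minimize(n_pop=40, n_gen=60, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_pop, 2))
    spread_history = []                          # archive statistics per generation
    for _ in range(n_gen):
        f = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(f)[: n_pop // 2]]  # archive of best solutions
        spread_history.append(elite.std(axis=0))
        # tournament selection of parents
        idx = rng.integers(0, n_pop, size=(n_pop, 2))
        parents = np.where((f[idx[:, 0]] < f[idx[:, 1]])[:, None],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # blend crossover followed by Gaussian mutation
        partners = parents[rng.permutation(n_pop)]
        w = rng.uniform(size=(n_pop, 1))
        pop = np.clip(w * parents + (1 - w) * partners
                      + rng.normal(0.0, 0.1, size=(n_pop, 2)), lo, hi)
    best = pop[np.argmin([fitness(p) for p in pop])]
    return best, np.array(spread_history)

best, spreads = ga_minimize()
```

    With this setup the influential parameter's spread among the archived best solutions tends to collapse generations before the insensitive one does, which is the qualitative importance ranking the paper extracts.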

  5. Estimation of stellar atmospheric parameters from SDSS/SEGUE spectra

    Science.gov (United States)

    Re Fiorentin, P.; Bailer-Jones, C. A. L.; Lee, Y. S.; Beers, T. C.; Sivarani, T.; Wilhelm, R.; Allende Prieto, C.; Norris, J. E.

    2007-06-01

    We present techniques for the estimation of stellar atmospheric parameters (T_eff, log g, [Fe/H]) for stars from the SDSS/SEGUE survey. The atmospheric parameters are derived from the observed medium-resolution (R = 2000) stellar spectra using non-linear regression models trained either on (1) pre-classified observed data or (2) synthetic stellar spectra. In the first case we use our models to automate and generalize parametrization produced by a preliminary version of the SDSS/SEGUE Spectroscopic Parameter Pipeline (SSPP). In the second case we directly model the mapping between synthetic spectra (derived from Kurucz model atmospheres) and the atmospheric parameters, independently of any intermediate estimates. After training, we apply our models to various samples of SDSS spectra to derive atmospheric parameters, and compare our results with those obtained previously by the SSPP for the same samples. We obtain consistency between the two approaches, with RMS deviations on the order of 150 K in T_eff, 0.35 dex in log g, and 0.22 dex in [Fe/H]. The models are applied to pre-processed spectra, either via Principal Component Analysis (PCA) or a Wavelength Range Selection (WRS) method, which employs a subset of the full 3850-9000 Å spectral range. This is both for computational reasons (robustness and speed) and because it delivers higher accuracy (better generalization of what the models have learned). Broadly speaking, the PCA is demonstrated to deliver more accurate atmospheric parameters when the training data are the actual SDSS spectra with previously estimated parameters, whereas WRS appears superior for the estimation of log g via synthetic templates, especially for lower signal-to-noise spectra. From a subsample of some 19 000 stars with previous determinations of the atmospheric parameters, the accuracies of our predictions (mean absolute errors) for each parameter are T_eff to 170/170 K, log g to 0.36/0.45 dex, and [Fe/H] to 0.19/0.26 dex, for methods (1
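
    A minimal sketch of the PCA-plus-regression pipeline described above, with synthetic "spectra" and plain linear regression standing in for the paper's non-linear models; all data, dimensions, and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "spectra": 300 stars x 100 pixels whose continuum slope tracks T_eff
n_stars, n_pix = 300, 100
teff = rng.uniform(4500.0, 7500.0, n_stars)
wave = np.linspace(0.0, 1.0, n_pix)
spectra = (np.outer((teff - 6000.0) / 1000.0, wave)         # T_eff-dependent slope
           + np.exp(-((wave[None, :] - 0.5) ** 2) / 0.01)   # a fixed "line" feature
           + rng.normal(0.0, 0.05, (n_stars, n_pix)))       # pixel noise

# PCA compression via SVD of the mean-subtracted spectra
mean_spec = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
n_comp = 5
scores = (spectra - mean_spec) @ Vt[:n_comp].T   # projection onto top components

# regression from PCA scores to T_eff (linear here; the paper uses
# non-linear regression models)
X = np.hstack([scores, np.ones((n_stars, 1))])
coef, *_ = np.linalg.lstsq(X, teff, rcond=None)
pred = X @ coef
rms = np.sqrt(np.mean((pred - teff) ** 2))
```

    Compressing to a handful of components before the regression is the "computational reasons" point in the abstract: the regression sees 5 inputs instead of 100 pixels, and the noise outside the leading components is discarded.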

  6. Estimating hydraulic parameters when poroelastic effects are significant.

    Science.gov (United States)

    Berg, Steven J; Hsieh, Paul A; Illman, Walter A

    2011-01-01

    For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. PMID:21204832
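
    For reference, the "traditional groundwater theory" against which the synthetic poroelastic data are analyzed is exemplified by the Theis solution. A minimal sketch, assuming SciPy is available, with illustrative aquifer values:

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Drawdown s(r, t) from the Theis solution of traditional
    (non-poroelastic) groundwater theory: s = Q/(4 pi T) * W(u),
    with u = r^2 S / (4 T t) and W the exponential integral E1."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# pumping at 0.01 m^3/s, observed 50 m away; T = 5e-3 m^2/s, S = 1e-4
t = np.logspace(2, 6, 50)                 # 100 s to ~11.6 days
s = theis_drawdown(50.0, t, 0.01, 5e-3, 1e-4)
```

    Fitting curves like `s(t)` to observed drawdown is the type-curve matching the study applies to its poroelastic synthetic data; the paper's finding is that such fits remain reasonable once deformation-induced transients have dissipated.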

  7. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    Science.gov (United States)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication systems. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20% of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori and reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85 km long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding in the period from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective because it reduced
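
    The attenuation-to-rain-rate inversion and a constant-offset wet-antenna correction can be sketched as follows. The power-law coefficients and the 0.8 dB offset are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

# power law A = a * R**b * L (dB); a, b here are illustrative values
# for a ~38 GHz link, not calibrated constants
a, b, L = 0.4, 1.0, 1.85          # L: path length in km

def rain_rate(A_measured, A_wet=0.0):
    """Invert attenuation to a path-averaged rain rate after removing a
    constant wet-antenna offset A_wet (a minimal Schleiss-style correction)."""
    A_rain = np.maximum(A_measured - A_wet, 0.0)
    return (A_rain / (a * L)) ** (1.0 / b)

true_R = np.array([0.0, 2.0, 10.0, 30.0])            # mm/h
A_true = a * true_R ** b * L
A_obs = A_true + np.where(true_R > 0, 0.8, 0.0)      # 0.8 dB wet-antenna bias

R_raw = rain_rate(A_obs)             # positively biased, as in the abstract
R_corr = rain_rate(A_obs, A_wet=0.8)  # bias removed when the offset is known
```

    The experiment's point is precisely that `A_wet` is hard to know a priori, which is why physical shielding is an attractive alternative to this kind of a posteriori correction.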

  8. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki eKimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
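
    A minimal sketch of squared-error minimization by a gradient approach, on a toy exponential-decay model; the model, data, learning rate, and iteration count are all illustrative assumptions.

```python
import numpy as np

# synthetic data from y = A * exp(-k t) with true A = 2, k = 0.8 (noise-free,
# so the error minimum sits at the true parameters)
t = np.linspace(0.0, 5.0, 25)
y = 2.0 * np.exp(-0.8 * t)

def grad_mse(A, k):
    """Gradient of the mean squared error with respect to the two parameters."""
    r = A * np.exp(-k * t) - y
    dA = 2.0 * np.mean(r * np.exp(-k * t))
    dk = 2.0 * np.mean(r * A * (-t) * np.exp(-k * t))
    return dA, dk

# plain gradient descent from a deliberately wrong starting guess
A, k, lr = 1.0, 0.3, 0.05
for _ in range(50000):
    dA, dk = grad_mse(A, k)
    A -= lr * dA
    k -= lr * dk
```

    On a multi-modal problem this plain descent would stop at the nearest local minimum; that is the failure mode the article's stochastic and sampling approaches are meant to escape.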

  9. Learn-As-You-Go Acceleration of Cosmological Parameter Estimates

    CERN Document Server

    Aslanyan, Grigor; Price, Layne C

    2015-01-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning-based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of $\Lambda$CDM posterior probabilities. The computation is significantly accelerated wit...
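
    A toy learn-as-you-go loop, with nearest-neighbour look-up standing in for the paper's emulator and a cheap Gaussian standing in for the Planck likelihood. The trust-radius rule is an assumed error control for illustration, not Cosmo++'s actual error model.

```python
import numpy as np

def exact_loglike(theta):
    """Stand-in for an expensive likelihood (here a cheap Gaussian)."""
    return -0.5 * np.sum((theta - 1.0) ** 2)

class LearnAsYouGo:
    """Minimal emulator: nearest-neighbour prediction within a trust radius.
    Queries outside the radius are computed exactly and added to the
    training set, so the emulator improves as the chain explores."""
    def __init__(self, radius=0.2):
        self.radius = radius
        self.X, self.y = [], []
        self.n_exact = 0

    def __call__(self, theta):
        if self.X:
            d = np.linalg.norm(np.array(self.X) - theta, axis=1)
            i = int(np.argmin(d))
            if d[i] < self.radius:
                return self.y[i]          # trusted emulated value
        val = exact_loglike(theta)        # fall back to the exact call
        self.n_exact += 1
        self.X.append(np.array(theta))
        self.y.append(val)
        return val

rng = np.random.default_rng(1)
ll = LearnAsYouGo()
samples = rng.normal(1.0, 0.3, size=(2000, 2))
vals = [ll(s) for s in samples]
fraction_exact = ll.n_exact / len(samples)   # most calls end up emulated
```

    The paper's contribution beyond this caricature is quantifying the emulation error and propagating it into the posterior, rather than relying on a fixed radius.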

  10. Estimating demographic parameters using hidden process dynamic models.

    Science.gov (United States)

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775
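
    A minimal hidden Markov model sketch for imperfect detection: the forward algorithm for an alive/dead process with survival probability phi and detection probability p. This is a textbook capture-recapture formulation used to illustrate the idea, not the paper's exact case-study models.

```python
import numpy as np

def capture_history_loglik(history, phi, p):
    """Forward algorithm for a two-state HMM (alive/dead) with imperfect
    detection. 'history' is a 0/1 detection sequence starting at first capture."""
    # states: 0 = alive, 1 = dead
    trans = np.array([[phi, 1.0 - phi],
                      [0.0, 1.0]])
    # emission[state][obs]: dead animals are never detected
    emit = np.array([[1.0 - p, p],
                     [1.0, 0.0]])
    alpha = np.array([1.0, 0.0])          # known alive at first capture
    for obs in history[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return np.log(alpha.sum())

# probability of history 1,0,1: survive twice, missed once, detected once
ll = capture_history_loglik([1, 0, 1], phi=0.9, p=0.7)
```

    The forward recursion marginalizes over the unknown true states, which is exactly how these models separate process variation (survival) from observation error (detection).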

  11. Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition

    Directory of Open Access Journals (Sweden)

    Yuxing Mao

    2014-06-01

    Full Text Available Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA). The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs) of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, the different numbers of CDRs of two candidate shapes are computed as representations of the shapes, respectively. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
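
    The affine invariance of centroid distance ratios rests on the fact that affine maps preserve ratios of collinear segments, and opposite contour points are collinear with the barycentre. A small numerical check of this property (the star-shaped contour, map, and sampling are all invented for illustration):

```python
import numpy as np

# a star-shaped contour in polar form: p(theta) = c + r(theta) * [cos, sin]
thetas = np.linspace(0.0, np.pi, 64, endpoint=False)
r = lambda th: 1.0 + 0.4 * np.cos(3 * th) + 0.2 * np.sin(2 * th)

def cdrs(M, t, c):
    """Centroid distance ratios of opposite contour points under the affine
    map x -> M x + t. Opposite points are collinear with the centroid c,
    so each ratio reduces to r(theta) / r(theta + pi) regardless of M, t."""
    out = []
    for th in thetas:
        u = np.array([np.cos(th), np.sin(th)])
        p_plus = M @ (c + r(th) * u) + t
        p_minus = M @ (c - r(th + np.pi) * u) + t
        cc = M @ c + t
        out.append(np.linalg.norm(p_plus - cc) / np.linalg.norm(p_minus - cc))
    return np.array(out)

c = np.array([0.3, -0.2])
affine = np.array([[2.0, 0.7], [-0.3, 1.5]])   # arbitrary non-singular map
shift = np.array([1.0, -4.0])
base = cdrs(np.eye(2), np.zeros(2), c)
mapped = cdrs(affine, shift, c)                # identical CDRs
```

    This invariance is what lets the GA compare CDR vectors of two shapes directly while searching only over the affine parameters.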

  12. Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems

    DEFF Research Database (Denmark)

    Knudsen, Morten

    for certain input in the time or frequency domain, are emphasised. Consequently, some special techniques are required, in particular for input signal design and model validation. The model structure containing physical parameters is constructed from basic physical laws (mathematical modelling). It is...... possible and essential to utilise this physical insight in the input design and validation procedures. This project has two objectives: 1. To develop and apply theories and techniques that are compatible with physical insight and robust to violation of assumptions and approximations, for system...... sensitivity and the relative parameter variance and confidence ellipsoid is demonstrated. The relation is based on a new theorem on maxima of an ellipsoid. The procedure for input signal design and physical parameter estimation is tested on a number of examples, linear as well as nonlinear and simulated as...

  13. Parameter estimation in a spatial unit root autoregressive model

    CERN Document Server

    Baran, Sándor

    2011-01-01

    Spatial autoregressive model $X_{k,\\ell}=\\alpha X_{k-1,\\ell}+\\beta X_{k,\\ell-1}+\\gamma X_{k-1,\\ell-1}+\\epsilon_{k,\\ell}$ is investigated in the unit root case, that is when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1), \\ (1,-1,1),\\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters are in the faces or on the edges of the tetrahedron, while on the vertices the rate is $n^{3/2}$.
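
    A least squares estimation sketch for this spatial autoregression, simulated with stable interior parameters rather than the unit-root boundary case the paper analyzes; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, g = 0.4, 0.3, -0.2      # stable (interior) parameters for the demo
n = 200
X = np.zeros((n + 1, n + 1))
for k in range(1, n + 1):
    for ell in range(1, n + 1):
        X[k, ell] = (a * X[k - 1, ell] + b * X[k, ell - 1]
                     + g * X[k - 1, ell - 1] + rng.normal())

# least squares on the three lagged regressors
Y = X[1:, 1:].ravel()
D = np.column_stack([X[:-1, 1:].ravel(),    # X_{k-1,l}
                     X[1:, :-1].ravel(),    # X_{k,l-1}
                     X[:-1, :-1].ravel()])  # X_{k-1,l-1}
est, *_ = np.linalg.lstsq(D, Y, rcond=None)
```

    The paper's result concerns what happens to this same estimator on the boundary tetrahedron: the limit stays normal, but the convergence rate jumps from n to n^{3/2} at the vertices.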

  14. Estimates of genetic parameters for fat yield in Murrah buffaloes

    Directory of Open Access Journals (Sweden)

    Manoj Kumar

    2016-03-01

    Full Text Available Aim: The present study was performed to investigate the effect of genetic and non-genetic factors affecting milk fat yield and to estimate genetic parameters of monthly test day fat yields (MTDFY) and lactation 305-day fat yield (L305FY) in Murrah buffaloes. Materials and Methods: Data on a total of 10381 MTDFY records comprising the first four lactations of 470 Murrah buffaloes calved from 1993 to 2014 were assessed. These buffaloes were sired by 75 bulls maintained in an organized farm at ICAR-National Dairy Research Institute, Karnal. A least squares maximum likelihood program was used to estimate genetic and non-genetic parameters. Heritability estimates were obtained using the paternal half-sib correlation method. Genetic and phenotypic correlations among MTDFY and 305-day fat yield were calculated from the analysis of variance and covariance matrix among sire groups. Results: The overall least squares mean of L305FY was found to be 175.74±4.12 kg. The least squares means of overall MTDFY ranged from 3.33±0.14 kg (TD-11) to 7.06±0.17 kg (TD-3). The h² estimate of L305FY was found to be 0.33±0.16 in this study. The estimates of phenotypic and genetic correlations between 305-day fat yield and different MTDFY ranged from 0.32 to 0.48 and 0.51 to 0.99, respectively. Conclusions: In this study, all the genetic and non-genetic factors except age at first calving group significantly affected the traits under study. The estimates of phenotypic and genetic correlations of MTDFY with 305-day fat yield were generally highest for MTDFY-5 of lactation, suggesting that this TD yield could be used as the selection criterion for early evaluation and selection of Murrah buffaloes.

  15. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
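
    Model averaging with data-driven model weights can be sketched with Akaike-style weights; the information-criterion values and per-model predictions below are invented for illustration, and the report's actual method uses calibration-updated posterior model probabilities rather than AIC.

```python
import numpy as np

def akaike_weights(aic):
    """Posterior-like model weights from AIC values, assuming equal prior
    model probabilities; a smaller AIC yields a larger weight."""
    aic = np.asarray(aic, float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# three alternative variogram models with hypothetical criterion scores
aic = [102.3, 104.1, 115.8]
w = akaike_weights(aic)

# model-averaged prediction: combine per-model kriging predictions
# (illustrative numbers) instead of picking a single "best" model
preds = np.array([2.10, 2.35, 1.80])
averaged = float(np.sum(w * preds))
```

    Note how the third model is effectively, but not abruptly, eliminated by its negligible weight; this mirrors the report's observation that standard criteria gave an ambiguous ranking that did not justify keeping only one model.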

  16. Recursive starlight and bias estimation for high-contrast imaging with an extended Kalman filter

    Science.gov (United States)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.

    2016-01-01

    For imaging faint exoplanets and disks, a coronagraph-equipped observatory needs focal plane wavefront correction to recover high contrast. The most efficient correction methods iteratively estimate the stellar electric field and suppress it with active optics. The estimation requires several images from the science camera per iteration. To maximize the science yield, it is desirable both to have fast wavefront correction and to utilize all the correction images for science target detection. Exoplanets and disks are incoherent with their stars, so a nonlinear estimator is required to estimate both the incoherent intensity and the stellar electric field. Such techniques assume a high level of stability found only on space-based observatories and possibly ground-based telescopes with extreme adaptive optics. In this paper, we implement a nonlinear estimator, the iterated extended Kalman filter (IEKF), to enable fast wavefront correction and a recursive, nearly-optimal estimate of the incoherent light. In Princeton's High Contrast Imaging Laboratory, we demonstrate that the IEKF allows wavefront correction at least as fast as with a Kalman filter and provides the most accurate detection of a faint companion. The nonlinear IEKF formalism allows us to pursue other strategies such as parameter estimation to improve wavefront correction.
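
    The re-linearization that distinguishes an IEKF from a plain EKF can be shown on a scalar toy problem with a quadratic (intensity-like) measurement; the numbers are illustrative and unrelated to the laboratory setup.

```python
import numpy as np

def iekf_update(x0, P0, y, R, h, h_jac, n_iter=10):
    """Iterated EKF measurement update: re-linearize h at the current
    posterior estimate instead of only at the prior. The plain EKF is the
    n_iter=1 special case."""
    x = x0
    for _ in range(n_iter):
        H = h_jac(x)
        S = H * P0 * H + R
        K = P0 * H / S
        x = x0 + K * (y - h(x) - H * (x0 - x))
    P = (1.0 - K * H) * P0
    return x, P

h = lambda x: x ** 2          # nonlinear, intensity-like measurement
h_jac = lambda x: 2.0 * x

# prior at x = 1.5, truth x = 2.0; observe y = 4.0 with small noise R
x_ekf, _ = iekf_update(1.5, 1.0, 4.0, 0.01, h, h_jac, n_iter=1)
x_iekf, _ = iekf_update(1.5, 1.0, 4.0, 0.01, h, h_jac, n_iter=10)
```

    With a strongly nonlinear measurement and an informative observation, the single-linearization EKF lands visibly off the maximum a posteriori value, while a few IEKF iterations converge onto it; that gap is the motivation for the iterated filter in the paper.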

  17. Quantifying lost information due to covariance matrix estimation in parameter inference

    CERN Document Server

    Sellentin, Elena

    2016-01-01

    Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing a Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit. We apply our results to DES Science Verification weak lensing data, detecting a 10% loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey, with about 10 nuisance parameters we find that 2900 simulations are sufficient to limit the systematically lost information to 1%, with an additional unc...
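
    The bias that arises when inverting an estimated covariance matrix can be demonstrated numerically. The (n - p - 2)/(n - 1) debiasing factor below is the standard Hartlap correction this literature builds on; dimensions and trial counts are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, trials = 5, 20, 2000           # p data bins, n simulations per estimate
true_cov = np.eye(p)

inv_traces = []
for _ in range(trials):
    sims = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    C_hat = np.cov(sims, rowvar=False)          # unbiased covariance estimate
    inv_traces.append(np.trace(np.linalg.inv(C_hat)))

mean_trace = np.mean(inv_traces)        # biased high: clearly exceeds p
hartlap = (n - p - 2) / (n - 1)         # Hartlap debiasing factor
debiased = hartlap * mean_trace         # ~ trace of the true inverse, i.e. p
```

    The unbiased covariance estimate has a biased inverse, which inflates the apparent Fisher information; the paper's point is that even after such debiasing, a genuine loss of information remains that must be quantified separately.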

  18. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    Directory of Open Access Journals (Sweden)

    Banga Julio R

    2006-11-01

    Full Text Available Abstract Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously

  19. Estimating bias from loss to follow-up in a prospective cohort study of bicycle crash injuries

    Science.gov (United States)

    Tin Tin, Sandar; Woodward, Alistair; Ameratunga, Shanthi

    2014-01-01

    Background Loss to follow-up, if related to exposures, confounders and outcomes of interest, may bias association estimates. We estimated the magnitude and direction of such bias in a prospective cohort study of crash injury among cyclists. Methods The Taupo Bicycle Study involved 2590 adult cyclists recruited from New Zealand's largest cycling event in 2006 and followed over a median period of 4.6 years through linkage to four administrative databases. We resurveyed the participants in 2009 and excluded three participants who died prior to the resurvey. We compared baseline characteristics and crash outcomes of the baseline (2006) and follow-up (those who responded in 2009) cohorts by ratios of relative frequencies, and estimated potential bias from loss to follow-up on seven exposure-outcome associations of interest by ratios of hazard ratios (HRs). Results Of the 2587 cyclists in the baseline cohort, 1526 (60%) responded to the follow-up survey. The responders were older, more educated and more socioeconomically advantaged. They were more experienced cyclists who often rode in a bunch, off-road or in the dark, but were less likely to engage in other risky cycling behaviours. Additionally, they experienced bicycle crashes more frequently during follow-up. The selection bias ranged between −10% and +9% for the selected associations. Conclusions Loss to follow-up was differential by demographic, cycling and behavioural risk characteristics as well as crash outcomes, but did not substantially bias association estimates of primary research interest. PMID:24336816

  20. Periodic orbits of hybrid systems and parameter estimation via AD

    International Nuclear Information System (INIS)

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method (GM00, Phi03). Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance

  1. Periodic orbits of hybrid systems and parameter estimation via AD.

    Energy Technology Data Exchange (ETDEWEB)

    Guckenheimer, John. (Cornell University); Phipps, Eric Todd; Casey, Richard (INRIA Sophia-Antipolis)

    2004-07-01

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
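
    A single-shooting version of the boundary value problem can be sketched on the smooth van der Pol oscillator (a stand-in for the paper's hybrid differential-algebraic systems), assuming SciPy: solve for an initial condition and a period such that the flow returns to its starting point.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def vdp(t, z, mu=1.0):
    """Van der Pol oscillator, a standard system with a stable limit cycle."""
    x, y = z
    return [y, mu * (1.0 - x ** 2) * y - x]

def shoot(q):
    """Residual for single shooting: the flow must return to its start after
    time T. The phase condition y(0) = 0 is built into the initial state."""
    x0, T = q
    sol = solve_ivp(vdp, (0.0, T), [x0, 0.0], rtol=1e-10, atol=1e-10)
    xe, ye = sol.y[:, -1]
    return [xe - x0, ye]

# Newton-type solve for the unknowns (x0, T) from a rough guess
x0, T = fsolve(shoot, [2.0, 6.5])
```

    The paper's setting replaces this off-the-shelf root finder with a Newton process on multiple-shooting equations and Taylor-series integration with automatic differentiation, which is what makes the approach viable for piecewise-smooth hybrid systems.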

  2. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least three important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the three considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from four different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions
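
    A sketch of simplex-based, derivative-free parameter estimation for diffusive release: the series solution for a sphere fitted with SciPy's Nelder-Mead routine (the downhill simplex of Nelder and Mead, not Box's constrained variant), with the positivity constraint on the diffusion coefficient enforced by searching over log10(D). All values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def release_fraction(t, D, a=1.0, n_terms=50):
    """Fraction released from a sphere of radius a by diffusion
    (truncated series solution of the diffusion equation)."""
    n = np.arange(1, n_terms + 1)[:, None]
    series = np.exp(-n ** 2 * np.pi ** 2 * D * t[None, :] / a ** 2) / n ** 2
    return 1.0 - (6.0 / np.pi ** 2) * series.sum(axis=0)

# synthetic "measurements" generated with D = 1e-3
t = np.linspace(0.1, 100.0, 40)
obs = release_fraction(t, 1e-3)

# fit log10(D) so the diffusion coefficient stays positive, the same
# motivation the text gives for constraining the simplex search
sse = lambda logD: np.sum((release_fraction(t, 10.0 ** logD[0]) - obs) ** 2)
res = minimize(sse, x0=[-2.0], method="Nelder-Mead")
D_fit = 10.0 ** res.x[0]
```

    Reparameterizing is the simplest way to keep a simplex search feasible; Box's complex method instead handles such constraints explicitly by reflecting violating vertices back inside the feasible region.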

  3. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization has been proposed. Pneumatic control valves are widely used in the process industry. A control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to performing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To capture the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proven effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with a known nonlinear structure and unknown parameters can be estimated.
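
    As a rough illustration of the quantification step, the one-parameter Stenman model can be simulated and its stiction band recovered from input-output data by minimizing the output error. A plain grid search stands in here for the ant colony optimizer, and the sinusoidal input and band value are invented for the example.

```python
import math

def stenman(u, d):
    # One-parameter Stenman stiction model: the valve position x follows the
    # controller output u only when the commanded change exceeds the band d.
    x, out = 0.0, []
    for ut in u:
        if abs(ut - x) > d:
            x = ut
        out.append(x)
    return out

# Synthetic loop data with a known stiction band of 0.25 (illustrative)
u = [math.sin(0.1 * k) for k in range(200)]
y_meas = stenman(u, 0.25)

def err(d):
    # Squared error between the candidate model output and the "measured" one
    return sum((a - b) ** 2 for a, b in zip(stenman(u, d), y_meas))

# A plain grid search stands in for the ant colony optimizer of the paper
d_hat = min((i / 100 for i in range(1, 100)), key=err)
print(d_hat)  # close to the true band 0.25
```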

  4. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Full Text Available Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
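
    The agreement metrics used above are straightforward to compute; a minimal sketch with invented stroke counts (both the criterion and the accelerometer values are hypothetical):

```python
def mae(est, ref):
    # Mean absolute error
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(ref)

def mape(est, ref):
    # Mean absolute percentage of error, in percent
    return 100 * sum(abs(e - r) / abs(r) for e, r in zip(est, ref)) / len(ref)

strokes_video = [52, 48, 60, 55]   # hypothetical criterion (video) counts
strokes_accel = [50, 49, 57, 56]   # hypothetical accelerometer estimates
print(mae(strokes_accel, strokes_video))             # → 1.75
print(round(mape(strokes_accel, strokes_video), 1))  # → 3.2
```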

  5. Parameter Estimation of Induction Motors Using Water Cycle Optimization

    Directory of Open Access Journals (Sweden)

    M. Yazdani-Asrami

    2013-12-01

    Full Text Available This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of the exact and approximate induction motor models from nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, the overall system performance. Therefore, it is essential to develop algorithms for the parameter estimation of the induction motor. The fundamental concepts and ideas which underlie the proposed method are inspired by nature, based on the observation of the water cycle and of how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with those of other metaheuristic methods previously applied to the problem, which shows the feasibility and the fast convergence of the proposed approach.
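
    The fitness such an algorithm minimizes can be sketched directly from the equivalent circuit. The torque expression below neglects the magnetizing branch for brevity, and the voltage, supply frequency, and circuit values are illustrative assumptions, not nameplate data from the paper.

```python
import math

def torque(slip, R1, X1, R2, X2, V=230.0, ws=2 * math.pi * 50 / 2):
    # Steady-state torque from the induction motor equivalent circuit
    # (magnetizing branch neglected; ws is the synchronous speed for a
    # hypothetical 2-pole-pair, 50 Hz machine)
    denom = (R1 + R2 / slip) ** 2 + (X1 + X2) ** 2
    return 3 * V ** 2 * (R2 / slip) / (ws * denom)

def objective(params, slips, T_meas):
    # Sum of relative torque errors over the measured slip points: the
    # fitness a water cycle (or any metaheuristic) optimizer would minimize
    return sum(abs((torque(s, *params) - t) / t) for s, t in zip(slips, T_meas))

true = (0.5, 1.2, 0.4, 1.2)                 # (R1, X1, R2, X2), illustrative
slips = [0.02, 0.05, 1.0]
T_meas = [torque(s, *true) for s in slips]  # stand-in for measured torques
print(objective(true, slips, T_meas))       # → 0.0
```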

  6. Spatial dependence clusters in the estimation of forest structural parameters

    Science.gov (United States)

    Wulder, Michael Albert

    1999-12-01

    In this thesis we provide a summary of the methods by which remote sensing may be applied in forestry, while also acknowledging the various limitations which are faced. The application of spatial statistics to high spatial resolution imagery is explored as a means of increasing the information which may be extracted from digital images. A number of high spatial resolution optical remote sensing satellites that are soon to be launched will increase the availability of imagery for the monitoring of forest structure. This technological advancement is timely as current forest management practices have been altered to reflect the need for sustainable ecosystem level management. The low accuracy level at which forest structural parameters have been estimated in the past is partly due to low image spatial resolution. A large pixel is often composed of a number of surface features, resulting in a spectral value which is due to the reflectance characteristics of all surface features within that pixel. In the case of small pixels, a portion of a surface feature may be represented by a single pixel. When a single pixel represents a portion of a surface object, the potential to isolate distinct surface features exists. Spatial statistics, such as the Getis statistic, provide an image processing method to isolate distinct surface features. In this thesis, high spatial resolution imagery sensed over a forested landscape is processed with spatial statistics to combine distinct image objects into clusters, representing individual or groups of trees. Tree clusters are a means to deal with the inevitable foliage overlap which occurs within complex mixed and deciduous forest stands. The generation of image objects, that is, clusters, is necessary to deal with the presence of spectrally mixed pixels. The ability to estimate forest inventory and biophysical parameters from image clusters generated from spatially dependent image features is tested in this thesis. The inventory

  7. Propensity score methods for estimating relative risks in cluster randomized trials with low-incidence binary outcomes and selection bias.

    Science.gov (United States)

    Leyrat, Clémence; Caille, Agnès; Donner, Allan; Giraudeau, Bruno

    2014-09-10

    Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates with unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes with low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on PS, inverse weighting on PS, and stratification on PS), only direct adjustment on the PS fully corrected the bias and moreover had the best statistical properties. PMID:24771662

  8. The potential for regional-scale bias in top-down CO2 flux estimates due to atmospheric transport errors

    Directory of Open Access Journals (Sweden)

    S. M. Miller

    2014-09-01

    Full Text Available Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM–LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6 hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias would be detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistently low net radiation, low-energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. Rather, the extent to which meteorological
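
    An error decorrelation time of the kind quoted above can be estimated from the lag-1 autocorrelation of the error series. The sketch below applies the idea to an AR(1) surrogate series with a known ~2.3 day e-folding time; the series is synthetic, not CAM–LETKF output.

```python
import math, random

random.seed(4)

# AR(1) surrogate for a 6-hourly transport-error series with a known
# e-folding decorrelation time of 2.3 days (synthetic, illustrative)
dt_days = 0.25
phi = math.exp(-dt_days / 2.3)
e = [0.0]
for _ in range(20000):
    e.append(phi * e[-1] + random.gauss(0, 1))

# Lag-1 autocorrelation -> e-folding decorrelation time in days
m = sum(e) / len(e)
var = sum((x - m) ** 2 for x in e)
rho1 = sum((e[k] - m) * (e[k + 1] - m) for k in range(len(e) - 1)) / var
tau = -dt_days / math.log(rho1)
print(round(tau, 1))  # close to 2.3 days
```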

  9. Parameters estimation and measurement of thermophysical properties of liquids

    Energy Technology Data Exchange (ETDEWEB)

    Remy, B.; Degiovanni, A. [Ecole Nationale Superieure et de Mecanique, Univ. Henri Poincare-Nancy 1, Inst. National Polytechnique de Lorraine, Vandoeuvre Les Nancy (France); Lab. d' Energetique et de Mecanique Theorique et Appliquee, Univ. Henri Poincare-Nancy 1, Inst. National Polytechnique de Lorraine, Vandoeuvre Les Nancy (France)

    2005-09-01

    The goal pursued in this paper is to implement an experimental bench allowing the measurement of the thermal diffusivity and conductivity of liquids. The principle of the measurement, based on a pulsed method, is presented. The entire problem is solved through the thermal quadrupoles method. Then, the parameter estimation problem, which is especially difficult in this case due to the presence of the walls of the measurement cell, is described, and an optimal thickness for these walls is defined from a sensitivity study. Finally, we show how it is possible to take into account the radiative transfer within the fluid in the estimation problem, before presenting the set-up and some experimental results. (author)

  10. Estimating seismic demand parameters using the endurance time method

    Institute of Scientific and Technical Information of China (English)

    Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI

    2011-01-01

    The endurance time (ET) method is a time history based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performances are judged based on their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for estimating performance based on structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating performances and target displacements of steel frames. This study will lead to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.

  11. Observer based parallel IM speed and parameter estimation

    Directory of Open Access Journals (Sweden)

    Skoko Saša

    2014-01-01

    Full Text Available A detailed presentation of a modern algorithm for the rotor speed estimation of an induction motor (IM) is shown. The algorithm includes parallel speed and resistance parameter estimation and allows a robust shaft-sensorless operation in diverse conditions, including full load and low speed operation with a large thermal drift. The direct connection between the injected electric signal in the d-axis and the component of the injected rotor flux is pointed out. The algorithm applied in the paper uses the component of the injected rotor flux in the d-axis extracted from the observer state vector and the filtered measured current of one motor phase. By applying the mentioned algorithm, the system converges towards the given reference. [Projekat Ministarstva nauke Republike Srbije, br. III 42004

  12. Optimization-based particle filter for state and parameter estimation

    Institute of Scientific and Technical Information of China (English)

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from that distribution. This algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF), both in efficiency and in estimation precision.
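
    For contrast with the optimized proposal, a minimal bootstrap particle filter for a scalar linear-Gaussian toy model looks like this; the model coefficients and noise levels are invented for the example, and the prior proposal here is exactly the baseline the paper's method refines.

```python
import math, random

random.seed(2)

def pf_estimate(ys, n=500, q=0.5, r=0.5):
    # Minimal bootstrap particle filter for x_k = 0.5 x_{k-1} + v_k,
    # y_k = x_k + w_k, with process/measurement noise std q and r.
    parts = [random.gauss(0, 1) for _ in range(n)]
    estimates = []
    for y in ys:
        parts = [0.5 * x + random.gauss(0, q) for x in parts]        # predict
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, parts)))
        parts = random.choices(parts, weights=weights, k=n)          # resample
    return estimates

# Simulate a short trajectory and filter it
xs, ys, x = [], [], 0.0
for _ in range(50):
    x = 0.5 * x + random.gauss(0, 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0, 0.5))
est = pf_estimate(ys)
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, xs)) / len(xs))
print(round(rmse, 2))  # typically below the measurement noise std of 0.5
```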

  13. Energy parameter estimation in solar powered wireless sensor networks

    KAUST Repository

    Mousa, Mustafa

    2014-02-24

    The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms. © Springer International Publishing Switzerland 2014.

  14. Synchronization and parameter estimations of an uncertain Rikitake system

    International Nuclear Information System (INIS)

    In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is only partially known. To this end we use the master/slave scheme in conjunction with the adaptive control technique. Our control approach consists of proposing a slave system which has to follow asymptotically the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptation control law, until the measurable output errors converge to zero. The convergence analysis is carried out by using Barbalat's lemma. Under this context, uncertainty means that although the system structure is known, only a partial knowledge of the corresponding parameter values is available.

  15. Pedotransfer functions estimating soil hydraulic properties using different soil parameters

    DEFF Research Database (Denmark)

    Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye;

    2008-01-01

    Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... of the hydraulic properties of the studied soils. We found that introducing measured water content as a predictor generally gave lower errors for water retention predictions and higher errors for conductivity predictions. The best of the developed PTFs for predicting hydraulic conductivity was tested against PTFs...

  16. Estimation of the reconstruction parameters for Atom Probe Tomography

    CERN Document Server

    Gault, Baptiste; Stephenson, Leigh T; Moody, Michael P; Muddle, Barry C; Ringer, Simon P

    2015-01-01

    The application of wide field-of-view detection systems to atom probe experiments emphasizes the importance of careful parameter selection in the tomographic reconstruction of the analysed volume, as the sensitivity to errors rises steeply with increases in analysis dimensions. In this paper, a self-consistent method is presented for the systematic determination of the main reconstruction parameters. In the proposed approach, the compression factor and the field factor are determined using geometrical projections from the desorption images. A 3D Fourier transform is then applied to a series of reconstructions and, comparing to the known material crystallography, the efficiency of the detector is estimated. The final results demonstrate a significant improvement in the accuracy of the reconstructed volumes.

  17. Parameter Estimation of Reverse Osmosis Process Model for Desalination

    Directory of Open Access Journals (Sweden)

    Rames C Panda

    2013-10-01

    Full Text Available The present work pertains to the modelling and identification of a seawater desalination system using reverse osmosis. Initially the manipulated variables (feed pressure and recycle ratio) and the measured variables (flowrate, concentration and pH of permeate) are identified from the reverse osmosis desalination system. The model of reverse osmosis was developed from a first-principles approach using the mass balance equation (taking into consideration the effect of concentration polarisation), from which the transfer function model was developed. The parameters of the multi-input multi-output model are identified using the autoregressive exogenous (ARX) linear identification technique. The states of the process model were also estimated using a Kalman filter, and parameters are identified by a nonlinear least squares (NNLS) algorithm. The plant data of the spiral wound module are given as input to all the identification methods. The results obtained from the predicted and the linear models are in good agreement with those obtained for the same plant data.
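
    The ARX identification step can be sketched for a first-order, single-input case: simulate y_k = a·y_{k-1} + b·u_{k-1} + e_k and solve the 2x2 normal equations for the least-squares estimates. The model order and parameter values are illustrative, not the desalination plant's.

```python
import random

random.seed(3)

# Simulate a first-order process y_k = a*y_{k-1} + b*u_{k-1} + e_k
# (model order and parameter values are illustrative)
a_true, b_true, N = 0.8, 0.5, 400
u = [random.uniform(-1, 1) for _ in range(N)]
y = [0.0]
for k in range(1, N):
    y.append(a_true * y[k - 1] + b_true * u[k - 1] + random.gauss(0, 0.01))

# ARX least squares: solve the 2x2 normal equations for (a, b)
Syy = sum(y[k - 1] ** 2 for k in range(1, N))
Suu = sum(u[k - 1] ** 2 for k in range(1, N))
Syu = sum(y[k - 1] * u[k - 1] for k in range(1, N))
Sy = sum(y[k] * y[k - 1] for k in range(1, N))
Su = sum(y[k] * u[k - 1] for k in range(1, N))
det = Syy * Suu - Syu ** 2
a_hat = (Sy * Suu - Su * Syu) / det
b_hat = (Su * Syy - Sy * Syu) / det
print(round(a_hat, 2), round(b_hat, 2))  # → 0.8 0.5
```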

  18. The basel II risk parameters estimation, validation, and stress testing

    CERN Document Server

    Engelmann, Bernd

    2006-01-01

    In the last decade the banking industry has experienced a significant development in the understanding of credit risk. Refined methods were proposed concerning the estimation of key risk parameters like default probabilities. Further, a large volume of literature on the pricing and measurement of credit risk in a portfolio context has evolved. This development was partly reflected by supervisors when they agreed on the new revised capital adequacy framework, Basel II. Under Basel II, the level of regulatory capital depends on the risk characteristics of each credit while a portfolio context is

  19. Singularity of Some Software Reliability Models and Parameter Estimation Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    According to the principle, “The failure data is the basis of software reliability analysis”, we built a software reliability expert system (SRES) by adopting the artificial intelligence technology. By reasoning out the conclusion from the fitting results of failure data of a software project, the SRES can recommend users “the most suitable model” as a software reliability measurement model. We believe that the SRES can overcome the inconsistency in applications of software reliability models well. We report investigation results of singularity and parameter estimation methods of experimental models in SRES.

  20. Parameter estimation using NOON states over a relativistic quantum channel

    OpenAIRE

    Hosler, Dominic; Kok, Pieter

    2013-01-01

    We study the effect of the acceleration of the observer on a parameter estimation protocol using NOON states. An inertial observer, Alice, prepares a NOON state in Unruh modes of the quantum field, and sends it to an accelerated observer, Rob. We calculate the quantum Fisher information of the state received by Rob. We find the counter-intuitive result that the single rail encoding outperforms the dual rail. The NOON states have an optimal $N$ for the maximum information extractable by Rob, g...

  1. Parameter estimation for exponential signals by the quadratic interpolation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, R.C.; Yang, T.Y. [I-Shou Univ., Kaohsiung, Taiwan (China). Dept. of Electrical Engineering; Tsai, J.I. [Kao Yuan Univ., Kaohsiung, Taiwan (China). Dept. of Electronic Engineering; Ou, T.C. [National Sun Yat-Sen Univ., Kaohsiung, Taiwan (China). Dept. of Electrical Engineering

    2008-07-01

    This paper proposed a method to analyze an exponential signal, which can improve the accuracy and convergence of parameter estimation. The method can be used to find the exact frequency, damping, amplitude and phase of the modes. It takes a simulated signal and fits it to a real one. The 3 major processes of the method include an initial value setting, a gradient method and quadratic interpolation. In the initial value setting, the mode parameters are analyzed from the two highest amplitudes of each mode, and good initial estimates can be found. The difference between the simulated and practical signals can be expressed as a least mean square problem. The gradient method provides the initial condition to the quadratic interpolation, which in turn can find the optimal solution in a few iterations. The minimum error search is accomplished by the quadratic interpolation, which improves the search efficiency and reduces iteration time. After a few iterations, the method will obtain the exact harmonic parameters. The method has the advantage of being accurate, since the mode parameters are found by least mean square, which makes the objective function decrease to a minimum. Since approximate values are obtained in the initial value setting, this method offers rapid and excellent convergence. 9 refs., 1 tab., 5 figs.
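
    One step of the quadratic-interpolation search can be written down directly: fit a parabola through three trial points and jump to its vertex. The objective below is a stand-in quadratic, not the paper's least-mean-square residual over the mode parameters.

```python
def quad_min_step(f, x0, x1, x2):
    # Fit a parabola through three trial points and return its vertex:
    # one step of the quadratic-interpolation minimum search.
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

# Stand-in objective: a quadratic with its minimum at 0.3 (illustrative)
f = lambda d: (d - 0.3) ** 2 + 1.0
x = quad_min_step(f, 0.0, 0.2, 0.5)
print(round(x, 6))  # → 0.3
```

    On an exactly quadratic objective the vertex is found in a single step; on the paper's nonlinear residual the step is iterated until it converges.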

  2. Towards physics responsible for large-scale Lyman-$\\alpha$ forest bias parameters

    OpenAIRE

    Cieplak, Agnieszka M.; Slosar, Anže

    2015-01-01

    Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large scale density ($b_\\delta$) and velocity gradient ($b_\\eta$) biases of the Lyman-$\\alpha$ forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal...

  3. Linear Estimation of Location and Scale Parameters Using Partial Maxima

    CERN Document Server

    Papadatos, Nickos

    2010-01-01

    Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima)sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions.

  4. Estimation of Secondary Meteorological Parameters Using Mining Data Techniques

    Directory of Open Access Journals (Sweden)

    Rosabel Zerquera Díaz

    2010-10-01

    Full Text Available This work develops a process of Knowledge Discovery in Databases (KDD) at the Higher Polytechnic Institute José Antonio Echeverría for the Environmental Research group, in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA), in order to obtain a data model to estimate the behavior of secondary weather parameters from surface data. It describes some aspects of Data Mining and its application in the meteorological environment, and also selects and describes the CRISP-DM methodology and the data analysis tool WEKA. The tasks used were attribute selection and regression; the technique was a neural network of the multilayer perceptron type; and the algorithms were CfsSubsetEval, BestFirst and MultilayerPerceptron. Estimation models are obtained for the secondary meteorological parameters: height of the convective mixed layer, height of the mechanical mixed layer and convective velocity scale, necessary for the study of patterns of dispersion of pollutants in Cujae's area. The results set a precedent for future research and for the continuity of this work in its first stage.

  5. Estimating Friction Parameters in Reaction Wheels for Attitude Control

    Directory of Open Access Journals (Sweden)

    Valdemir Carrara

    2013-01-01

    Full Text Available The ever-increasing use of artificial satellites in the study of both terrestrial and space phenomena demands a search for increasingly accurate and reliable pointing systems. It is common nowadays to employ reaction wheels for attitude control, which provide a wide range of torque magnitude, high reliability, and little power consumption. However, bearing friction causes the wheel response to be nonlinear, which may compromise the stability and precision of the control system as a whole. This work presents a characterization of a typical reaction wheel with 0.65 Nms of maximum angular momentum storage, in order to estimate its friction parameters. A friction model was used that takes into account Coulomb friction, viscous friction, and static friction, according to the Stribeck formulation. The parameters were estimated by means of a nonlinear batch least squares procedure, from data raised experimentally. The results have shown wide agreement with the experimental data and were also close to a deterministic model previously obtained for this wheel. This model was then employed in a Dynamic Model Compensator (DMC) control, which successfully reduced the attitude steady state error of an instrumented one-axis air-bearing table.
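
    A Stribeck-type friction model of the kind described can be sketched as follows; the parameter values are illustrative, not the estimates reported for this wheel.

```python
import math

def friction_torque(w, Tc=0.02, Tv=0.005, Ts=0.03, ws=0.5):
    # Coulomb + viscous + static friction with a Stribeck transition at
    # characteristic speed ws; all parameter values are illustrative.
    if w == 0:
        return 0.0
    stribeck = (Ts - Tc) * math.exp(-(abs(w) / ws) ** 2)
    return math.copysign(Tc + stribeck, w) + Tv * w

print(round(friction_torque(0.001), 4))  # → 0.03 (near the static level Ts)
print(round(friction_torque(10.0), 3))   # → 0.07 (Coulomb + viscous dominate)
```

    In a least-squares fit the four parameters (Tc, Tv, Ts, ws) would be adjusted so that this torque matches the measured wheel deceleration data.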

  6. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...

  7. Hidden Markov Models approach used for life parameters estimations

    International Nuclear Information System (INIS)

    In modern electronic and electrical application design it is very important to be able to predict the actual product life or, at least, to be able to provide the end user with a reasonable estimate of that parameter. It is important to be able to define availability as a key parameter because, although other performance indicators exist (such as the mean time between failures, MTBF, or the mean time to failure, MTTF), they are often misused. To study the availability of an electrical, electronic or electromechanical system, different methods can be used. The most common one relies on memory-less Markovian state space analysis, because little information is needed and, under simple hypotheses, it is possible to obtain the steady-state value of the availability. In this paper the authors, starting from the classical approach of Markov models, introduce an extension known as the Hidden Markov Models approach to overcome the limits of the previous one in estimating the system availability performance over time. Such a technique can be used to improve the logistic aspects connected with optimal maintenance planning. The provided dissertation in general can be used in different contexts without losing generality
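
    Before moving to hidden states, the memory-less two-state case gives the steady-state availability in closed form: A = mu / (lambda + mu) for failure rate lambda and repair rate mu. The rates below are illustrative, not drawn from the paper.

```python
# Steady-state availability of a two-state (up/down) Markov model with
# failure rate lam and repair rate mu (illustrative per-hour values)
lam, mu = 1e-4, 1e-2
availability = mu / (lam + mu)
print(round(availability, 4))  # → 0.9901
```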

  8. Flower Pollination Algorithm based solar PV parameter estimation

    International Nuclear Information System (INIS)

    Highlights: • Flower Pollination Algorithm (FPA) is proposed for estimating the parameters of solar modules. • The performance of the proposed extraction technique is tested using three different sources of data. • The proposed FPA provides the best performance among other recent techniques. • It is recommended as the fastest and the most accurate optimization technique. - Abstract: Developing a highly accurate simulation technique for Photovoltaic (PV) systems prior to installation is very important to increase the overall efficiency of using such systems. Providing a more accurate optimization algorithm to extract the optimal parameters of PV models is therefore continuously required. The Flower Pollination Algorithm (FPA) is proposed as a new optimization method to extract the optimal parameters of the single diode and double diode models. The proposed extraction technique is tested using three different sources of data. The first source is the data reported in the previous literature, while the second source is the experimental data measured at the laboratory. The third source is the experimental data obtained from the data sheets of different types of solar modules. The FPA results are compared with the results of the previous literature to validate the performance of the proposed technique. The results prove that FPA achieves the least error between the extracted and the measured data relative to the other techniques over the entire ranges of different environmental conditions, especially at low irradiation levels. Moreover, FPA outperforms the other techniques from the point of view of both convergence speed and convergence time. In addition, a comparison of the (I–V) characteristics of the parameters extracted by FPA with those of the experimental data shows negligible deviation between them. That is why the Flower Pollination Algorithm is recommended as the fastest and the most accurate optimization technique for the optimal parameters

  9. Bayesian Approach in Estimation of Scale Parameter of Nakagami Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-08-01

    Full Text Available Nakagami distribution is a flexible lifetime distribution that may offer a good fit to some failure data sets. It has applications in the attenuation of wireless signals traversing multiple paths, deriving unit hydrographs in hydrology, medical imaging studies, etc. In this research, we obtain Bayesian estimators of the scale parameter of the Nakagami distribution. For the posterior distribution of this parameter, we consider Uniform, Inverse Exponential and Levy priors. The three loss functions taken up are the Squared Error Loss Function (SELF), Quadratic Loss Function and Precautionary Loss Function (PLF). The performance of an estimator is assessed on the basis of its relative posterior risk. Monte Carlo simulations are used to compare the performance of the estimators. It is discovered that the PLF produces the least posterior risk when the Uniform prior is used, while SELF is the best when the Inverse Exponential and Levy priors are used.
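The posterior-risk comparison described above can be sketched with a small Monte Carlo experiment. The sketch below assumes the shape parameter m is known and uses a uniform prior on the scale, for which the posterior is inverse-gamma; all numeric values are illustrative, not taken from the paper:

```python
import math
import random

def sample_nakagami(m, omega, n, rng):
    # If Y ~ Gamma(shape=m, scale=omega/m), then sqrt(Y) ~ Nakagami(m, omega).
    return [math.sqrt(rng.gammavariate(m, omega / m)) for _ in range(n)]

def mle_scale(xs):
    # Maximum-likelihood estimator of omega: the mean of x^2 (for any m).
    return sum(x * x for x in xs) / len(xs)

def bayes_scale_self(xs, m):
    # With m known and a uniform prior on omega, the posterior of omega is
    # inverse-gamma(n*m - 1, m*S) with S = sum(x^2); its mean, m*S/(n*m - 2),
    # is the Bayes estimator under the squared error loss function (SELF).
    n = len(xs)
    S = sum(x * x for x in xs)
    return m * S / (n * m - 2)

def mc_mse(estimator, m, omega, n, reps, seed=0):
    # Monte Carlo estimate of an estimator's mean squared error.
    rng = random.Random(seed)
    return sum((estimator(sample_nakagami(m, omega, n, rng)) - omega) ** 2
               for _ in range(reps)) / reps

m, omega = 2.0, 3.0
print(mc_mse(mle_scale, m, omega, n=30, reps=1000),
      mc_mse(lambda xs: bayes_scale_self(xs, m), m, omega, n=30, reps=1000))
```

The same loop extends directly to other priors and loss functions by swapping the estimator passed to `mc_mse`.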

  10. Estimation of genetic parameters for reproductive traits in alpacas.

    Science.gov (United States)

    Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P

    2015-12-01

    One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is the low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationship with fiber and morphological traits. A dataset from the Pacomarca experimental farm, collected between 2000 and 2014, was used. The numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1704, 854, 19,770, 5874, 4290 and 934. The pedigree consisted of 7742 animals. Regarding reproductive traits, the model of analysis included additive and residual random effects for all traits, and also a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all the reproductive traits and also age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively for HU and SU, were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderate favorable genetic correlations were found between reproductive and either fiber or morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. PMID:26490188

  11. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    Science.gov (United States)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
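The corrupting effect of input errors that SIP is designed to counter can be reproduced in a toy setting. The sketch below calibrates a hypothetical single linear reservoir (an illustrative stand-in, not the authors' hydrosystem model) against runoff, once with the true rainfall and once with a corrupted series whose temporal pattern is wrong and whose nonzero events are partly missed:

```python
import random

def reservoir_runoff(rain, k):
    # Single linear reservoir: storage S, outflow Q = k * S per time step.
    S, out = 0.0, []
    for P in rain:
        S += P
        Q = k * S
        S -= Q
        out.append(Q)
    return out

def fit_k(rain, runoff_obs, grid):
    # Least-squares grid search for the recession parameter k.
    def sse(k):
        sim = reservoir_runoff(rain, k)
        return sum((a - b) ** 2 for a, b in zip(sim, runoff_obs))
    return min(grid, key=sse)

rng = random.Random(3)
true_k = 0.3
true_rain = [max(0.0, rng.gauss(2.0, 2.0)) for _ in range(200)]
runoff = reservoir_runoff(true_rain, true_k)

# "Observed" rain from a distant gauge: missed events and distorted amounts.
obs_rain = [0.0 if rng.random() < 0.3 else P * rng.uniform(0.5, 1.5)
            for P in true_rain]

grid = [i / 100 for i in range(5, 96)]
k_true_input = fit_k(true_rain, runoff, grid)
k_noisy_input = fit_k(obs_rain, runoff, grid)
print("fitted k, true input:", k_true_input, " corrupted input:", k_noisy_input)
```

With the true forcing the true k is recovered exactly; with the corrupted forcing the fitted parameter absorbs the input error, which is the bias SIP aims to remove by inferring the input itself.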

  12. Learn-as-you-go acceleration of cosmological parameter estimates

    Science.gov (United States)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
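A minimal sketch of the learn-as-you-go idea (not the Cosmo++ implementation): cache exact likelihood evaluations, emulate by nearest neighbour when a cached point is close enough, and fall back to the exact calculation when the estimate is deemed unreliable. The likelihood, threshold and trajectory below are all invented for illustration:

```python
import math

def exact_loglike(theta):
    # Stand-in for a slow exact likelihood (e.g. a full Boltzmann-code call).
    return -0.5 * ((theta - 1.3) / 0.2) ** 2

class LearnAsYouGo:
    # Nearest-neighbour emulator that trains itself during sampling:
    # close to a cached exact evaluation -> return the cached value;
    # otherwise the estimate is unreliable -> compute exactly and learn.
    def __init__(self, exact, radius=0.05):
        self.exact = exact
        self.radius = radius
        self.cache = {}
        self.n_exact = 0

    def __call__(self, theta):
        if self.cache:
            near = min(self.cache, key=lambda t: abs(t - theta))
            if abs(near - theta) < self.radius:
                return self.cache[near]       # emulated value
        val = self.exact(theta)               # exact fallback
        self.cache[theta] = val
        self.n_exact += 1
        return val

emu = LearnAsYouGo(exact_loglike)
thetas = [1.0 + 0.001 * i for i in range(600)]    # a mock MCMC trajectory
vals = [emu(t) for t in thetas]
print(emu.n_exact, "exact calls out of", len(thetas))
```

In the paper's scheme the emulation error is additionally modelled and propagated into the posterior; here the fixed radius plays the role of that error control.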

  13. On-line estimation of concentration parameters in fermentation processes

    Institute of Scientific and Technical Information of China (English)

    XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he

    2005-01-01

    It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, pose almost insurmountable problems to engineers. A novel software sensor is proposed to make more effective use of those measurements that are already available, enabling improvements in fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP), with the expectation maximization (EM) algorithm employed for parameter estimation of the mixture of models. The mixture model can alleviate the computational complexity of GP and also accommodate changes of operating conditions in fermentation processes, i.e., it can examine which types of process knowledge are most relevant for local models at specific operating points of the process and then combine them into a global one. Demonstrated by on-line estimation of yeast concentration in the fermentation industry as an example, it is shown that soft-sensor-based state estimation is a powerful technique for both enhancing the automatic control performance of biological systems and implementing on-line monitoring and optimization.
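The parameter-estimation core, EM for a mixture model, can be sketched in one dimension with Gaussian components standing in for the Gaussian-process experts (an illustrative simplification, not the paper's soft sensor; all data are synthetic):

```python
import math
import random

def em_gmm_1d(xs, n_iter=200):
    # EM for a two-component 1-D Gaussian mixture, the parameter-estimation
    # core of a mixture-of-experts style soft sensor.
    mu = [min(xs), max(xs)]          # deterministic initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation.
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances in closed form.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-9
    return w, mu, var

rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(300)]
        + [rng.gauss(4.0, 0.5) for _ in range(300)])
w, mu, var = em_gmm_1d(data)
print(sorted(mu))
```

Each component here corresponds to one "local model" of an operating regime; the responsibilities are what would weight the local experts into a global prediction.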

  14. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

    The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples … physically realizable; this is obtained by utilizing the geometry of the structures. Determining the shapes, number of structures and proportions for an m-membered ring molecule is of interest, since these quantities determine the chemical properties. The second part of this thesis deals with parameter … restricted to be a polynomial. Through a simulation study we compare for the CIR process the obtained estimator with an estimator derived from utilizing the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well.

  15. Estimation of Shower Parameters in Wavefront Sampling Technique

    CERN Document Server

    Chitnis, V R

    2001-01-01

    Wavefront sampling experiments record arrival times of Čerenkov photons with high precision at various locations in the Čerenkov light pool using a distributed array of telescopes. It was shown earlier that this photon front can be fitted with a spherical surface traveling at the speed of light and originating from a single point on the shower axis. The radius of curvature of the spherical shower front ($R$) is approximately equal to the height of shower maximum above the observation level. For a given primary species, it is also found that $R$ varies with the primary energy ($E$), and this provides a method of estimating the primary energy. In general, one can estimate the arrival times at each telescope using the radius of curvature, the arrival direction of the primary and the core location. This, when compared with the data, enables us to estimate the above parameters for each shower. This method of obtaining the arrival direction alleviates the difficulty in the form of systematics arising out of the plane wavefront approx...
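The fit can be sketched for the simplified case of a vertical shower with telescopes in the ground plane, where the arrival time at core distance r is t0 + sqrt(R² + r²)/c (geometry, telescope layout and all numbers are illustrative assumptions):

```python
import math

C = 0.2998  # speed of light in m/ns

def arrival_times(R, t0, radii):
    # Spherical front of radius R centred at height R on a vertical axis:
    # the photon reaching ground distance r has travelled sqrt(R^2 + r^2).
    return [t0 + math.sqrt(R * R + r * r) / C for r in radii]

def fit_radius(radii, times, R_grid):
    # For each trial R the optimal t0 is the mean time residual; keep the R
    # with the smallest sum of squared residuals.
    best = None
    for R in R_grid:
        d = [math.sqrt(R * R + r * r) / C for r in radii]
        t0 = sum(t - di for t, di in zip(times, d)) / len(times)
        sse = sum((t - t0 - di) ** 2 for t, di in zip(times, d))
        if best is None or sse < best[0]:
            best = (sse, R)
    return best[1]

radii = [20.0 * i for i in range(1, 9)]       # telescopes 20..160 m from core
times = arrival_times(5000.0, 100.0, radii)   # shower maximum at ~5 km height
R_hat = fit_radius(radii, times, [4000.0 + 50.0 * i for i in range(41)])
print(R_hat)
```

In the full method the core location and arrival direction are fitted alongside R; restricting to the on-axis case keeps the least-squares structure visible.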

  16. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
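The recommended procedure, permutation testing of cross-validated accuracy, can be sketched with a nearest-centroid classifier on synthetic one-dimensional data (classifier, fold scheme and data are all illustrative choices, not those of the study):

```python
import random

def cv_accuracy(X, y, folds=4):
    # Cross-validated accuracy of a nearest-centroid classifier on 1-D data.
    n = len(X)
    correct = 0
    for f in range(folds):
        train = [i for i in range(n) if i % folds != f]
        test = [i for i in range(n) if i % folds == f]
        cents = {}
        for label in set(y):
            pts = [X[i] for i in train if y[i] == label]
            cents[label] = sum(pts) / len(pts)
        for i in test:
            pred = min(cents, key=lambda c: abs(X[i] - cents[c]))
            correct += pred == y[i]
    return correct / n

def permutation_pvalue(X, y, n_perm=200, seed=0):
    # Null distribution: accuracy after shuffling the labels; the p-value is
    # the fraction of permutations scoring at least as well as the real labels.
    rng = random.Random(seed)
    obs = cv_accuracy(X, y)
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        rng.shuffle(yp)
        hits += cv_accuracy(X, yp) >= obs
    return obs, (hits + 1) / (n_perm + 1)

rng = random.Random(7)
X = ([rng.gauss(0.0, 1.0) for _ in range(20)]
     + [rng.gauss(2.0, 1.0) for _ in range(20)])
y = [0] * 20 + [1] * 20
obs, p = permutation_pvalue(X, y)
print("CV accuracy:", obs, " permutation p-value:", p)
```

The key point of the paper is that the null distribution is built by rerunning the full cross-validation on permuted labels, rather than assuming a binomial law for the fold-wise counts.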

  17. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson's disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420

  18. Statistical Parameter Estimation in Ultrasound Backscattering from Tissue Mimicking Media.

    Science.gov (United States)

    Chen, Jian-Feng

    Several tissue characterization parameters, including the effective scatterer number density and the backscatter coefficient, were derived from the statistical properties of ultrasonic echo signals. The effective scatterer number density is the actual scatterer number density in a medium multiplied by a frequency-dependent factor that depends on the differential scattering cross-sections of all scatterers. The method described in this thesis for determining the scatterer number density explicitly retains both the temporal nature of the data acquisition and the properties of the ultrasound field in the data reduction. Moreover, it accounts for the possibility that different sets of scatterers may dominate the echo signal at different frequencies. The random processes involved in forming ultrasound echo signals from random media give rise to an uncertainty in the estimated effective scatterer number density. This uncertainty is evaluated using error propagation. The statistical uncertainty depends on the effective number of scatterers contributing to the segmented echo signal, increasing when the effective number of scatterers increases. Tests of the scatterer number density data reduction method and the statistical uncertainty estimator were done using phantoms with known ultrasound scattering properties. Good agreement was found between measured values and those calculated from first principles. The properties of the non-Gaussian and non-Rayleigh parameters of ultrasound echo signals are also studied. Both parameters depend on the measurement system, including the transducer field and pulse frequency content, as well as on the medium's properties. The latter is expressed in terms of the scatterer number density and the second and fourth moments of the medium's scattering function. A simple relationship between the non-Gaussian and non-Rayleigh parameters is derived and verified experimentally. Finally, a reference phantom method is proposed for measuring the

  19. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study

    DEFF Research Database (Denmark)

    Wood, L.; Egger, M.; Gluud, L.L.;

    2008-01-01

    OBJECTIVE: To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome. DESIGN: Combined analysis of data from three meta-epidemiological studies based o...

  20. Improving the estimation of fractional-cycle biases for ambiguity resolution in precise point positioning

    Science.gov (United States)

    Geng, Jianghui; Shi, Chuang; Ge, Maorong; Dodson, Alan H.; Lou, Yidong; Zhao, Qile; Liu, Jingnan

    2012-08-01

    Ambiguity resolution dedicated to a single global positioning system (GPS) station can improve the accuracy of precise point positioning. In this process, the estimation accuracy of the narrow-lane fractional-cycle biases (FCBs), which destroy the integer nature of undifferenced ambiguities, is crucial to the ambiguity-fixed positioning accuracy. In this study, we hence propose the improved narrow-lane FCBs derived from an ambiguity-fixed GPS network solution, rather than the original (i.e. previously proposed) FCBs derived from an ambiguity-float network solution. The improved FCBs outperform the original FCBs by ensuring that the resulting ambiguity-fixed daily positions coincide in nature with the state-of-the-art positions generated by the International GNSS Service (IGS). To verify this improvement, 1 year of GPS measurements from about 350 globally distributed stations were processed. We find that the original FCBs differ more from the improved FCBs when fewer stations are involved in the FCB estimation, especially when the number of stations is less than 20. Moreover, when comparing the ambiguity-fixed daily positions with the IGS weekly positions for 248 stations through a Helmert transformation, for the East component, we find that on 359 days of the year the daily RMS of the transformed residuals based on the improved FCBs is smaller by up to 0.8 mm than those based on the original FCBs, and the mean RMS over the year falls evidently from 2.6 to 2.2 mm. Meanwhile, when using the improved rather than the original FCBs, the RMS of the transformed residuals for the East component of 239 stations (i.e. 96.4% of all 248 stations) is clearly reduced by up to 1.6 mm, especially for stations located within a sparse GPS network. Therefore, we suggest that narrow-lane FCBs should be determined with ambiguity-fixed, rather than ambiguity-float, GPS network solutions.
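The core idea, that undifferenced float ambiguities share a common fractional-cycle bias, can be sketched with simulated data; a circular mean of the fractional parts recovers the FCB despite the wrap-around at the integer-cycle boundary (a toy illustration, not the authors' network solution):

```python
import math
import random

def estimate_fcb(float_ambiguities):
    # The narrow-lane FCB is common to all ambiguities, so their fractional
    # parts cluster around it (mod 1 cycle); a circular mean handles the
    # wrap-around at the 0/1-cycle boundary.
    s = sum(math.sin(2 * math.pi * a) for a in float_ambiguities)
    c = sum(math.cos(2 * math.pi * a) for a in float_ambiguities)
    return (math.atan2(s, c) / (2 * math.pi)) % 1.0

rng = random.Random(5)
true_fcb = 0.93   # cycles, deliberately near the wrap-around
floats = [rng.randint(-30, 30) + true_fcb + rng.gauss(0.0, 0.05)
          for _ in range(200)]
fcb_hat = estimate_fcb(floats)
print(fcb_hat)
```

Subtracting the estimated FCB restores the (near-)integer nature of the ambiguities, which is what makes fixing them possible in precise point positioning; the paper's contribution is to derive the FCBs from an ambiguity-fixed rather than ambiguity-float network solution.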

  1. A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.

    Science.gov (United States)

    Shoemaker, David M.

    Described and listed herein, with concomitant sample input and output, is the Fortran IV program which estimates parameters, and a standard error of estimate for each parameter, for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)
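The jackknife approximation of standard errors used by the program can be sketched in a few lines (a generic leave-one-out jackknife, not the Fortran IV code itself):

```python
import math

def jackknife_se(xs, stat):
    # Leave-one-out jackknife standard error of an arbitrary statistic.
    n = len(xs)
    loo = [stat(xs[:i] + xs[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((v - mean_loo) ** 2 for v in loo))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
se = jackknife_se(xs, lambda v: sum(v) / len(v))
print(se)
# For the sample mean the jackknife reproduces s / sqrt(n) exactly.
```

The same `jackknife_se` works for statistics with no closed-form standard error, which is precisely why the program pairs it with matrix-sampling estimates.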

  2. DriftLess™, an innovative method to estimate and compensate for the biases of inertial sensors

    NARCIS (Netherlands)

    Ruizenaar, M.G.H.; Kemp, R.A.W.

    2014-01-01

    In this paper a method is presented that allows for bias compensation of low-cost MEMS inertial sensors. It is based on the use of two sets of inertial sensors and a rotation mechanism that physically rotates the sensors in an alternating fashion. After signal processing, the biases of both sets of

  3. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and allowing the design of complex beampatterns. To avoid jamming and enhance the signal to noise ratio, it is often interesting to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by minimizing a convex optimization problem, and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of Gaussian random variables is divided into M regions, where M is the alphabet size of the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. The second part of this thesis covers the topic of target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance. However, it requires a two dimensional search problem. Therefore, its computational complexity is prohibitively high. So, we propose a reduced-complexity, optimum-performance algorithm which allows the two dimensional fast Fourier transform to jointly estimate the spatial location

  4. Estimation of the Alpha Factor Parameters Using the ICDE Database

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Hwang, M. J.; Han, S. H

    2007-04-15

    Detailed common cause failure (CCF) analysis generally needs data on the CCF events of other nuclear power plants, because CCF events rarely occur. KAERI has participated in the International Common Cause Failure Data Exchange (ICDE) project to obtain data on CCF events. The operation office of the ICDE project sent the CCF event data for emergency diesel generators (EDGs) to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of the EDGs of Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When the offsite power and the two onsite EDGs are not available, one alternate AC (AAC) diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probabilities for the cases where the three EDGs were assumed to be identically designed, and for those where they were assumed not to be. For the cases where the three EDGs were assumed to be identically designed, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively, and the triple CCF probabilities as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experience of 'fails to run', Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability: the estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the case where the three EDGs are identical is higher than that where the three EDGs are different; the estimated system unavailability of the former was 3.4% higher than that of the latter. As a future study, a computerization of the estimation of the CCF parameters will be performed.
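The alpha-factor estimation step can be sketched from event counts (the formula is the standard alpha-factor model for a common cause group of m components; the counts and Qt below are hypothetical, not the ICDE data):

```python
from math import comb

def alpha_factors(event_counts):
    # event_counts[k-1] = number of events in which exactly k of the m
    # components of the common cause group failed.
    total = sum(event_counts)
    return [n / total for n in event_counts]

def ccf_probabilities(event_counts, Qt):
    # Standard alpha-factor model for a group of m components:
    #   Q_k = k / C(m-1, k-1) * alpha_k / alpha_t * Qt,
    # with alpha_t = sum(k * alpha_k) and Qt the total failure probability.
    m = len(event_counts)
    a = alpha_factors(event_counts)
    at = sum((k + 1) * a[k] for k in range(m))
    return [(k + 1) / comb(m - 1, k) * a[k] / at * Qt for k in range(m)]

counts = [950, 40, 10]   # hypothetical 1-, 2- and 3-component failure events
alphas = alpha_factors(counts)
Q = ccf_probabilities(counts, Qt=5e-3)
print(alphas, Q)
```

Pooling event counts across plants, as the ICDE database allows, is what makes the alpha estimates usable despite the rarity of CCF events at any single plant.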

  5. GENETIC AND NON-GENETIC PARAMETER ESTIMATES OF DAIRY CATTLE IN ETHIOPIA: A REVIEW

    Directory of Open Access Journals (Sweden)

    A. TESFA

    2014-07-01

    Full Text Available Ethiopia is endowed with diverse ecosystems inhabited by an abundant diversity of animal, plant and microbial genetic resources, owing to its diverse agro-ecology. The productivity of any species depends largely on its reproductive performance. Reproductive performance does not usually refer to a single trait but to a combination of many traits, and is an indicator of reproductive efficiency and of the rate of genetic progress in both selection and crossbreeding programs. The main indicators of reproductive performance reported by many authors are age at first service, age at first calving, calving interval, days open and number of services per conception. Non-genetic factors like sex of calf, season, year and parity have significant effects on reproductive performance traits. Knowledge of these factors and their influence on cattle performance is important in management and selection decisions. Development of breeding objectives and effective genetic improvement programs requires knowledge of the genetic variation among economically important traits and accurate estimates of the heritability, repeatability and genetic correlations of these traits. The estimates of genetic parameters are helpful in determining the method of selection, predicting direct and correlated responses to selection, and choosing a breeding system to be adopted for future improvement as well as genetic gains. The reproductive performance of Ethiopian indigenous and exotic breeds producing in the country is low due to various environmental factors and the absence of integrated records in the sector, which leads to biased results and recommendations in genetic parameter estimates. Selection and the design of breeding programs for improving the production and productivity of indigenous breeds, while keeping their native potentials, should be based on the results obtained from

  6. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer

    Science.gov (United States)

    Fetterly, Kenneth A.; Favazza, Christopher P.

    2016-08-01

    Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for the known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5–4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6–240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was attributed to temporally variable non-stationary noise; with these methods, the object detectability d′_o cannot be directly determined independent of the non-stationary-noise contribution d′_ns. However, methods to estimate d′_ns independent of d′_o were developed. In accordance with the theory, d′_ns was subtracted from experimental estimates of d′_β, providing an unbiased estimate of d′_o. Estimates of d′_o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′_o estimates which are accurate and precise for d′_o ≳ 1.0. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and to correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  7. Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures

    DEFF Research Database (Denmark)

    Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard

    2013-01-01

    On-going research is concerned with the losses that occur at junctions in lightweight building structures. Recently the authors have investigated the underlying uncertainties related to both measurement, material and craftsmanship of timber junctions by means of repeated modal testing on a number of nominally identical test subjects. However, the literature on modal testing of timber structures is rather limited, and the applicability and robustness of different curve fitting methods for modal analysis of such structures is not described in detail. The aim of this paper is to investigate the robustness of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex.

  8. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
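A Fisher matrix forecast can be sketched with a toy two-parameter linear model standing in for the survey likelihood (purely illustrative; the real forecast is built from the $C_\ell(z_1,z_2)$ spectra):

```python
def fisher_linear(xs, sigma):
    # Fisher matrix for y = a + b*x with independent Gaussian errors sigma:
    # F_ij = sum_k (dy_k/dtheta_i)(dy_k/dtheta_j) / sigma^2.
    Faa = len(xs) / sigma ** 2
    Fab = sum(xs) / sigma ** 2
    Fbb = sum(x * x for x in xs) / sigma ** 2
    return Faa, Fab, Fbb

def marginalised_errors(Faa, Fab, Fbb):
    # Forecast 1-sigma errors are square roots of the diagonal of F^{-1}.
    det = Faa * Fbb - Fab * Fab
    return (Fbb / det) ** 0.5, (Faa / det) ** 0.5

xs = [0.1 * i for i in range(20)]
Faa, Fab, Fbb = fisher_linear(xs, sigma=0.5)
sa, sb = marginalised_errors(Faa, Fab, Fbb)
print(sa, sb)
# Marginalising over the other parameter inflates each error relative to the
# fixed-parameter (conditional) value 1/sqrt(F_ii).
```

For a survey forecast the derivatives of the observables with respect to each cosmological parameter replace the constant and linear derivatives used here.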

  9. Estimating Phenomenological Parameters in Multi-Assets Markets

    Science.gov (United States)

    Raffaelli, Giacomo; Marsili, Matteo

    Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.

  10. Multiphase flow parameter estimation based on laser scattering

    Science.gov (United States)

    Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.

    2015-07-01

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.

  11. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  12. Enhancing parameter precision of optimal quantum estimation by quantum screening

    Science.gov (United States)

    Jiang, Huang; You-Neng, Guo; Qin, Xie

    2016-02-01

    We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system that inhibits the decoherence processes and resets the excited state to the ground state. Compared with the case without quantum screening, the results show that the quantum Fisher information with quantum screening maintains a larger value during the evolution. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).

  13. Multiphase flow parameter estimation based on laser scattering

    International Nuclear Information System (INIS)

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time. (paper)

  14. Analysis of Wave Directional Spreading by Bayesian Parameter Estimation

    Institute of Scientific and Technical Information of China (English)

    钱桦; 莊士贤; 高家俊

    2002-01-01

    A spatial array of wave gauges installed on an observation platform has been designed and arranged to measure the local features of winter monsoon directional waves off the Taishi coast of Taiwan. A new method, named the Bayesian Parameter Estimation Method (BPEM), is developed and adopted to determine the main direction and the directional spreading parameter of directional spectra. The BPEM can be considered as a regression analysis that finds the maximum joint probability of parameters which best approximates the observed data from the Bayesian viewpoint. The analysis of field wave data demonstrates the strong dependency of the characteristics of normalized directional spreading on the wave age. The Mitsuyasu-type empirical formula of the directional spectrum is therefore modified to be representative of the monsoon wave field. Moreover, it is suggested that Smax can be expressed as a function of wave steepness, with the values of Smax decreasing with increasing steepness. Finally, a local directional spreading model, which is simple to utilize in engineering practice, is proposed.

  15. ESTIMATION OF THE VISCOSITY PARAMETER IN ACCRETION DISKS OF BLAZARS

    International Nuclear Information System (INIS)

    For a sample of optically monitored blazars whose typical minimum variability timescale is about 1 hr, we estimate a mean value of the viscosity parameter in their accretion disks. We assume that optical variability on timescales of hours is caused by local instabilities in the inner accretion disk. Comparing the observed variability timescales to the thermal timescales of α-disk models, we obtain constraints on the viscosity parameter (α) and the intrinsic Eddington ratio (L_in/L_Edd = ṁ): 0.104 ≤ α ≤ 0.337 and 0.0201 ≤ L_in/L_Edd ≤ 0.1646. These narrow ranges suggest that all these blazars are observed in a single state, and thus provide new evidence for the unification of flat-spectrum radio quasars and BL Lacs into a single blazar population. The values of α we derive are consistent with the theoretical expectation α ∼ 0.1-0.3 of Narayan and McClintock for advection-dominated accretion flow and are also compatible with the predictions of Pessah et al. (α ≥ 0.1) from numerical simulations in which magnetohydrodynamic turbulence is driven by the saturated magnetorotational instability.

  16. Cosmological parameter estimation and Bayesian model comparison using VSA data

    CERN Document Server

    Slosar, A; Cleary, K; Davies, R D; Davis, R J; Dickinson, C; Genova-Santos, R; Grainge, K; Gutíerrez, C M; Hafez, Y A; Hobson, M P; Jones, M E; Kneissl, R; Lancaster, K; Lasenby, A; Leahy, J P; Maisinger, K; Marshall, P J; Pooley, G G; Rebolo, R; Rubiño-Martín, J A; Rusholme, B A; Saunders, R D E; Savage, R; Scott, P F; Molina, P J S; Taylor, A C; Titterington, D; Waldram, E M; Watson, R A; Wilkinson, A; Slosar, Anze; Carreira, Pedro; Cleary, Kieran; Davies, Rod D.; Davis, Richard J.; Dickinson, Clive; Genova-Santos, Ricardo; Grainge, Keith; Gutierrez, Carlos M.; Hafez, Yaser A.; Hobson, Michael P.; Jones, Michael E.; Kneissl, Rudiger; Lancaster, Katy; Lasenby, Anthony; Maisinger, Klaus; Marshall, Phil J.; Pooley, Guy G.; Rebolo, Rafael; Rubino-Martin, Jose Alberto; Rusholme, Ben; Saunders, Richard D. E.; Savage, Richard; Scott, Paul F.; Molina, Pedro J. Sosa; Taylor, Angela C.; Titterington, David; Waldram, Elizabeth; Watson, Robert A.; Wilkinson, Althea

    2003-01-01

    We constrain the basic cosmological parameters using the first observations by the Very Small Array (VSA) in its extended configuration, together with existing cosmic microwave background data and other cosmological observations. We estimate cosmological parameters for four different models of increasing complexity. In each case, careful consideration is given to implied priors and the Bayesian evidence is calculated in order to perform model selection. We find that the data are most convincingly explained by a simple flat Lambda-CDM cosmology without tensor modes. In this case, combining just the VSA and COBE data sets yields the 68 per cent confidence intervals Omega_b h^2=0.034 (+0.007, -0.007), Omega_dm h^2 = 0.18 (+0.06, -0.04), h=0.72 (+0.15,-0.13), n_s=1.07 (+0.06,-0.06) and sigma_8=1.17 (+0.25, -0.20). The most general model considered includes spatial curvature, tensor modes, massive neutrinos and a parameterised equation of state for the dark energy. In this case, by combining all recent cosmological...

  17. Hierarchical Monte-Carlo approach to bias estimation for criticality safety calculations - 042

    International Nuclear Information System (INIS)

    We present a hierarchical Monte Carlo method capable of predicting the computational bias for the neutron multiplication factor evaluation within a criticality safety analysis. Bias predictions are based on evaluations of representative sets of criticality experiments taking into account their uncertainties and also correlations between uncertainties of different experiments. The presented procedure relates the k_eff evaluations for the experiments and their covariance matrix to a computational bias prediction for the application case using trending techniques that take the covariance matrix fully into account. Additionally, we present a method to determine proper sets of explanatory variables needed for the trending procedure. (authors)
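    The trending step described above can be illustrated with a generalized-least-squares fit that takes a full inter-experiment covariance matrix into account. All numbers below are invented, and the single explanatory variable stands in for whatever trending parameter the analysis selects.

    ```python
    import numpy as np

    # Illustrative GLS "trending": fit computational bias (e.g. C/E - 1 of
    # k_eff) against one explanatory variable x, honouring a full covariance
    # matrix between experiments. Values are made up for demonstration.
    x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])               # e.g. a spectral index
    bias = np.array([0.002, 0.003, 0.005, 0.006, 0.008])  # bias per experiment

    # Covariance: 0.001^2 variance with 50% correlation between experiments
    sig = 1e-3
    V = sig**2 * (0.5 * np.ones((5, 5)) + 0.5 * np.eye(5))

    X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
    Vinv = np.linalg.inv(V)
    beta_cov = np.linalg.inv(X.T @ Vinv @ X)    # covariance of fitted coefficients
    beta = beta_cov @ X.T @ Vinv @ bias         # GLS estimate of the trend

    x_app = 0.6                                 # application case
    g = np.array([1.0, x_app])
    pred = g @ beta                             # trended bias prediction
    pred_var = g @ beta_cov @ g                 # its variance
    print(beta, pred, pred_var**0.5)
    ```

    With an uncorrelated (diagonal) covariance this reduces to ordinary weighted least squares; the off-diagonal terms are what "taking the covariance matrix fully into account" adds.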

  18. Noise-bias compensation in physical-parameter system identification under microtremor input

    OpenAIRE

    Yoshitomi, S.; Takewaki, Izuru

    2009-01-01

    A direct method of physical-parameter system identification (SI) is developed in the case of containing noises at both floors above and below a specified story. To investigate the effect of the level of noise on the accuracy of identification, numerical simulations are performed in the frequency domain by generating two stationary random processes with the specified levels of power spectra. When the previous method of physical-parameter SI is applied to the case contaminated by noise at both ...

  19. Propagation of biases in humidity in the estimation of global irrigation water

    Directory of Open Access Journals (Sweden)

    Y. Masaki

    2015-07-01

    Although different GHMs have different sensitivities to atmospheric humidity because different types of potential evapotranspiration formulae are implemented in them, bias correction of the humidity should be applied to forcing data, particularly for the evaluation of evapotranspiration and irrigation water.

  20. Analysis of burnup credit on spent fuel transport / storage casks - estimation of reactivity bias

    International Nuclear Information System (INIS)

    Chemical analyses of high-burnup UO2 (65 GWd/t) and MOX (45 GWd/t) spent fuel pins were carried out. Measured data on nuclide compositions from U-234 to Pu-242 were used for evaluation of the ORIGEN-2/82 code and a nuclear fuel design code (NULIF). Criticality calculations were executed for transport and storage casks holding 52 BWR or 21 PWR spent fuel assemblies. The reactivity biases were evaluated for axial and horizontal burnup profiles, historical void fraction (BWR), operational histories such as control rod insertion history and BPR insertion history, and the calculational accuracy of ORIGEN-2/82 on nuclide compositions. This study shows that the introduction of burnup credit has a large merit in the criticality safety analysis of casks, even if these reactivity biases are considered. The concept of equivalent uniform burnup was adopted for the present reactivity bias evaluation and showed the possibility of simplifying the reactivity bias evaluation in burnup credit. (authors)

  1. Parameter estimation of the copernicus decompression model with venous gas emboli in human divers.

    Science.gov (United States)

    Gutvik, Christian R; Dunford, Richard G; Dujic, Zeljko; Brubakk, Alf O

    2010-07-01

    Decompression Sickness (DCS) may occur when divers decompress from a hyperbaric environment. To prevent this, decompression procedures are used to return safely to the surface. The models from which these procedures are calculated are traditionally validated using clinical symptoms as an endpoint. However, DCS is an uncommon phenomenon, the wide variation in individual response to decompression stress is poorly understood, and using clinical examination alone for validation is disadvantageous from a modeling perspective. Currently, the only objective and quantitative measure of decompression stress is Venous Gas Emboli (VGE), measured by either ultrasonic imaging or Doppler. VGE has been shown to be statistically correlated with DCS, and is now widely used in science to evaluate decompression stress from a dive. Until recently no mathematical model existed to predict VGE from a dive, which motivated the development of the Copernicus model. The present article compiles a selection of experimental dives and field data containing computer-recorded depth profiles associated with ultrasound measurements of VGE, and describes a parameter estimation problem to fit the model to these data. A total of 185 square bounce dives from DCIEM, Canada, 188 recreational dives with a mix of single, repetitive and multi-day exposures from DAN USA, and 84 experimentally designed decompression dives from Split, Croatia were used, giving a total of 457 dives. Five selected parameters in the Copernicus bubble model were assigned for estimation and a non-linear optimization problem was formalized with a weighted least-squares cost function. A bias factor for the DCIEM chamber dives was also included. A quasi-Newton algorithm (BFGS) from the TOMLAB numerical package solved the problem, which was proved to be convex. With the parameter set presented in this article, Copernicus can be implemented in any programming language to estimate VGE from an air dive. PMID:20414813
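    A minimal sketch of this estimation setup, assuming a stand-in response model rather than the actual Copernicus bubble model: a weighted least-squares cost minimised with SciPy's BFGS implementation (the TOMLAB package used in the paper is commercial). The model form, parameters and data below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.linspace(0.5, 6.0, 40)          # hours after surfacing (synthetic)

    def vge_model(theta, t):
        a, k = theta                       # hypothetical model parameters
        return a * t * np.exp(-k * t)      # response rises, then decays

    theta_true = np.array([2.0, 0.8])
    w = np.ones_like(t)                    # uniform weights for simplicity;
                                           # per-dive weights in practice
    y = vge_model(theta_true, t) + rng.normal(0, 0.2, t.size)

    def cost(theta):                       # weighted least-squares cost
        r = y - vge_model(theta, t)
        return 0.5 * np.sum(w * r * r)

    fit = minimize(cost, x0=np.array([1.0, 0.5]), method="BFGS")
    print(fit.x)
    ```

    BFGS only needs cost (and optionally gradient) evaluations, which is convenient when the model is a black-box simulation of each recorded dive profile.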

  2. ESTIMATION OF PARAMETERS IN STEP-STRESS ACCELERATED LIFE TESTS FOR THE RAYLEIGH DISTRIBUTION UNDER CENSORING SETUP

    Directory of Open Access Journals (Sweden)

    N. Chandra

    2014-12-01

    In this paper, a step-stress accelerated life test strategy is considered for obtaining the failure time data of highly reliable items or units or equipment in a specified period of time. It is assumed that the lifetime data of such items follow a Rayleigh distribution with a scale parameter (θ) that is a log-linear function of the stress levels. The maximum likelihood estimates (MLEs) of the scale parameters (θ_i) at both stress levels (s_i, i = 1, 2) are obtained under a cumulative exposure model. A simulation study is performed to assess the precision of the MLEs on the basis of mean square error (MSE) and relative absolute bias (RABias). The coverage probabilities of approximate and bootstrap confidence intervals for the parameters under both censoring setups are numerically examined. In addition, the asymptotic variance and covariance matrix of the estimators are also presented.
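    The simulation study described above can be sketched in miniature for the uncensored, single-stress case: draw repeated Rayleigh samples, apply the closed-form MLE for the scale parameter, and summarise bias, MSE and relative absolute bias. Sample size and parameter values are invented.

    ```python
    import numpy as np

    # Monte-Carlo check of the Rayleigh scale MLE. For a Rayleigh
    # distribution with scale theta, the MLE is sqrt(sum(x^2) / (2n)).
    rng = np.random.default_rng(1)
    theta, n, reps = 2.0, 50, 2000

    est = np.empty(reps)
    for r in range(reps):
        x = rng.rayleigh(scale=theta, size=n)
        est[r] = np.sqrt(np.sum(x**2) / (2 * n))

    bias = est.mean() - theta                # Monte-Carlo bias
    mse = np.mean((est - theta) ** 2)        # mean square error
    rabias = abs(bias) / theta               # relative absolute bias
    print(bias, mse, rabias)
    ```

    The step-stress setting in the paper replaces the single sample by data from two stress levels linked through the cumulative exposure model, but the assessment loop is the same.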

  3. Estimation of the refractive index structure parameter from single-level daytime routine weather data.

    Science.gov (United States)

    van de Boer, A; Moene, A F; Graf, A; Simmer, C; Holtslag, A A M

    2014-09-10

    Atmospheric scintillations cause difficulties for applications where an undistorted propagation of electromagnetic radiation is essential. These scintillations are related to turbulent fluctuations of temperature and humidity that are in turn related to surface heat fluxes. We developed an approach that quantifies these scintillations by estimating the refractive index structure parameter Cn² from surface fluxes that are derived from single-level routine weather data. In contrast to previous methods that are biased to dry and warm air, our method is directly applicable to several land surface types, environmental conditions, wavelengths, and measurement heights (lookup tables for a limited number of site-specific parameters are provided). The approach allows for an efficient evaluation of the performance of, e.g., infrared imaging systems, laser geodetic systems, and ground-to-satellite optical communication systems. We tested our approach for two grass fields in central and southern Europe, and for a wheat field in central Europe. Although there are uncertainties in the flux estimates, the impact on Cn² is shown to be rather small. The Cn² daytime estimates agree well with values determined from eddy covariance measurements for all three fields. However, some adjustments were needed for the grass site in southern Europe because of non-negligible boundary-layer processes that occur in addition to surface-layer processes. PMID:25321675

  4. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study...... improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it...
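    In the simplest (univariate AR(1)) case, the kind of analytical bias correction analysed above reduces to Kendall's approximation E[ρ̂] − ρ ≈ −(1 + 3ρ)/T, which the sketch below applies as a plug-in correction; the VAR bias formulas in the paper generalise this idea to systems of equations.

    ```python
    import numpy as np

    # OLS estimation of an AR(1) coefficient is biased downward in small
    # samples by roughly -(1 + 3*rho)/T; adding that term back (evaluated
    # at the estimate) removes most of the bias.
    rng = np.random.default_rng(2)
    rho, T, reps = 0.9, 50, 5000

    ols = np.empty(reps)
    for r in range(reps):
        y = np.empty(T)
        y[0] = rng.normal(0, 1 / np.sqrt(1 - rho**2))  # stationary start
        for t in range(1, T):
            y[t] = rho * y[t - 1] + rng.normal()
        ols[r] = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

    corrected = ols + (1 + 3 * ols) / T               # plug-in bias correction
    print(ols.mean() - rho, corrected.mean() - rho)
    ```

    Note that the correction can push an estimate above 1, i.e. into the non-stationary region, which is exactly the risk the authors pay special attention to.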

  5. Modeling and parameter estimation for hydraulic system of excavator's arm

    Institute of Scientific and Technical Information of China (English)

    HE Qing-hua; HAO Peng; ZHANG Da-qing

    2008-01-01

    A retrofitted electro-hydraulic proportional system for a hydraulic excavator was introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure passing through the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load and approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom instrument, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for such parameters as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve is identified as 2.825×10⁻⁴ m³/(s·A) and the model is verified.

  6. Estimation of parameters of K-meson structure functions

    International Nuclear Information System (INIS)

    On the basis of a multiparton recombination model, using the Kuti-Weisskopf parametrization, the available experimental data on inclusive spectra of vector and tensor mesons in the reactions K±p → MX (M = ρ, φ, K*(890), K*(1430)) in the kaon fragmentation region at high energies (32-110 GeV/c) have been analyzed with the aim of extracting the parameters of the K-meson structure functions. For the suppression factor of the kaon strange sea the value λs = 0.18±0.01 is obtained. The kaon longitudinal momentum fractions carried by nonstrange valence quarks, strange valence quarks and sea partons are ⟨x_NV⟩ = 0.17, ⟨x_SV⟩ = 0.30 and ⟨x_S⟩ = 0.53, respectively. Estimates are obtained for the total longitudinal momentum fractions carried by nonstrange sea quark-antiquark pairs, ⟨x_NS⟩ = 0.23±0.06, strange sea quark-antiquark pairs, ⟨x_SS⟩ = 0.02±0.01, and gluons, ⟨x_G⟩ = 0.28±0.09. 26 refs.; 4 figs.; 1 tab

  7. Anaerobic biodegradability of fish remains: experimental investigation and parameter estimation.

    Science.gov (United States)

    Donoso-Bravo, Andres; Bindels, Francoise; Gerin, Patrick A; Vande Wouwer, Alain

    2015-01-01

    The generation of organic waste associated with aquaculture fish processing has increased significantly in recent decades. The objective of this study is to evaluate the anaerobic biodegradability of several fish processing fractions, as well as water treatment sludge, for tilapia and sturgeon species cultured in recirculated aquaculture systems. After substrate characterization, the ultimate biodegradability and the hydrolytic rate were estimated by fitting a first-order kinetic model to the biogas production profiles. In general, the first-order model was able to reproduce the biogas profiles properly with a high correlation coefficient. In the case of tilapia, the skin/fin, viscera, head and flesh presented a high level of biodegradability, above 310 mL CH₄ (g COD)⁻¹, whereas the head and bones showed a low hydrolytic rate. For sturgeon, the results for all fractions were quite similar in terms of both parameters, although viscera presented the lowest values. Both the substrate characterization and the kinetic analysis of the anaerobic degradation may be used as design criteria for implementing anaerobic digestion in a recirculating aquaculture system. PMID:25812103
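    The first-order kinetic fit described above amounts to estimating the ultimate biodegradability B0 and the hydrolytic rate k in B(t) = B0·(1 − e^(−kt)) from a cumulative biogas curve; a sketch with synthetic data (units and values invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # First-order kinetic model for cumulative methane production:
    # B(t) = B0 * (1 - exp(-k*t)), with B0 the ultimate biodegradability
    # (mL CH4 per g COD) and k the hydrolytic rate (1/day).
    def first_order(t, B0, k):
        return B0 * (1.0 - np.exp(-k * t))

    rng = np.random.default_rng(3)
    t = np.linspace(0, 30, 31)                        # days
    y = first_order(t, 320.0, 0.15) + rng.normal(0, 5.0, t.size)

    (B0_hat, k_hat), pcov = curve_fit(first_order, t, y, p0=(200.0, 0.1))
    print(B0_hat, k_hat)
    ```

    The diagonal of `pcov` gives approximate variances for the two estimates, which is how the precision of B0 and k can be reported alongside the fit.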

  8. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer

    Science.gov (United States)

    Fetterly, Kenneth A.; Favazza, Christopher P.

    2016-08-01

    Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was attributed to temporally variable non-stationary noise; methods were developed to identify this bias in Hotelling model observers and to correct it when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.

  9. Improving documentation and coding for acute organ dysfunction biases estimates of changing sepsis severity and burden: a retrospective study

    OpenAIRE

    Rhee, Chanu; Murphy, Michael V.; Li, Lingling; Platt, Richard; Klompas, Michael; ,

    2015-01-01

    Introduction Claims-based analyses report that the incidence of sepsis-associated organ dysfunction is increasing. We examined whether coding practices for acute organ dysfunction are changing over time and if so, whether this is biasing estimates of rising severe sepsis incidence and severity. Methods We assessed trends from 2005 to 2013 in the annual sensitivity and incidence of discharge ICD-9-CM codes for organ dysfunction (shock, respiratory failure, acute kidney failure, acidosis, hepat...

  10. Improving documentation and coding for acute organ dysfunction biases estimates of changing sepsis severity and burden: a retrospective study

    OpenAIRE

    Rhee, Chanu; Murphy, Michael V.; Li, Lingling; Platt, Richard; Klompas, Michael

    2015-01-01

    Introduction: Claims-based analyses report that the incidence of sepsis-associated organ dysfunction is increasing. We examined whether coding practices for acute organ dysfunction are changing over time and if so, whether this is biasing estimates of rising severe sepsis incidence and severity. Methods: We assessed trends from 2005 to 2013 in the annual sensitivity and incidence of discharge ICD-9-CM codes for organ dysfunction (shock, respiratory failure, acute kidney failure, acidosis, hep...

  11. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool

    OpenAIRE

    Zi, Z.; Klipp, E.

    2006-01-01

    The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of...

  12. Age dependent sampling biases in tsetse flies (Glossina): Problems associated with estimating mortality from sample age distributions

    International Nuclear Information System (INIS)

    For a closed (island) population of G. morsitans morsitans Westwood, the probability per week of capturing females on ox fly rounds was about 0.3 in the first week of life, less than 0.2 for 27 to 35-d-old flies and greater than 0.4 for flies more than 80 d old. For open populations, the relative changes in capture probability were measured from the ovarian age distributions of trap and ox fly round samples. They were used (with island data) to show that the age dependent sampling bias of traps for female G. m. morsitans increased more than sixfold over the first 80 d of life. The age dependent bias for G. pallidipes Austen taken from odour baited traps is probably at least as serious as for G. m. morsitans. Estimates of daily mortality from the mark-recapture studies were always (up to 20 times) higher than estimates from ovarian age samples taken at the same times. The mortalities recalculated from samples adjusted for sampling biases were closer to, but still lower than, the mark-recapture estimates. Odour baited targets are successful in controlling tsetse populations, despite the relatively low probability of treating young females. If sterilants instead of insecticides were used on the targets, young females could be treated indirectly via treated males, which transfer the sterilant to virgin females during copulation. (author). 15 refs, 2 figs
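    The effect of age-dependent catchability on mortality estimates can be sketched numerically: with a deterministic stable age structure, fitting survival to raw sample counts overstates survival (i.e. understates mortality), while dividing the counts by the assumed capture probabilities recovers the true value. All numbers are illustrative, not from the study.

    ```python
    import numpy as np

    # A population with constant per-class survival of 0.5 sampled with a
    # capture probability that rises with age, as reported for female tsetse.
    ages = np.arange(6)                      # age classes
    true_surv = 0.5
    pop = true_surv ** ages                  # stable age structure (relative)

    catchability = np.linspace(0.2, 0.5, 6)  # older flies are easier to catch
    sample = pop * catchability              # expected sample counts

    # Survival from a log-linear fit: raw counts vs. bias-adjusted counts
    naive = np.exp(np.polyfit(ages, np.log(sample), 1)[0])
    adjusted = np.exp(np.polyfit(ages, np.log(sample / catchability), 1)[0])
    print(naive, adjusted)
    ```

    Because older classes are over-represented in the raw sample, the naive fit returns a survival above 0.5; the adjustment recovers 0.5 exactly, mirroring how the recalculated mortalities moved toward the mark-recapture estimates.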

  13. On Parameters Estimation of Lomax Distribution under General Progressive Censoring

    Directory of Open Access Journals (Sweden)

    Bander Al-Zahrani

    2013-01-01

    We consider the estimation problem of the stress-strength probability S = P(Y < X) for the Lomax distribution under general progressive censoring. The maximum likelihood estimator and Bayes estimators are obtained using the symmetric and asymmetric balanced loss functions. Markov chain Monte Carlo (MCMC) methods are used to accomplish some complex calculations. Comparisons are made between the Bayesian and maximum likelihood estimators via a Monte Carlo simulation study.
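    For independent Lomax variables with a common scale λ, the stress-strength quantity has the closed form S = α_Y/(α_X + α_Y) (complete, uncensored samples); the sketch below checks this by Monte Carlo, which is far simpler than the progressively censored setting treated in the paper but shows what is being estimated.

    ```python
    import numpy as np

    # Monte-Carlo check of S = P(Y < X) for Lomax(a, lam) variables with a
    # common scale, where the closed form is S = a_y / (a_x + a_y).
    rng = np.random.default_rng(4)
    a_x, a_y, lam, n = 2.0, 3.0, 1.5, 200_000

    # Lomax sampling by inverse CDF: X = lam * (U**(-1/a) - 1)
    x = lam * (rng.random(n) ** (-1.0 / a_x) - 1.0)
    y = lam * (rng.random(n) ** (-1.0 / a_y) - 1.0)

    s_mc = np.mean(y < x)
    s_closed = a_y / (a_x + a_y)          # shape parameters only; lam cancels
    print(s_mc, s_closed)
    ```

    Plugging MLEs of the shape parameters into the closed form gives the maximum likelihood estimator of S by invariance; the Bayes estimators in the paper instead average this quantity over the posterior.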

  14. "Sample-Independent" Item Parameters? An Investigation of the Stability of IRT Item Parameters Estimated from Small Data Sets.

    Science.gov (United States)

    Sireci, Stephen G.

    Whether item response theory (IRT) is useful to the small-scale testing practitioner is examined. The stability of IRT item parameters is evaluated with respect to the classical item parameters (i.e., p-values, biserials) obtained from the same data set. Previous research investigating the effect of sample size on IRT parameter estimation has…

  15. Improving the global SST record: estimates of biases from engine room intake SST using high quality satellite data

    Science.gov (United States)

    Carella, Giulia; Kent, Elizabeth C.; Berry, David I.; Morak-Bozzo, Simone; Merchant, Christopher J.

    2016-04-01

    Sea Surface Temperature (SST) is the marine component of the global surface temperature record, a primary metric of climate change. SST observations from ships form one of the longest instrumental records of surface marine climate. However, over the years several different methods of measuring SST have been used, each with different bias characteristics. The estimation of systematic biases in the SST record is critical for climatic decadal predictions, and uncertainties in long-term trends are expected to be dominated by uncertainties in biases introduced by changes of instrumentation and measurement practices. Although the largest systematic errors in SST observations relate to the period before about 1940, when SST measurements were mostly made using buckets, there are also issues with modern data, in particular when the SST reported is the temperature of the engine-room cooling water intake (ERI). Physical models for biases in ERI SSTs have not been developed as the details of the individual setup on each ship are extremely important, and almost always unknown. Existing studies estimate that typical ERI biases are around 0.2°C and most estimates of the mean bias fall between 0.1°C and 0.3°C, but there is some evidence of much larger differences. However, these analyses provide only broad estimates, being based only on subsamples of the data and ignoring ship-by-ship differences. Here we take advantage of a new, high-spatial-resolution, gap-filled, daily SST product for the period 1992-2010 from the European Space Agency Climate Change Initiative (ESA CCI) SST dataset version 1.1. In this study, we use a Bayesian statistical model to characterise the uncertainty in reports of ERI SST for individual ships using the ESA CCI SST as a reference. A Bayesian spatial analysis is used to model the differences of the observed SST from the ESA CCI SST for each ship as a constant offset plus a function of the climatological SST. This was found to be an important term
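    A minimal, hypothetical version of the per-ship offset idea: treat each ship's ERI-minus-reference differences as noisy draws around a constant offset and shrink the per-ship mean toward a fleet-level prior via the conjugate normal-normal update. The paper's model additionally includes a climatological-SST term and spatial structure; all numbers below are invented.

    ```python
    import numpy as np

    # Conjugate normal-normal posterior for one ship's constant ERI offset.
    rng = np.random.default_rng(6)
    prior_mu, prior_sd = 0.2, 0.1     # fleet-level prior on ERI bias (K)
    obs_sd = 0.5                      # per-observation noise (K)

    ship_true = 0.35                  # this ship's (unknown) offset
    diffs = rng.normal(ship_true, obs_sd, size=40)  # SST_ship - SST_reference

    n = diffs.size
    post_var = 1.0 / (1.0 / prior_sd**2 + n / obs_sd**2)
    post_mean = post_var * (prior_mu / prior_sd**2 + diffs.sum() / obs_sd**2)
    print(post_mean, post_var**0.5)
    ```

    Ships with few reports stay close to the fleet prior while well-observed ships are dominated by their own data, which is the point of estimating the offsets ship by ship within a Bayesian model.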

  16. Variational methods to estimate terrestrial ecosystem model parameters

    Science.gov (United States)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
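    A toy version of such an inverse problem, far simpler than DALEC: a single carbon pool with a turnover rate k recovered from noisy observations by minimising a squared-error cost. Pool structure, inputs and noise are all invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # One-box carbon model: C[t+1] = C[t] + input[t] - k*C[t]. The turnover
    # rate k is estimated by fitting the simulated pool to noisy observations.
    rng = np.random.default_rng(5)
    T, k_true, C0 = 120, 0.02, 100.0
    inputs = 2.0 + np.sin(2 * np.pi * np.arange(T) / 12.0)  # seasonal input

    def simulate(k):
        C = np.empty(T)
        C[0] = C0
        for t in range(T - 1):
            C[t + 1] = C[t] + inputs[t] - k * C[t]
        return C

    obs = simulate(k_true) + rng.normal(0, 1.0, T)          # synthetic data

    res = minimize_scalar(lambda k: np.sum((simulate(k) - obs) ** 2),
                          bounds=(1e-4, 0.2), method="bounded")
    print(res.x)
    ```

    The bounded search plays the role of the "ecological common sense" constraints mentioned above: turnover rates outside a plausible range are simply excluded from the optimisation.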

  17. Effect of indium low doping in ZnO based TFTs on electrical parameters and bias stress stability

    Energy Technology Data Exchange (ETDEWEB)

    Cheremisin, Alexander B., E-mail: acher612@gmail.com; Kuznetsov, Sergey N.; Stefanovich, Genrikh B. [Physico-Technical Department, Petrozavodsk State University, Petrozavodsk 185910 (Russian Federation)

    2015-11-15

    Some applications of thin film transistors (TFTs) need the bottom-gate architecture and an unpassivated channel backside. We propose a simple routine to fabricate indium-doped ZnO-based TFTs with satisfactory characteristics and acceptable stability against bias stress in ambient room air. To this end, a channel layer 15 nm in thickness was deposited on a cold substrate by DC reactive magnetron co-sputtering of a metal Zn-In target. It is demonstrated that increasing the In concentration in the ZnO matrix up to 5% leads to a negative threshold voltage (VT) shift, an increase of field-effect mobility (μ) and a decrease of subthreshold swing (SS). When the dopant concentration reaches the upper level of 5%, the best TFT parameters are achieved: VT = 3.6 V, μ = 15.2 cm²/(V·s), SS = 0.5 V/dec. The TFTs operate in enhancement mode, exhibiting a high on/off current ratio of more than 10⁶. It is shown that oxidative post-fabrication annealing at 250 °C in pure oxygen and subsequent ageing in dry air for several hours provide highly stable operational characteristics under negative and positive bias stresses despite the open channel backside. A possible cause of this effect is discussed.

  18. Effect of indium low doping in ZnO based TFTs on electrical parameters and bias stress stability

    International Nuclear Information System (INIS)

    Some applications of thin film transistors (TFTs) need the bottom-gate architecture and an unpassivated channel backside. We propose a simple routine to fabricate indium-doped ZnO-based TFTs with satisfactory characteristics and acceptable stability against bias stress in ambient room air. To this end, a channel layer 15 nm in thickness was deposited on a cold substrate by DC reactive magnetron co-sputtering of a metal Zn-In target. It is demonstrated that increasing the In concentration in the ZnO matrix up to 5% leads to a negative threshold voltage (VT) shift, an increase of field-effect mobility (μ) and a decrease of subthreshold swing (SS). When the dopant concentration reaches the upper level of 5%, the best TFT parameters are achieved: VT = 3.6 V, μ = 15.2 cm²/(V·s), SS = 0.5 V/dec. The TFTs operate in enhancement mode, exhibiting a high on/off current ratio of more than 10⁶. It is shown that oxidative post-fabrication annealing at 250 °C in pure oxygen and subsequent ageing in dry air for several hours provide highly stable operational characteristics under negative and positive bias stresses despite the open channel backside. A possible cause of this effect is discussed.

  19. Effect of indium low doping in ZnO based TFTs on electrical parameters and bias stress stability

    Science.gov (United States)

    Cheremisin, Alexander B.; Kuznetsov, Sergey N.; Stefanovich, Genrikh B.

    2015-11-01

    Some applications of thin film transistors (TFTs) need the bottom-gate architecture and an unpassivated channel backside. We propose a simple routine to fabricate indium-doped ZnO-based TFTs with satisfactory characteristics and acceptable stability against bias stress in ambient room air. To this end, a channel layer 15 nm in thickness was deposited on a cold substrate by DC reactive magnetron co-sputtering of a metal Zn-In target. It is demonstrated that increasing the In concentration in the ZnO matrix up to 5% leads to a negative threshold voltage (VT) shift, an increase of field-effect mobility (μ) and a decrease of subthreshold swing (SS). When the dopant concentration reaches the upper level of 5%, the best TFT parameters are achieved: VT = 3.6 V, μ = 15.2 cm²/(V·s), SS = 0.5 V/dec. The TFTs operate in enhancement mode, exhibiting a high on/off current ratio of more than 10⁶. It is shown that oxidative post-fabrication annealing at 250 °C in pure oxygen and subsequent ageing in dry air for several hours provide highly stable operational characteristics under negative and positive bias stresses despite the open channel backside. A possible cause of this effect is discussed.

  20. A bootstrap method for estimating bias and variance in statistical fisheries modelling frameworks using highly disparate datasets

    OpenAIRE

    Elvarsson, B. P.; Taylor, L.; Trenkel, Verena; Kupca, V.; Stefansson, G.

    2014-01-01

    Statistical models of marine ecosystems use a variety of data sources to estimate parameters using composite or weighted likelihood functions with associated weighting issues and questions on how to obtain variance estimates. Regardless of the method used to obtain point estimates, a method is required for variance estimation. A bootstrap technique is introduced for the evaluation of uncertainty in such models, taking into account inherent spatial and temporal correlations in the datasets, wh...
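
A minimal sketch of the block-bootstrap idea: resampling contiguous blocks, rather than individual observations, lets short-range temporal correlation survive into each replicate, which is what such models need for honest variance estimates. The AR(1)-style series, block length, and statistic below are illustrative assumptions, not the paper's fisheries datasets.

```python
import random
import statistics

def block_bootstrap(series, block_len, n_boot, stat, rng):
    """Moving-block bootstrap: resample contiguous blocks so that
    within-block temporal correlation is preserved in each replicate."""
    n = len(series)
    starts = range(n - block_len + 1)
    reps = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:
            s = rng.choice(starts)
            sample.extend(series[s:s + block_len])
        reps.append(stat(sample[:n]))
    return reps

# AR(1)-style series around 5.0, mimicking an autocorrelated survey index
rng = random.Random(42)
x, series = 0.0, []
for _ in range(300):
    x = 0.6 * x + rng.gauss(0.0, 1.0)
    series.append(5.0 + x)

reps = block_bootstrap(series, block_len=20, n_boot=500,
                       stat=statistics.fmean, rng=rng)
bias_est = statistics.fmean(reps) - statistics.fmean(series)
se_est = statistics.stdev(reps)
```

An ordinary (observation-level) bootstrap on the same series would understate the standard error, because it treats the autocorrelated index values as independent.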

  1. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    Science.gov (United States)

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  2. Uncertainty of Modal Parameters Estimated by ARMA Models

    DEFF Research Database (Denmark)

    Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders

    1990-01-01

    In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However, the uncertainty of the paramete...

  3. The influence of contrasting suspended particulate matter transport regimes on the bias and precision of flux estimates.

    Science.gov (United States)

    Moatar, Florentina; Person, Gwenaelle; Meybeck, Michel; Coynel, Alexandra; Etcheber, Henri; Crouzet, Philippe

    2006-11-01

    A large database (507 station-years) of daily suspended particulate matter (SPM) concentration and discharge data from 36 stations on river basins ranging from 600 km² to 600,000 km² in size (USA and Europe) was collected to assess the effects of SPM transport regime on bias and imprecision of flux estimates when using infrequent surveys and the discharge-weighted mean concentration method. By extracting individual SPM concentrations and corresponding discharge values from the database, sampling frequencies from 12 to 200 per year were simulated using Monte Carlo techniques. The resulting estimates of yearly SPM fluxes were compared to reference fluxes derived from the complete database. For each station and given frequency, bias was measured by the median of relative errors between estimated and reference fluxes, and imprecision by the difference between the upper and lower deciles of relative errors. Results show that the SPM transport regime of rivers affects the bias and imprecision of fluxes estimated by the discharge-weighted mean concentration method for given sampling frequencies (e.g. weekly, bimonthly, monthly). The percentage of annual SPM flux discharged in 2% of time (Ms2) is a robust indicator of SPM transport regime directly related to bias and imprecision. These errors are linked to the Ms2 indicator for various sampling frequencies within a specific nomograph. For instance, based on a deviation of simulated flux estimates from reference fluxes lower than ±20% and a bias lower than 1% or 2%, the required sampling intervals are less than 3 days for rivers with Ms2 greater than 40% (basin size < 10,000 km²), between 3 and 5 days for rivers with Ms2 between 30% and 40% (basin size between 10,000 and 50,000 km²), between 5 and 12 days for Ms2 from 20% to 30% (basin size between 50,000 and 200,000 km²), and 12-20 days for Ms2 in the 15-20% range (basin size between 200,000 and 500,000 km²). PMID:16949650
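
The simulation design above can be sketched as follows: compute a reference flux from complete daily data, repeatedly subsample at a fixed interval, estimate the flux with the discharge-weighted mean concentration method, and summarise the relative errors by their median (bias) and interdecile range (imprecision). The synthetic discharge-concentration series below is an illustrative assumption, not the 36-station database.

```python
import math
import random
import statistics

def dwc_flux(conc, q, idx, q_annual_mean, n_days):
    """Discharge-weighted mean concentration method:
    C* = sum(Ci*Qi)/sum(Qi); flux = C* * mean annual discharge * n_days."""
    num = sum(conc[i] * q[i] for i in idx)
    den = sum(q[i] for i in idx)
    return (num / den) * q_annual_mean * n_days

# Synthetic flashy river: lognormal discharge, concentration rising with flow
rng = random.Random(7)
n = 365
q = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(n)]
conc = [0.5 * qi ** 0.8 * math.exp(rng.gauss(0.0, 0.3)) for qi in q]
ref_flux = sum(c * qi for c, qi in zip(conc, q))   # reference: complete daily data
qbar = statistics.fmean(q)

interval = 30   # roughly monthly sampling
errors = []
for _ in range(400):
    start = rng.randrange(interval)
    idx = range(start, n, interval)
    est = dwc_flux(conc, q, idx, qbar, n)
    errors.append(100.0 * (est - ref_flux) / ref_flux)
errors.sort()
bias = statistics.median(errors)                   # median relative error (%)
e10, e90 = errors[len(errors) // 10], errors[-len(errors) // 10]
imprecision = e90 - e10                            # interdecile range (%)
```

Note that when the full daily record is fed in, the estimator reproduces the reference flux exactly, so all error comes from the sparse sampling.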

  4. Parameter estimation and determinability analysis applied to Drosophila gap gene circuits

    NARCIS (Netherlands)

    Ashyraliyev, M.; Jaeger, J.; Blom, J.G.

    2008-01-01

    Background

    Mathematical modeling of real-life processes often requires the estimation of unknown parameters. Once the parameters are found by means of optimization, it is important to assess the quality of the parameter estimates, especially if parameter values are used to draw biological c

  5. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    Directory of Open Access Journals (Sweden)

    Shuguo Pan

    2015-07-01

    Full Text Available Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS), and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed that applies the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effects of the residual orbit error on positioning accuracy can be weakened by an evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are no longer necessary. On the basis of the preceding theoretical analysis, a real-time PPP system was developed, and experiments were designed to verify the algorithm. Experimental results show that the newly proposed approach performs better than traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can

  6. Re-constructing historical Adélie penguin abundance estimates by retrospectively accounting for detection bias.

    Science.gov (United States)

    Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul

    2015-01-01

    Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals. PMID:25909636
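
The re-construction step described above amounts to dividing a raw count by an attendance fraction while propagating the uncertainty of that fraction. A minimal Monte Carlo sketch follows; all numbers (the count, the attendance mean and spread, the truncation bounds) are illustrative assumptions, not the paper's camera-derived adjustment factors.

```python
import random
import statistics

def adjusted_abundance(raw_count, att_mean, att_sd, n_draws, rng):
    """Divide a raw count by a sampled attendance fraction (the camera-derived
    adjustment factor), propagating its uncertainty by Monte Carlo."""
    draws = []
    while len(draws) < n_draws:
        a = rng.gauss(att_mean, att_sd)
        if 0.05 < a <= 1.0:          # keep the fraction physically plausible
            draws.append(raw_count / a)
    return draws

# Hypothetical site: 12,000 birds counted, ~80% of breeders present (+/- 5%)
rng = random.Random(3)
draws = sorted(adjusted_abundance(12000, 0.80, 0.05, 5000, rng))
point = statistics.median(draws)
lo95 = draws[int(0.025 * len(draws))]
hi95 = draws[int(0.975 * len(draws))]
```

Because the attendance fraction is below one, the adjusted estimate is necessarily larger than the raw count, and the spread of the interval reflects how uncertain the adjustment factor is, mirroring the paper's finding that re-constructed estimates are higher and more uncertain than published ones.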

  7. Examination of the Parameter Estimate Bias When Violating the Orthogonality Assumption of the Bifactor Model

    Science.gov (United States)

    Zheng, Chunmei

    2013-01-01

    Educational and psychological constructs are normally measured by multifaceted dimensions. The measured construct is defined and measured by a set of related subdomains. A bifactor model can accurately describe such data with both the measured construct and the related subdomains. However, a limitation of the bifactor model is the orthogonality…

  8. Biased Parameter Estimation for LDA

    Institute of Scientific and Technical Information of China (English)

    袁伯秋; 周一民; 李林

    2010-01-01

    Latent-topic models such as LDA (Latent Dirichlet Allocation) are increasingly applied to discrete data processing. However, LDA uses the Dirichlet distribution as the distribution over latent topics, which cannot adequately represent the relationships among topics. Common improvements express these relationships through a DAG (Directed Acyclic Graph) or through alternative distribution functions such as the log-normal distribution. In this paper, a biased parameter estimation method is used instead: by accounting for the overlap of terms when topics are mixed, the within-topic term distributions are altered, ultimately improving the performance of the LDA model. After reviewing some background material, we focus on the biased parameter estimation and a simplified computation method. Finally, experiments applying the LDA model to information retrieval verify the effectiveness of this improvement, and a preliminary analysis of rules for choosing the model parameters is given.

  9. New ranked set sampling for estimating the population parameters

    OpenAIRE

    Zamanzade, Ehsan; Al-Omari, Amer Ibrahim

    2014-01-01

    In this paper, a new modification of ranked set sampling (RSS) is suggested, namely unified ranked set sampling (URSS), for estimating the population mean and variance. The performance of the empirical mean and variance estimators based on URSS is compared with their counterparts in ranked set sampling and simple random sampling (SRS) via Monte Carlo simulation. Simulation results indicate that the URSS estimators perform better than their counterparts using RSS and SRS designs when the rank...
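
The baseline comparison in such studies, standard ranked set sampling versus simple random sampling under perfect ranking, can be sketched with a small Monte Carlo (this illustrates classical RSS, not the URSS modification; the normal population and set size are assumptions).

```python
import random
import statistics

def rss_sample(m, rng, draw):
    """One ranked-set-sampling cycle of size m under perfect ranking:
    from the i-th set of m draws, keep the i-th order statistic."""
    out = []
    for i in range(m):
        ranked = sorted(draw(rng) for _ in range(m))
        out.append(ranked[i])
    return out

def mc_variance(estimator, n_rep, rng):
    """Monte Carlo variance of an estimator across repeated samples."""
    return statistics.variance([estimator(rng) for _ in range(n_rep)])

m = 4
draw = lambda r: r.gauss(0.0, 1.0)   # standard normal population (assumed)
v_rss = mc_variance(lambda r: statistics.fmean(rss_sample(m, r, draw)),
                    4000, random.Random(1))
v_srs = mc_variance(lambda r: statistics.fmean([draw(r) for _ in range(m)]),
                    4000, random.Random(2))
```

For a normal population with set size 4, the RSS mean is roughly twice as precise as the SRS mean with the same number of measured units, which is the kind of gain the RSS family of designs trades against the cost of ranking.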

  10. Estimating atmospheric parameters and reducing noise for multispectral imaging

    Science.gov (United States)

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.

  11. In-orbit offline estimation of the residual magnetic dipole biases of the POPSAT-HIP1 nanosatellite

    Science.gov (United States)

    Seriani, S.; Brama, Y. L.; Gallina, P.; Manzoni, G.

    2016-05-01

    The nanosatellite POPSAT-HIP1 is a Cubesat-class spacecraft launched on the 19th of June 2014 to test cold-gas based micro-thrusters; it is, as of April 2015, in a low Earth orbit at around 600 km of altitude and is equipped, notably, with a magnetometer. In order to improve the attitude-control performance of nanosatellites like POPSAT, it is extremely useful to determine the main biases that act on the magnetometer while in orbit, for example those generated by the residual magnetic moment of the satellite itself and those originating from the transmitter. Thus, we present a methodology to perform an in-orbit offline estimation of the magnetometer bias caused by the residual magnetic moment of the satellite (we refer to this as the residual magnetic dipole bias, or RMDB). The method is based on a genetic algorithm coupled with a simplex algorithm, and provides the RMDB vector as output, requiring solely the magnetometer readings. This is exploited to compute the transmitter magnetic dipole bias (TMDB) by comparing the RMDB computed with the transmitter operating and idling. An experimental investigation was carried out by acquiring the magnetometer outputs in different phases of the spacecraft life (stabilized, maneuvering, free tumble). Results show remarkable accuracy, with an RMDB orientation error between 3.6° and 6.2° and a magnitude error of around 7%. TMDB values show similar coherence. Finally, we note some drawbacks of the methodology, as well as some possible improvements, e.g. precise logging of transmitter operations. In general, however, the methodology proves to be quite effective even with sparse and noisy data, and promises to be incisive in the improvement of attitude control systems.

  12. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

    Full Text Available Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
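
A limited-sampling strategy model is, at heart, a linear regression of a reference pharmacokinetic metric on a few timed concentrations, validated leave-one-out ("jack-knife"). The sketch below does this for hypothetical one-compartment oral profiles; all pharmacokinetic parameters and ranges are assumptions, not the MAA data, and for this simple model AUC0-∞ is exactly the dose-scale divided by the elimination rate.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (solves A x = b)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_lss(X, y):
    """Least squares for AUC ~ b0 + b1*C(1.5 h) + b2*C(4 h) + b3*C(24 h)."""
    Z = [[1.0] + row for row in X]
    p = len(Z[0])
    A = [[sum(z[i] * z[j] for z in Z) for j in range(p)] for i in range(p)]
    b = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(p)]
    return solve(A, b)

def predict(coef, row):
    return coef[0] + sum(c * v for c, v in zip(coef[1:], row))

# Hypothetical one-compartment oral profiles standing in for the MAA data
rng = random.Random(9)
X, y = [], []
for _ in range(24):
    ke = rng.uniform(0.08, 0.12)   # elimination rate constant (1/h), assumed
    ka = rng.uniform(0.8, 1.2)     # absorption rate constant (1/h), assumed
    s = rng.uniform(8.0, 12.0)     # dose/volume scale, assumed
    conc = lambda t: s * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))
    X.append([conc(1.5), conc(4.0), conc(24.0)])
    y.append(s / ke)               # analytic AUC0-inf for this model
coef = fit_lss(X, y)

# Leave-one-out ("jack-knife") validation of the three-point model
errs = [predict(fit_lss(X[:i] + X[i + 1:], y[:i] + y[i + 1:]), X[i]) - y[i]
        for i in range(len(y))]
ybar = sum(y) / len(y)
r2 = 1.0 - sum(e * e for e in errs) / sum((yi - ybar) ** 2 for yi in y)
```

With smooth profiles and narrow parameter ranges, three well-placed sampling times predict the full-curve AUC closely, which is the premise that makes LSS designs practical.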

  13. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    Full Text Available The use of counts of unmarked migrating animals to monitor long-term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced the accuracy and precision of population trends by simulating migration count data with a known constant rate of population change, allowing the daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long-term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling) to reduce the probability that individuals will
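
The compounding effect of a systematic increase in stopover duration can be sketched with a toy index: if each individual is counted once per day of stopover, annual counts scale with population size times stopover duration, so a log-linear trend fit recovers the sum of both trends rather than the population trend alone. All rates and noise levels below are illustrative assumptions.

```python
import math
import random
import statistics

def fit_loglinear_trend(counts):
    """OLS slope of log(count) on year: the usual log-linear index trend."""
    years = range(len(counts))
    logs = [math.log(c) for c in counts]
    my, ml = statistics.fmean(years), statistics.fmean(logs)
    num = sum((t - my) * (l - ml) for t, l in zip(years, logs))
    den = sum((t - my) ** 2 for t in years)
    return num / den

rng = random.Random(11)
n_years, n0, true_rate = 20, 10000, -0.02   # 2% annual decline, assumed
mean_stopover = 3.0                          # days each bird is countable, assumed

# Constant stopover duration: counts track the population
c_const = [n0 * math.exp(true_rate * t) * mean_stopover * rng.uniform(0.98, 1.02)
           for t in range(n_years)]
# Stopover duration creeping up ~1.5%/yr: each bird is recounted more often
c_creep = [n0 * math.exp(true_rate * t) * mean_stopover * math.exp(0.015 * t)
           * rng.uniform(0.98, 1.02) for t in range(n_years)]

trend_const = fit_loglinear_trend(c_const)
trend_creep = fit_loglinear_trend(c_creep)
```

In this toy the creeping stopover duration inflates the fitted trend by about its own growth rate, turning a real decline into an apparent near-stability, which mirrors the false trends reported above.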

  14. Improved estimates of the benefits of breastfeeding using sibling comparisons to reduce selection bias

    OpenAIRE

    Reilly, Siobhan; Evenhouse, Eirik

    2005-01-01

    Objective Better measurement of the health and cognitive benefits of breastfeeding by using sibling comparisons to reduce sample selection bias. Data We use data on the breastfeeding history, physical and emotional health, academic performance, cognitive ability, and demographic characteristics of 16,903 adolescents from the first (1994) wave of the National Longitudinal Study of Adolescent Health. The sample includes 2,734 sibling pairs. Study Design We examine the relation...

  15. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    OpenAIRE

    Zhang Zhang; Li Jun; Cui Peng; Ding Feng; Li Ang; Townsend Jeffrey P; Yu Jun

    2012-01-01

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not...

  16. On a Class of Bias-Amplifying Variables that Endanger Effect Estimates

    OpenAIRE

    Pearl, Judea

    2012-01-01

    This note deals with a class of variables that, if conditioned on, tends to amplify confounding bias in the analysis of causal effects. This class, independently discovered by Bhattacharya and Vogt (2007) and Wooldridge (2009), includes instrumental variables and variables that have greater influence on treatment selection than on the outcome. We offer a simple derivation and an intuitive explanation of this phenomenon and then extend the analysis to nonlinear models. We show that: 1. the bi...

  17. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity.

    Directory of Open Access Journals (Sweden)

    Nicolas Heslot

    Full Text Available Genome-wide molecular markers are often being used to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorphisms in the population under study. Ascertainment bias arises when marker data are not obtained from a random sample of the polymorphisms in the population of interest. Genotyping-by-sequencing (GBS) is rapidly emerging as a low-cost genotyping platform, even for the large, complex, and polyploid wheat (Triticum aestivum L.) genome. With GBS, marker discovery and genotyping occur simultaneously, resulting in minimal ascertainment bias. The previous platform of choice for whole-genome genotyping in many species such as wheat was DArT (Diversity Array Technology), which has formed the basis of most of our knowledge about cereal genetic diversity. This study compared the GBS and DArT marker platforms for measuring genetic diversity and genomic selection (GS) accuracy in elite U.S. soft winter wheat. From a set of 365 breeding lines, 38,412 single nucleotide polymorphism (SNP) GBS markers were discovered and genotyped. The GBS SNPs gave a higher GS accuracy than 1,544 DArT markers on the same lines, despite 43.9% missing data. Using a bootstrap approach, we observed significantly more clustering of markers and ascertainment bias with DArT relative to GBS. The minor allele frequency distribution of GBS markers had a deficit of rare variants compared to DArT markers. Despite the ascertainment bias of the DArT markers, GS accuracy for three traits out of four was not significantly different when an equal number of markers was used for each platform. This suggests that the gain in accuracy observed using GBS compared to DArT markers was mainly due to a large increase in the number of markers available for the analysis.

  18. Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

    Directory of Open Access Journals (Sweden)

    Xiu Kan

    2012-01-01

    Full Text Available The asymptotic parameter estimation is investigated for a class of linear stochastic systems with unknown parameter θ: dXt = (θα(t) + β(t)Xt)dt + σ(t)dWt. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter θ based on Bayesian analysis. Then, some sufficient conditions on the coefficients are given to analyze the asymptotic convergence of the estimator. Finally, the strong consistency of the estimator is discussed via a comparison theorem.

  19. Asymptotic Parameter Estimation for a Class of Linear Stochastic Systems Using Kalman-Bucy Filtering

    OpenAIRE

    Xiu Kan; Huisheng Shu; Yan Che

    2012-01-01

    The asymptotic parameter estimation is investigated for a class of linear stochastic systems with unknown parameter θ: dXt = (θα(t) + β(t)Xt)dt + σ(t)dWt. Continuous-time Kalman-Bucy linear filtering theory is first used to estimate the unknown parameter θ based on Bayesian analysis. Then, some sufficient conditions on the coefficients are given to analyze the asymptotic convergence of the estimator. Finally, the strong consistency of the estimator is discussed via a comparison theorem.
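
For this model the discretized maximum-likelihood estimator of θ has a closed form, which makes the estimation target easy to illustrate: simulate a path by Euler-Maruyama and apply the estimator. This sketch shows the target of the estimation, not the paper's Kalman-Bucy/Bayesian construction, and the coefficient functions are illustrative assumptions.

```python
import math
import random

def simulate(theta, alpha, beta, sigma, x0, dt, n, rng):
    """Euler-Maruyama path of dX = (theta*alpha(t) + beta(t)*X) dt + sigma(t) dW."""
    xs = [x0]
    for k in range(n):
        t = k * dt
        x = xs[-1]
        xs.append(x + (theta * alpha(t) + beta(t) * x) * dt
                  + sigma(t) * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return xs

def estimate_theta(xs, alpha, beta, sigma, dt):
    """Discretized maximum-likelihood estimator:
    theta_hat = [sum a_k (dX_k - b_k X_k dt) / s_k^2] / [sum a_k^2 dt / s_k^2]."""
    num = den = 0.0
    for k in range(len(xs) - 1):
        t = k * dt
        a, b, s2 = alpha(t), beta(t), sigma(t) ** 2
        dx = xs[k + 1] - xs[k]
        num += a * (dx - b * xs[k] * dt) / s2
        den += a * a * dt / s2
    return num / den

# Illustrative coefficient functions (assumed, not from the paper)
alpha = lambda t: 1.0 + 0.5 * math.sin(t)
beta = lambda t: -0.5
sigma = lambda t: 0.3

rng = random.Random(5)
xs = simulate(2.0, alpha, beta, sigma, 0.0, 0.001, 200_000, rng)
theta_hat = estimate_theta(xs, alpha, beta, sigma, 0.001)
```

As the observation horizon grows, the accumulated information term in the denominator diverges, which is what drives the strong consistency discussed in the abstract.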

  20. Asymptotically Median Unbiased Estimation of Coefficient Variance in a Time Varying Parameter Model

    OpenAIRE

    Stock, James H.; Mark W. Watson

    1996-01-01

    This paper considers the estimation of the variance of coefficients in time varying parameter models with stationary regressors. The maximum likelihood estimator has large point mass at zero. We therefore develop asymptotically median unbiased estimators and confidence intervals by inverting median functions of regression-based parameter stability test statistics, computed under the constant-parameter null. These estimators have good asymptotic relative efficiencies for small to moderate amou...