Bias in parameter estimation of form errors
Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min
2014-09-01
The surface form qualities of precision components are critical to their functionalities. In precision instruments, algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces are over-weighted, making the fitted results biased and unstable. In this paper, orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and orthogonal assessment is calculated analytically and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
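As a minimal sketch of the orthogonal-distance idea (not the paper's method for general curved surfaces), the following fits a circle by minimising the point-to-curve distances measured along the normals; the geometry, noise level, and use of SciPy's `least_squares` are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic measurements of a circle (centre (1, 2), radius 5) with small noise.
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
r_true, cx, cy = 5.0, 1.0, 2.0
x = cx + r_true * np.cos(theta) + 0.01 * rng.normal(size=theta.size)
y = cy + r_true * np.sin(theta) + 0.01 * rng.normal(size=theta.size)

def orthogonal_residuals(p):
    """Signed distance from each point to the circle, measured along the normal.
    For a circle this orthogonal distance has the closed form |point - centre| - r."""
    a, b, r = p
    return np.hypot(x - a, y - b) - r

fit = least_squares(orthogonal_residuals, x0=[0.0, 0.0, 1.0])
a_hat, b_hat, r_hat = fit.x
```

Unlike a z-direction (vertical) residual, this residual does not over-weight steep regions of the curve, which is the bias the paper analyses.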
Adaptive Unified Biased Estimators of Parameters in Linear Model
Institute of Scientific and Technical Information of China (English)
Hu Yang; Li-xing Zhu
2004-01-01
To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge estimator and the principal component estimator have been studied intensively. To establish when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions have been proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. By selecting parameters in this condition, we can obtain all the double-type conditions in the literature.
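A hedged illustration of one member of this estimator class, the classical ridge estimator named in the abstract. The near-collinear design and the shrinkage parameter value are invented for the demonstration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nearly collinear design: the second column is almost a copy of the first.
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # unstable under collinearity
beta_ridge = ridge(X, y, 1.0)  # biased but far lower variance
```

Under collinearity the least squares coefficients can be wildly unstable along the nearly-null direction; ridge trades a small bias for a large variance reduction, which is exactly the trade-off the sufficient conditions in this literature formalise.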
Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun
2016-09-01
Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.
Observable Priors: Limiting Biases in Estimated Parameters for Incomplete Orbits
Kosmo, Kelly; Martinez, Gregory; Hees, Aurelien; Witzel, Gunther; Ghez, Andrea M.; Do, Tuan; Sitarski, Breann; Chu, Devin; Dehghanfar, Arezu
2017-01-01
Over twenty years of monitoring stellar orbits at the Galactic center has provided an unprecedented opportunity to study the physics and astrophysics of the supermassive black hole (SMBH) at the center of the Milky Way Galaxy. In order to constrain the mass of and distance to the black hole, and to evaluate its gravitational influence on orbiting bodies, we use Bayesian statistics to infer black hole and stellar orbital parameters from astrometric and radial velocity measurements of stars orbiting the central SMBH. Unfortunately, most of the short-period stars in the Galactic center have periods much longer than our twenty-year time baseline of observations, resulting in incomplete orbital phase coverage, potentially biasing fitted parameters. Using the Bayesian statistical framework, we evaluate biases in the black hole and orbital parameters of stars with varying phase coverage, using various prior models to fit the data. We present evidence that incomplete phase coverage of an orbit causes prior assumptions to bias statistical quantities, and propose a solution to reduce these biases for orbits with low phase coverage. The explored solution assumes uniformity in the observables rather than in the inferred model parameters, as is the current standard method of orbit fitting. Of the cases tested, priors that assume uniform astrometric and radial velocity observables reduce the biases in the estimated parameters. The proposed method will not only improve orbital estimates of stars orbiting the central SMBH, but can also be extended to other orbiting bodies with low phase coverage such as visual binaries and exoplanets.
Basic MR sequence parameters systematically bias automated brain volume estimation
Energy Technology Data Exchange (ETDEWEB)
Haller, Sven [University of Geneva, Faculty of Medicine, Geneva (Switzerland); Affidea Centre de Diagnostique Radiologique de Carouge CDRC, Geneva (Switzerland); Falkovskiy, Pavel; Roche, Alexis; Marechal, Benedicte [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Meuli, Reto [University Hospital (CHUV), Department of Radiology, Lausanne (Switzerland); Thiran, Jean-Philippe [LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Krueger, Gunnar [Siemens Medical Solutions USA, Inc., Boston, MA (United States); Lovblad, Karl-Olof [University of Geneva, Faculty of Medicine, Geneva (Switzerland); University Hospitals of Geneva, Geneva (Switzerland); Kober, Tobias [Siemens Healthcare HC CEMEA SUI DI BM PI, Advanced Clinical Imaging Technology, Lausanne (Switzerland); LTS5, Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2016-11-15
Automated brain MRI morphometry, including hippocampal volumetry for Alzheimer disease, is increasingly recognized as a biomarker. Consequently, a rapidly increasing number of software tools have become available. We tested whether modifications of simple MR protocol parameters typically used in clinical routine systematically bias automated brain MRI segmentation results. The study was approved by the local ethical committee and included 20 consecutive patients (13 females, mean age 75.8 ± 13.8 years) undergoing clinical brain MRI at 1.5 T for workup of cognitive decline. We compared three 3D T1 magnetization prepared rapid gradient echo (MPRAGE) sequences with the following parameter settings: ADNI-2 1.2 mm iso-voxel, no image filtering; LOCAL- 1.0 mm iso-voxel, no image filtering; LOCAL+ 1.0 mm iso-voxel with image edge enhancement. Brain segmentation was performed by two different and established analysis tools, FreeSurfer and MorphoBox, using standard parameters. Spatial resolution (1.0 versus 1.2 mm iso-voxel) and modification in contrast resulted in relative estimated volume differences of up to 4.28 % (p < 0.001) in cortical gray matter and 4.16 % (p < 0.01) in hippocampus. Image data filtering resulted in estimated volume differences of up to 5.48 % (p < 0.05) in cortical gray matter. A simple change of MR parameters, notably spatial resolution, contrast, and filtering, may systematically bias results of automated brain MRI morphometry by up to 4-5 %. This is in the same range as early disease-related brain volume alterations, for example, in Alzheimer disease. Automated brain segmentation software packages should therefore require strict MR parameter selection or include compensatory algorithms to avoid MR parameter-related bias of brain morphometry results. (orig.)
BIASED BEARINGS-ONLY PARAMETER ESTIMATION FOR BISTATIC SYSTEM
Institute of Scientific and Technical Information of China (English)
Xu Benlian; Wang Zhiquan
2007-01-01
According to the biased angles provided by the bistatic sensors, the necessary condition for observability and the Cramér-Rao lower bounds for the bistatic system are derived and analyzed, respectively. Additionally, a dual Kalman filter method is presented with the purpose of eliminating the effect of biased angles on the state variable estimation. Finally, Monte-Carlo simulations are conducted in the observable scenario. Simulation results show that the proposed theory holds true, and that the dual Kalman filter method can estimate the state variables and biased angles simultaneously. Furthermore, the estimated results can achieve their Cramér-Rao lower bounds.
Model parameter estimation bias induced by earthquake magnitude cut-off
Harte, D. S.
2016-02-01
We evaluate the bias in parameter estimates of the ETAS model. We show that when a simulated catalogue is magnitude-truncated there is considerable bias, whereas when it is not truncated there is no discernible bias. We also discuss two further implied assumptions in the ETAS and other self-exciting models. First, that the triggering boundary magnitude is equivalent to the catalogue completeness magnitude. Secondly, the assumption in the Gutenberg-Richter relationship that numbers of events increase exponentially as magnitude decreases. These two assumptions are confounded with the magnitude truncation effect. We discuss the effect of these problems on analyses of real earthquake catalogues.
Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight
Busti, V C; Clarkson, C
2013-01-01
Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales - which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have in parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.
Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight
Energy Technology Data Exchange (ETDEWEB)
Busti, V.C.; Clarkson, C. [Astrophysics, Cosmology and Gravity Center (ACGC), and Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701, Cape Town (South Africa); Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za [Departamento de Física, Universidade Estadual da Paraíba, 58429-500, Campina Grande – PB (Brazil)
2013-11-01
Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales — which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have in parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
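The flavour of such a debiasing can be sketched with the standard correction for inverting a sample covariance estimated from Gaussian realisations (the Anderson/Hartlap factor). This is a textbook result in the same spirit as the abstract, not necessarily the paper's exact estimator:

```python
import numpy as np

def debiased_precision(sample_cov, n_sims):
    """Debias the inverse of a sample covariance estimated from n_sims Gaussian
    realisations. Since E[C_sample^{-1}] = C_true^{-1} (n-1)/(n-p-2), multiplying
    by (n - p - 2)/(n - 1) makes the precision estimate unbiased."""
    p = sample_cov.shape[0]
    if n_sims <= p + 2:
        raise ValueError("need n_sims > p + 2 for the mean of the inverse to exist")
    factor = (n_sims - p - 2.0) / (n_sims - 1.0)
    return factor * np.linalg.inv(sample_cov)

# Monte Carlo check: with true covariance = identity, the average debiased
# precision should have diagonal entries near 1 (the naive inverse overshoots).
rng = np.random.default_rng(2)
p, n_sims, repeats = 3, 20, 2000
acc = np.zeros(p)
for _ in range(repeats):
    draws = rng.normal(size=(n_sims, p))
    c_hat = np.cov(draws, rowvar=False)
    acc += np.diag(debiased_precision(c_hat, n_sims))
mean_diag = acc / repeats
```

Without the factor, the naive inverse would average (n-1)/(n-p-2) = 19/15 ≈ 1.27 here, i.e. an overconfident precision matrix and hence too-tight parameter constraints.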
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;
2011-01-01
of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set...
Schäfer, Björn Malte; Kalovidouris, Angelos Fotios; Heisenberg, Lavinia
2011-09-01
The subject of this paper is an investigation of the non-linear contributions to the spectrum of the integrated Sachs-Wolfe (iSW) effect. We derive the corrections to the iSW autospectrum and the iSW-tracer cross-spectrum consistently to third order in perturbation theory and analyse the cumulative signal-to-noise ratio for a cross-correlation between the Planck and Euclid data sets as a function of multipole order. We quantify the parameter sensitivity and the statistical error bounds on the cosmological parameters Ωm, σ8, h, ns and w from the linear iSW effect and the systematic parameter estimation bias due to the non-linear corrections in a Fisher formalism, analysing the error budget in its dependence on multipole order. Our results include the following: (i) the spectrum of the non-linear iSW effect can be measured with 0.8σ statistical significance, (ii) non-linear corrections dominate the spectrum starting from ℓ≃ 102, (iii) an anticorrelation of the CMB temperature with tracer density on high multipoles in the non-linear regime, (iv) a much weaker dependence of the non-linear effect on the dark energy model compared to the linear iSW effect and (v) parameter estimation biases amount to less than 0.1σ and weaker than other systematics.
Rau, Markus Michael; Paech, Kerstin; Seitz, Stella
2016-01-01
Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by common suboptimal binning techniques and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to $6 \\, \\sigma$ for the dark energy equation of state parameter $w$. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalised resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surve...
How serious can the stealth bias be in gravitational wave parameter estimation?
Vitale, Salvatore
2013-01-01
The upcoming direct detection of gravitational waves will open a window to probing the strong-field regime of general relativity (GR). As a consequence, waveforms that include the presence of deviations from GR have been developed (e.g. in the parametrized post-Einsteinian approach). TIGER, a data analysis pipeline which builds Bayesian evidence to support or question the validity of GR, has been written and tested. In particular, it was shown recently that data from the LIGO and Virgo detectors will allow us to detect deviations from GR smaller than can be probed with Solar System tests and pulsar timing measurements or not accessible with conventional tests of GR. However, evidence from several detections is required before a deviation from GR can be confidently claimed. An interesting consequence is that, should GR not be the correct theory of gravity in its strong field regime, using standard GR templates for the matched filter analysis of interferometer data will introduce biases in the gravitational wave m...
Rau, Markus Michael; Hoyle, Ben; Paech, Kerstin; Seitz, Stella
2017-04-01
Photometric redshift uncertainties are a major source of systematic error for ongoing and future photometric surveys. We study different sources of redshift error caused by choosing a suboptimal redshift histogram bin width and propose methods to resolve them. The selection of a too large bin width is shown to oversmooth small-scale structure of the radial distribution of galaxies. This systematic error can significantly shift cosmological parameter constraints by up to 6σ for the dark energy equation-of-state parameter w. Careful selection of bin width can reduce this systematic by a factor of up to 6 as compared with commonly used current binning approaches. We further discuss a generalized resampling method that can correct systematic and statistical errors in cosmological parameter constraints caused by uncertainties in the redshift distribution. This can be achieved without any prior assumptions about the shape of the distribution or the form of the redshift error. Our methodology allows photometric surveys to obtain unbiased cosmological parameter constraints using a minimum number of spectroscopic calibration data. For a DES-like galaxy clustering forecast, we obtain unbiased results with respect to errors caused by suboptimal histogram bin width selection, using only 5k representative spectroscopic calibration objects per tomographic redshift bin.
Schaefer, Bjoern Malte; Heisenberg, Lavinia
2010-01-01
The subject of this paper is an investigation of the nonlinear contributions to the spectrum of the integrated Sachs-Wolfe (iSW) effect. We derive the corrections to the iSW-auto spectrum and the iSW-tracer cross-spectrum consistently to third order in perturbation theory and analyse the cumulative signal-to-noise ratio for a cross-correlation between the PLANCK and EUCLID data sets as a function of multipole order. We quantify the parameter sensitivity and the statistical error bounds on the cosmological parameters Omega_m, sigma_8, h, n_s and w from the linear iSW-effect and the systematic parameter estimation bias due to the nonlinear corrections in a Fisher-formalism, analysing the error budget in its dependence on multipole order. Our results include: (i) the spectrum of the nonlinear iSW-effect can be measured with 0.8\sigma statistical significance, (ii) nonlinear corrections dominate the spectrum starting from l=100, (iii) an anticorrelation of the CMB temperature with tracer density on high multipo...
DEFF Research Database (Denmark)
Sadiq, Muhammad; Tscherning, Carl C.; Ahmad, Zulfiqar
2009-01-01
This paper deals with the analysis of gravity anomaly and precise levelling in conjunction with GPS-Levelling data for the computation of a gravimetric geoid and an estimate of the height system bias parameter N-o for the vertical datum in Pakistan by means of least squares collocation technique...... covariance parameters has facilitated to achieve gravimetric height anomalies in a global geocentric datum. Residual terrain modeling (RTM) technique has been used in combination with the EGM96 for the reduction and smoothing of the gravity data. A value for the bias parameter N-o has been estimated...... with reference to the local GPS-Levelling datum that appears to be 0.705 m with 0.07 m mean square error. The gravimetric height anomalies were compared with height anomalies obtained from GPS-Levelling stations using least square collocation with and without bias adjustment. The bias adjustment minimizes...
Recursive bias estimation for high dimensional smoothers
Energy Technology Data Exchange (ETDEWEB)
Hengartner, Nicolas W [Los Alamos National Laboratory; Matzner-lober, Eric [UHB, FRANCE; Cornillon, Pierre - Andre [INRA
2008-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of that bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
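The residual-smoothing correction the abstract describes can be sketched in a few lines (one step of "twicing" with a Nadaraya-Watson smoother); the kernel, bandwidth, and test function below are assumptions for illustration, and the target is noiseless so all the error is bias:

```python
import numpy as np

def kernel_smooth(x, y, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel (deliberately oversmoothing)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

x = np.linspace(0.0, 1.0, 200)
f = np.sin(2.0 * np.pi * x)   # noiseless target, so estimation error = bias

h = 0.15                       # large bandwidth -> small variance, large bias
fit1 = kernel_smooth(x, f, h)
# One bias-correction step: smooth the residuals and add the estimate back.
fit2 = fit1 + kernel_smooth(x, f - fit1, h)

err1 = np.max(np.abs(fit1 - f))
err2 = np.max(np.abs(fit2 - f))
```

With smoothing operator S, the first fit is Sf and the corrected fit is (2S - S²)f, so the bias operator improves from (I - S) to (I - S)²; iterating further is what the paper relates to L2-Boosting.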
The estimation method of GPS instrumental biases
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
A model of estimating the global positioning system (GPS) instrumental biases and the methods to calculate the relative instrumental biases of satellite and receiver are presented. The calculated results of GPS instrumental biases, the relative instrumental biases of satellite and receiver, and total electron content (TEC) are also shown. Finally, the stability of GPS instrumental biases as well as that of satellite and receiver instrumental biases are evaluated, indicating that they are very stable during a period of two and a half months.
A prescription for galaxy biasing evolution as a nuisance parameter
Clerkin, L.; Kirk, D.; Lahav, O.; Abdalla, F. B.; Gaztañaga, E.
2015-04-01
There is currently no consistent approach to modelling galaxy bias evolution in cosmological inference. This lack of a common standard makes the rigorous comparison or combination of probes difficult. We show that the choice of biasing model has a significant impact on cosmological parameter constraints for a survey such as the Dark Energy Survey (DES), considering the two-point correlations of galaxies in five tomographic redshift bins. We find that modelling galaxy bias with a free biasing parameter per redshift bin gives a Figure of Merit (FoM) for dark energy equation of state parameters w0, wa smaller by a factor of 10 than if a constant bias is assumed. An incorrect bias model will also cause a shift in measured values of cosmological parameters. Motivated by these points and focusing on the redshift evolution of linear bias, we propose the use of a generalized galaxy bias which encompasses a range of bias models from theory, observations and simulations, b(z) = c + (b0 - c)/D(z)α, where parameters c, b0 and α depend on galaxy properties such as halo mass. For a DES-like galaxy survey, we find that this model gives an unbiased estimate of w0, wa with the same number or fewer nuisance parameters and a higher FoM than a simple b(z) model allowed to vary in z-bins. We show how the parameters of this model are correlated with cosmological parameters. We fit a range of bias models to two recent data sets, and conclude that this generalized parametrization is a sensible benchmark expression of galaxy bias on large scales.
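The proposed parametrization can be evaluated directly. Note that the growth factor below uses a crude matter-dominated approximation D(z) = 1/(1+z), normalised to D(0) = 1, rather than a proper cosmological growth calculation, and the parameter values are arbitrary:

```python
import numpy as np

def growth_factor(z):
    """Crude matter-dominated approximation D(z) = 1/(1+z), normalised so D(0) = 1.
    A real analysis would integrate the linear growth ODE for the chosen cosmology."""
    return 1.0 / (1.0 + np.asarray(z, dtype=float))

def galaxy_bias(z, c, b0, alpha):
    """Generalised bias evolution b(z) = c + (b0 - c) / D(z)^alpha from the paper;
    c, b0 and alpha depend on galaxy properties such as halo mass."""
    return c + (b0 - c) / growth_factor(z) ** alpha

z = np.linspace(0.0, 2.0, 5)
b = galaxy_bias(z, c=0.5, b0=1.2, alpha=1.0)
```

With D(0) = 1 the model returns b0 at z = 0 by construction, and for b0 > c, alpha > 0 the bias grows toward higher redshift, the qualitative behaviour the parametrization is meant to capture.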
Optomechanical parameter estimation
Ang, Shan Zheng; Bowen, Warwick P; Tsang, Mankei
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cram\\'er-Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation-maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cram\\'er-Rao bound most closely. With its ability to estimate most of the system parameters, the EM algorithm is envisioned to be useful for optomechanical sensing, atomic magnetometry, and classical or quantum system identification applications in general.
Statistical framework for estimating GNSS bias
Vierinen, Juha; Rideout, William C; Erickson, Philip J; Norberg, Johannes
2015-01-01
We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual frequency GNSS s...
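A toy version of the linear least-squares step (omitting the paper's structure-function weighting and outlier rejection): receiver biases and vertical TEC values are solved jointly from simulated observations, with one receiver's bias fixed to zero because the biases are only determined up to a common offset. The dense geometry and noise level are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_rcv, n_pts = 4, 30
bias_true = np.array([0.0, 2.5, -1.0, 0.8])   # receiver biases (TECu); receiver 0 is the gauge
vtec_true = rng.uniform(5.0, 50.0, n_pts)     # vertical TEC at each ionospheric pierce point

# Dense toy geometry: every receiver observes every pierce point.
A = np.zeros((n_rcv * n_pts, (n_rcv - 1) + n_pts))
y = np.zeros(n_rcv * n_pts)
k = 0
for r in range(n_rcv):
    for p in range(n_pts):
        if r > 0:
            A[k, r - 1] = 1.0            # bias column (receiver 0 excluded as gauge)
        A[k, (n_rcv - 1) + p] = 1.0      # vertical TEC column for this pierce point
        y[k] = vtec_true[p] + bias_true[r] + 0.1 * rng.normal()
        k += 1

sol, *_ = np.linalg.lstsq(A, y, rcond=None)
bias_hat = sol[:n_rcv - 1]
```

The same normal-equations structure scales from a single station to a global network; the paper's contribution is in how the residuals are weighted and screened, not in this algebra.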
Two biased estimation techniques in linear regression: Application to aircraft
Klein, Vladislav
1988-01-01
Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
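Principal components regression, one of the two biased techniques discussed, can be sketched as follows (no centring or scaling, for brevity; the synthetic design matrix stands in for the flight-test data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic design with two nearly collinear regressors and one independent one.
n = 60
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 1e-2 * rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 1.0, 0.5]) + 0.05 * rng.normal(size=n)

def pcr(X, y, k):
    """Principal components regression: project X onto its top-k right singular
    vectors, regress y on the component scores, then map coefficients back."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :k] * s[:k]                  # n x k component scores
    gamma, *_ = np.linalg.lstsq(scores, y, rcond=None)
    return Vt[:k].T @ gamma                    # back to original coordinates

beta_full = pcr(X, y, 3)                       # keeping all components = ordinary LS
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_2 = pcr(X, y, 2)                          # drop the near-null direction
```

Dropping the smallest singular direction is what removes the variance inflation caused by collinearity, at the cost of a bias along the discarded direction; the eigensystem analysis mentioned in the abstract is exactly the inspection of these singular values.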
Parameter Estimation Through Ignorance
Du, Hailiang
2015-01-01
Dynamical modelling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A new relatively simple method of parameter estimation for nonlinear systems is presented, based on variations in the accuracy of probability forecasts. It is illustrated on the Logistic Map, the Henon Map and the 12-D Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The new method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This new approach is easier to implement in practice than alter...
Revisiting Cosmological parameter estimation
Prasad, Jayanti
2014-01-01
Constraining theoretical models by measuring their parameters from cosmic microwave background (CMB) anisotropy data is one of the most active areas in cosmology. WMAP, Planck and other recent experiments have shown that the six-parameter standard $\Lambda$CDM cosmological model still best fits the data. Bayesian methods based on Markov Chain Monte Carlo (MCMC) sampling have played a leading role in parameter estimation from CMB data. In one of our recent studies \cite{2012PhRvD..85l3008P} we showed that particle swarm optimization (PSO), a population-based search procedure, can also be used effectively to find the cosmological parameters that best fit the WMAP seven-year data. In the present work we show that PSO can not only find the best-fit point, it can also sample the parameter space quite effectively, to the extent that we can use the same analysis pipeline to process PSO-sampled points that is used to process the points sampled by Markov Chains, and get consistent res...
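A minimal particle swarm optimizer illustrates the search procedure named in the abstract; the inertia and acceleration constants are common textbook choices, and the one-dimensional quadratic objective is a stand-in for a real likelihood surface:

```python
import numpy as np

rng = np.random.default_rng(5)

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser for a scalar function on an interval.
    Each particle is pulled toward its personal best and the global best."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=n_particles), rng.uniform(size=n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

The visited positions of the swarm form a record of where the objective was evaluated, which is the sense in which the paper reuses the MCMC analysis pipeline on PSO-sampled points.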
Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Bias-reduced estimation of long memory stochastic volatility
DEFF Research Database (Denmark)
Frederiksen, Per; Nielsen, Morten Ørregaard
We propose to use a variant of the local polynomial Whittle estimator to estimate the memory parameter in volatility for long memory stochastic volatility models with potential nonstationarity in the volatility process. We show that the estimator is asymptotically normal and capable of attaining bias reduction as well as a rate of convergence arbitrarily close to the parametric rate, n^{1/2}. A Monte Carlo study is conducted to support the theoretical results, and an analysis of daily exchange rates demonstrates the empirical usefulness of the estimators.
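A minimal sketch of semiparametric memory estimation, using the classic log-periodogram (GPH) regression rather than the local polynomial Whittle variant of the paper; the bandwidth choice `m = sqrt(n)` is a common convention, not the paper's.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d.

    Illustrative sketch only: regress the log periodogram at the first m
    Fourier frequencies on -2*log(frequency); the slope estimates d.
    """
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                       # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - np.mean(x))
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    X = -2 * np.log(freqs)
    X = X - X.mean()                            # centre the regressor
    return np.sum(X * np.log(periodogram)) / np.sum(X ** 2)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)               # short memory: d should be near 0
d_white = gph_estimate(white)
```

For white noise the estimate should hover near zero; a long-memory series would push it toward its true d.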
Estimation of attitude sensor timetag biases
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Institute of Scientific and Technical Information of China (English)
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we define the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. We then adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and simulation results are reported to illustrate the performance of the bias correction for the AIMLE.
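A generic sketch of one step of parametric-bootstrap bias correction in the spirit of Kuk (1995), applied here to the (downward-biased) variance MLE; the normal model and all variable names are illustrative assumptions, not the AIMLE of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def var_mle(x):
    return np.mean((x - x.mean()) ** 2)        # biased MLE of sigma^2

def bootstrap_bias_correct(x, estimator, n_boot=500, rng=rng):
    """One step of parametric-bootstrap bias correction:
    theta_corrected = theta_hat - estimated_bias, where the bias is
    estimated by re-fitting on samples drawn at theta_hat."""
    theta_hat = estimator(x)
    boots = [estimator(rng.normal(0.0, np.sqrt(theta_hat), size=len(x)))
             for _ in range(n_boot)]
    bias = np.mean(boots) - theta_hat
    return theta_hat - bias

x = rng.normal(0.0, 1.0, size=20)              # true sigma^2 = 1
theta_raw = var_mle(x)
theta_bc = bootstrap_bias_correct(x, var_mle)
```

Since the variance MLE divides by n instead of n-1, the correction should push the estimate upward.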
Noise Induces Biased Estimation of the Correction Gain
Ahn, Jooeun; Zhang, Zhaoran; Sternad, Dagmar
2016-01-01
The detection of an error in the motor output and the correction in the next movement are critical components of any form of motor learning. Accordingly, a variety of iterative learning models have assumed that a fraction of the error is adjusted in the next trial. This critical fraction, the correction gain, learning rate, or feedback gain, has been frequently estimated via least-squares regression of the obtained data set. Such data contain not only the inevitable noise from motor execution, but also noise from measurement. It is generally assumed that this noise averages out with large data sets and does not affect the parameter estimation. This study demonstrates that this is not the case and that in the presence of noise the conventional estimate of the correction gain has a significant bias, even with the simplest model. Furthermore, this bias does not decrease with increasing length of the data set. This study reveals this limitation of current system identification methods and proposes a new method that overcomes this limitation. We derive an analytical form of the bias from a simple regression method (Yule-Walker) and develop an improved identification method. This bias is discussed as one example of how the dynamics of noise can introduce significant distortions in data analysis. PMID:27463809
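The attenuation effect described above can be reproduced in a few lines: fit a naive Yule-Walker/least-squares AR(1) coefficient to a process observed with and without measurement noise. The parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, n = 0.8, 20000                         # illustrative "correction gain"

# AR(1) process x_t = a*x_{t-1} + w_t, observed with measurement noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.standard_normal()
y = x + 1.0 * rng.standard_normal(n)           # noisy observations

def lag1_ar_estimate(z):
    """Naive Yule-Walker / least-squares AR(1) coefficient."""
    z = z - z.mean()
    return np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)

a_clean = lag1_ar_estimate(x)                  # close to a_true
a_noisy = lag1_ar_estimate(y)                  # attenuated toward zero
```

Even with 20,000 samples the noisy-data estimate stays well below the true gain: the bias does not average out with longer records, exactly as the abstract states.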
Recursive bias estimation for high dimensional regression smoothers
Energy Technology Data Exchange (ETDEWEB)
Hengartner, Nicolas W [Los Alamos National Laboratory]; Cornillon, Pierre-Andre [AGROSUP, FRANCE]; Matzner-Lober, Eric [UNIV OF RENNES, FRANCE]
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of it obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to the Sobolev space H(ν) with ν > d/2. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
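The residual-smoothing idea above can be sketched with any linear smoother: over-smooth once, then repeatedly smooth the residuals and add the result back. The Gaussian kernel smoother, bandwidth, and iteration count below are illustrative assumptions, not the paper's thin plate spline setting.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

def smoother_matrix(x, bandwidth=0.25):
    """Deliberately over-smoothing linear smoother (kernel running mean)."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return K / K.sum(axis=1, keepdims=True)

S = smoother_matrix(x)
m_hat = S @ y                                  # heavily biased initial fit
for _ in range(20):                            # iterative bias correction:
    m_hat = m_hat + S @ (y - m_hat)            # smooth residuals, add back

truth = np.sin(2 * np.pi * x)
mse_once = np.mean((S @ y - truth) ** 2)       # one-pass over-smoother
mse_iter = np.mean((m_hat - truth) ** 2)       # bias-corrected smoother
```

The iterated fit recovers most of the signal the one-pass over-smoother flattened away, at a modest variance cost; an appropriate stopping rule (as in the companion L2-Boosting paper) controls that trade-off.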
Errors on errors - Estimating cosmological parameter covariance
Joachimi, Benjamin
2014-01-01
Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
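The leading-order bias from inverting a covariance matrix estimated from a finite suite of realisations has a well-known multiplicative correction, commonly called the Hartlap factor. A minimal sketch, with an identity true covariance assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n_sims = 10, 50                             # data dimension, realisations

sims = rng.standard_normal((n_sims, p))        # draws from a known covariance
cov_hat = np.cov(sims, rowvar=False)           # unbiased covariance estimate

# The *inverse* of cov_hat is biased high; the standard first-order remedy
# multiplies it by (n_sims - p - 2) / (n_sims - 1) ("Hartlap factor").
hartlap = (n_sims - p - 2) / (n_sims - 1)
prec_naive = np.linalg.inv(cov_hat)
prec_corrected = hartlap * prec_naive
```

The factor is always below one for finite suites, shrinking the overconfident naive precision matrix; it does not remove the additional variance in the parameter errors, which is part of what the review above discusses.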
PARAMETER ESTIMATION OF EXPONENTIAL DISTRIBUTION
Institute of Scientific and Technical Information of China (English)
XU Haiyan; FEI Heliang
2005-01-01
Because of the importance of grouped data, many scholars have studied this kind of data. But few documents have been concerned with the threshold parameter. In this paper, we assume that the threshold parameter is smaller than the first observed point. Then, on the basis of the two-parameter exponential distribution, the maximum likelihood estimates of both parameters are given, sufficient and necessary conditions for their existence and uniqueness are argued, and the asymptotic properties of the estimates are also presented, from which approximate confidence intervals for the parameters are derived. At the same time, the estimation of the parameters is generalized, and some methods are introduced to obtain explicit expressions for these generalized estimates. A special case, in which the first failure time of the units is observed, is also considered.
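For orientation, the textbook ungrouped-sample case has closed-form MLEs: the threshold is the smallest observation and the scale is the mean excess over it. This is only the complete-data analogue of the grouped-data problem the paper treats.

```python
import numpy as np

rng = np.random.default_rng(5)

def two_param_exp_mle(x):
    """MLEs for the two-parameter exponential on a complete sample:
    threshold = smallest observation, scale = mean excess over it.
    (The paper treats grouped data; this is the ungrouped textbook case.)"""
    mu_hat = np.min(x)
    sigma_hat = np.mean(x) - mu_hat
    return mu_hat, sigma_hat

x = 2.0 + rng.exponential(scale=3.0, size=5000)   # threshold 2, scale 3
mu_hat, sigma_hat = two_param_exp_mle(x)
```

Note the threshold MLE sits at the sample minimum, which is exactly why the paper's assumption that the threshold lies below the first observed point matters.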
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem, computing y(t) given known parameters, has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem, estimating parameters given measured y(t), is at least as important as the forward problem. However, in the food science literature little attention has been paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
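A scaled sensitivity coefficient is the parameter times the model's partial derivative with respect to that parameter, X' = p * dy/dp, so that coefficients for different parameters share the units of y and can be compared for identifiability. A finite-difference sketch, using a hypothetical log-linear survivor-curve model and parameter values:

```python
import numpy as np

# First-order microbial inactivation in log10 form: y(t) = logN0 - t/D
def model(t, logN0, D):
    return logN0 - t / D

def scaled_sensitivity(t, params, name, h=1e-6):
    """Scaled sensitivity coefficient X' = p * d(model)/dp,
    via central finite differences with relative step h (sketch only)."""
    p = dict(params)
    up, dn = dict(p), dict(p)
    up[name] *= (1 + h)
    dn[name] *= (1 - h)
    deriv = (model(t, **up) - model(t, **dn)) / (2 * h * p[name])
    return p[name] * deriv

t = np.linspace(0, 10, 50)
params = {"logN0": 6.0, "D": 2.5}              # hypothetical values
X_logN0 = scaled_sensitivity(t, params, "logN0")
X_D = scaled_sensitivity(t, params, "D")
```

Here X_logN0 is constant while X_D grows with t; because their shapes differ, the two parameters are jointly identifiable from the curve, which is the kind of diagnosis the article advocates.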
A Prescription for Galaxy Biasing Evolution as a Nuisance Parameter
Clerkin, L; Lahav, O; Abdalla, F B; Gaztanaga, E
2014-01-01
There is currently no consistent approach to modelling galaxy bias evolution in cosmological inference. This lack of a common standard makes the rigorous comparison or combination of probes difficult. We show that the choice of biasing model has a significant impact on cosmological parameter constraints for a survey such as the Dark Energy Survey (DES), considering the 2-point correlations of galaxies in five tomographic redshift bins. We find that modelling galaxy bias with a free biasing parameter per redshift bin gives a Figure of Merit (FoM) for Dark Energy equation of state parameters $w_0, w_a$ smaller by a factor of 10 than if a constant bias is assumed. An incorrect bias model will also cause a shift in measured values of cosmological parameters. Motivated by these points and focusing on the redshift evolution of linear bias, we propose the use of a generalised galaxy bias which encompasses a range of bias models from theory, observations and simulations, $b(z) = c + (b_0 - c)/D(z)^\alpha$, where $c, ...
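The proposed bias law is straightforward to evaluate once a growth factor D(z) is supplied. In the sketch below the growth factor is a deliberately crude placeholder, D(z) = 1/(1+z) (exact only in an Einstein-de Sitter universe), and the parameter values are made up for illustration:

```python
import numpy as np

def galaxy_bias(z, b0, c, alpha, D):
    """Generalised bias evolution b(z) = c + (b0 - c) / D(z)**alpha."""
    return c + (b0 - c) / D(z) ** alpha

# Placeholder growth factor: D(z) = 1/(1+z) holds in Einstein-de Sitter;
# a real analysis would integrate the linear growth ODE for the cosmology.
D = lambda z: 1.0 / (1.0 + z)

z = np.linspace(0.0, 1.5, 5)
b = galaxy_bias(z, b0=1.2, c=0.5, alpha=1.0, D=D)
```

At z = 0 the law reduces to b = b0 by construction, and for b0 > c the bias grows with redshift, matching the qualitative behaviour of most models the parametrisation is meant to encompass.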
Parameter estimation in quantum optics
D'Ariano, G M; Sacchi, M F; Paris, Matteo G. A.; Sacchi, Massimiliano F.
2000-01-01
We address several estimation problems in quantum optics by means of the maximum-likelihood principle. We consider Gaussian state estimation and the determination of the coupling parameters of quadratic Hamiltonians. Moreover, we analyze different schemes of phase-shift estimation. Finally, the absolute estimation of the quantum efficiency of both linear and avalanche photodetectors is studied. In all the considered applications, the Gaussian bound on statistical errors is attained with a few thousand data.
Toward unbiased estimations of the statefinder parameters
Aviles, Alejandro; Luongo, Orlando
2016-01-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated and biased by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large, making standard cosmography unsuitable for future and wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cos...
A bias identification and state estimation methodology for nonlinear systems
Caglayan, A. K.; Lancraft, R. E.
1983-01-01
A computational algorithm for the identification of input and output biases in discrete-time nonlinear stochastic systems is derived by extending the separate bias estimation results for linear systems to the extended Kalman filter formulation. The merits of the approach are illustrated by identifying instrument biases using a terminal configured vehicle simulation.
Photo-z Estimation: An Example of Nonparametric Conditional Density Estimation under Selection Bias
Izbicki, Rafael; Freeman, Peter E
2016-01-01
Redshift is a key quantity for inferring cosmological model parameters. In photometric redshift estimation, cosmologists use the coarse data collected from the vast majority of galaxies to predict the redshift of individual galaxies. To properly quantify the uncertainty in the predictions, however, one needs to go beyond standard regression and instead estimate the full conditional density f(z|x) of a galaxy's redshift z given its photometric covariates x. The problem is further complicated by selection bias: usually only the rarest and brightest galaxies have known redshifts, and these galaxies have characteristics and measured covariates that do not necessarily match those of more numerous and dimmer galaxies of unknown redshift. Unfortunately, there is not much research on how to best estimate complex multivariate densities in such settings. Here we describe a general framework for properly constructing and assessing nonparametric conditional density estimators under selection bias, and for combining two o...
Interval Estimation of Seismic Hazard Parameters
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2016-11-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the magnitude cumulative distribution function dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
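The two hazard functions named above have simple point-estimate forms under Poisson occurrence: the exceedance probability is 1 - exp(-λT(1-F(m))) and the mean return period is 1/(λ(1-F(m))). A sketch with an unbounded Gutenberg-Richter magnitude CDF and made-up parameter values (the paper's contribution, the interval estimation around these quantities, is not reproduced here):

```python
import numpy as np

def exceedance_probability(m, T, rate, b=np.log(10), m_min=2.0):
    """Probability of at least one event with magnitude >= m in time T,
    for Poisson occurrence (rate = mean activity rate) and an unbounded
    Gutenberg-Richter magnitude CDF F(m) = 1 - exp(-b*(m - m_min))."""
    tail = np.exp(-b * (m - m_min))            # 1 - F(m)
    return 1.0 - np.exp(-rate * T * tail)

def mean_return_period(m, rate, b=np.log(10), m_min=2.0):
    tail = np.exp(-b * (m - m_min))
    return 1.0 / (rate * tail)

# e.g. 50 events/year above magnitude 2: magnitude-4 tail fraction is 10^-2
p = exceedance_probability(m=4.0, T=1.0, rate=50.0)
rp = mean_return_period(m=4.0, rate=50.0)
```

The product rate*T*(1-F(m)) here is 0.5, i.e. the regime where, per the paper, the activity-rate uncertainty still matters for the interval estimates.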
Bayesian parameter estimation for effective field theories
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Bayesian parameter estimation for effective field theories
Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A
2015-01-01
We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
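The ridge estimator discussed above has the closed form (X'X + kI)^{-1}X'y; the constraint it introduces on the parameter space shows up as shrinkage. A sketch on a deliberately near-singular design (data and the choice k = 1 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ill-conditioned design: two nearly collinear columns
n = 100
x1 = rng.standard_normal(n)
X = np.column_stack([x1, x1 + 1e-3 * rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y: biased, but it stabilises
    the solution when the normal equations approach singularity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)                    # ordinary least squares
beta_ridge = ridge(X, y, 1.0)
```

The sum of the two coefficients (the estimable direction) is pinned down well by both estimators; only the individual coefficients are wild under OLS, illustrating the paper's point that estimable functions, not the full parameter vector, are what survive near-singularity.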
Recursive bias estimation and L2 boosting
Energy Technology Data Exchange (ETDEWEB)
Hengartner, Nicolas W [Los Alamos National Laboratory]; Cornillon, Pierre-Andre [INRA, FRANCE]; Matzner-Lober, Eric [RENNE, FRANCE]
2009-01-01
This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
A New Bias Corrected Version of Heteroscedasticity Consistent Covariance Estimator
Directory of Open Access Journals (Sweden)
Munir Ahmed
2016-06-01
In the presence of heteroscedasticity, different available flavours of the heteroscedasticity consistent covariance matrix estimator (HCCME) are used. However, the available literature shows that these estimators can be considerably biased in small samples. Cribari-Neto et al. (2000) introduced a bias adjustment mechanism and gave a modified White estimator that becomes almost bias-free even in small samples. Extending these results, Cribari-Neto and Galvão (2003) presented a similar bias adjustment mechanism that can be applied to a wide class of HCCMEs. In the present article, we follow the same mechanism as proposed by Cribari-Neto and Galvão to give a bias-corrected version of the HCCME, but we use an adaptive HCCME rather than the conventional HCCME. A Monte Carlo study is used to evaluate the performance of our proposed estimators.
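For context, the two most common HCCME flavours are White's original HC0 and the leverage-adjusted HC3, both sandwich estimators (X'X)^{-1} X' diag(w) X (X'X)^{-1} differing only in the weights w. The sketch below shows these standard forms, not the adaptive or bias-corrected estimators of the article; the data-generating process is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
e = rng.standard_normal(n) * (0.5 + np.abs(X[:, 1]))   # heteroscedastic errors
y = X @ np.array([1.0, 2.0]) + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
h = np.sum((X @ XtX_inv) * X, axis=1)          # leverages (hat-matrix diag)

def hccme(weights):
    """Sandwich estimator (X'X)^{-1} X' diag(w) X (X'X)^{-1}."""
    meat = (X * weights[:, None]).T @ X
    return XtX_inv @ meat @ XtX_inv

V_hc0 = hccme(resid ** 2)                      # White's original HC0
V_hc3 = hccme(resid ** 2 / (1 - h) ** 2)       # small-sample adjusted HC3
```

Inflating each squared residual by its leverage is precisely the kind of small-sample adjustment the bias-correction literature refines: HC0 is known to be too optimistic when n is small.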
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Parameter Estimation Using VLA Data
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe the planetary nebula NGC 7027 at three different wavelengths: 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing of the estimated parameters
Yatracos, Yannis G.
2013-01-01
The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat\psi$ is a function of the MLE $\hat\theta.$ To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\hat\psi_{MU},$ often reduces either totally or partially $\hat\psi$'s bias when estimating a shape parameter $\psi.$ For the Pareto model $\hat...
Load Estimation from Modal Parameters
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández;
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from an FRF matrix assembled from modal parameters and the experimental responses recorded using standard sensors is presented. The method implies the inversion of the FRF matrix which, in general, is not a full rank matrix due to the truncation of the modal space. Furthermore, some recommendations are included to improve...
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
a set of tools and the techniques necessary to estimate the kinetic and stoichiometric parameters for wastewater treatment processes using data obtained from experimental batch activity tests. These methods and tools are mainly intended for practical applications, i.e. by consultants… The literature is mostly based on the Activated Sludge Model (ASM) framework and its appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii)… Models have also been used as an integral part of the comprehensive analysis and interpretation of data obtained from a range of experimental methods from the laboratory, as well as pilot-scale studies, to characterise and study wastewater treatment plants. In this regard, models help to properly explain
Mode choice model parameters estimation
Strnad, Irena
2010-01-01
The present work focuses on parameter estimation for two mode choice models, the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application in a traffic model. It captures the trip factors affecting the choice of each mode and their relative importance to the choice made. When trip factor values are known, it...
Bias-corrected estimation of stable tail dependence function
DEFF Research Database (Denmark)
Beirlant, Jan; Escobar-Bach, Mikael; Goegebeur, Yuri
2016-01-01
We consider the estimation of the stable tail dependence function. We propose a bias-corrected estimator and we establish its asymptotic behaviour under suitable assumptions. The finite sample performance of the proposed estimator is evaluated by means of an extensive simulation study where...
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Directory of Open Access Journals (Sweden)
Ashton M Verdery
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Uncertainty relation based on unbiased parameter estimations
Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing
2017-02-01
Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform to quantum metrology, where measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information from the second measurement of an incompatible observable. In terms of Fisher information, we propose a kind of information comparison measure and explore various types of trade-offs between information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information about incompatible observables.
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
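The Gauss-Newton method for algebraic models listed in the contents above iterates theta <- theta + (J'J)^{-1} J'r, linearising the model around the current estimate. A minimal sketch on an assumed saturation-growth model with synthetic data (model, starting guess, and noise level are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# Algebraic model: y = theta1 * (1 - exp(-theta2 * t))
def f(t, th):
    return th[0] * (1.0 - np.exp(-th[1] * t))

def jacobian(t, th):
    """Analytic Jacobian of f with respect to (theta1, theta2)."""
    J = np.empty((len(t), 2))
    J[:, 0] = 1.0 - np.exp(-th[1] * t)
    J[:, 1] = th[0] * t * np.exp(-th[1] * t)
    return J

t = np.linspace(0.1, 5, 30)
theta_true = np.array([2.0, 1.5])
y = f(t, theta_true) + 0.01 * rng.standard_normal(len(t))

th = np.array([1.0, 1.0])                      # starting guess
for _ in range(20):                            # Gauss-Newton iterations
    r = y - f(t, th)                           # residuals at current estimate
    J = jacobian(t, th)
    th = th + np.linalg.solve(J.T @ J, J.T @ r)
```

In practice a Levenberg-Marquardt damping term is added to (J'J) when the step misbehaves, which is part of what the book's "practical guidelines for algorithm implementation" address.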
A Class of Biased Estimators Based on SVD in Linear Model
Institute of Scientific and Technical Information of China (English)
GUI Qing-ming; DUAN Qing-tang; GUO Jian-feng; ZHOU Qiao-yun
2003-01-01
In this paper, a class of new biased estimators for the linear model is proposed by modifying the singular values of the design matrix, so as to directly overcome the difficulties caused by ill-conditioning in the design matrix. Some important properties of these new estimators are obtained. By appropriate choices of the biasing parameters, we construct many useful and important estimators. An application of these new estimators to three-dimensional position adjustment by distance in spatial coordinate surveys is given. The results show that the proposed biased estimators can effectively overcome ill-conditioning and that their numerical stability is preferable to ordinary least squares estimation.
Estimates of bias and uncertainty in recorded external dose
Energy Technology Data Exchange (ETDEWEB)
Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.
1994-10-01
A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements.
Evaluation of P2-C2 bias estimation
Santos, M. C.; van der Bree, R.; van der Marel, H.; Verhagen, S.; Garcia, C. A.
2010-12-01
The availability of the second civilian code C2 created a new issue to be considered: the bias relating the P2 and C2 signals. This issue is important when merging C2-capable and legacy receivers, and when processing data collected by a C2-capable receiver with satellite clock values generated using a legacy receiver network. The P2-C2 bias is essentially a consequence of the fact that receiver and satellite hardware delays for C2 measurements are not necessarily the same as those for P2. Knowing this bias makes it possible to use C2 as an observable for positioning with IGS clock products. We use the PPP-based approach for P2-C2 bias estimation developed at the University of New Brunswick, implemented in GAPS, the GPS Analysis and Positioning Software. We also determine the P2-C2 bias directly from the code observations. This poster presents and discusses the evaluation of P2-C2 values estimated from a subset of the IGS L2C Test Network. The values are applied to observations collected by C2-capable receivers in point positioning mode. Coordinate repeatability indicates an improvement of up to 50% when using the P2-C2 bias.
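The direct determination of the bias from code observations can be illustrated with a minimal sketch: because both codes share the same geometry, clock and (to first order) ionospheric terms, averaging the epoch-wise P2 - C2 differences isolates the hardware bias. The data below are synthetic and the bias value is hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs = 500
true_bias = 0.42  # metres; hypothetical P2-C2 hardware bias

# Synthetic code observations sharing a common geometry + clock term
geometry = 2.2e7 + rng.normal(0, 30, n_epochs)            # metres
p2 = geometry + rng.normal(0, 0.3, n_epochs)              # P2 code noise
c2 = geometry - true_bias + rng.normal(0, 0.3, n_epochs)  # C2 code noise

# Direct estimate: the common terms cancel in the P2 - C2 difference
bias_est = np.mean(p2 - c2)
print(f"estimated P2-C2 bias: {bias_est:.3f} m")
```

Averaging over many epochs suppresses the code noise; in practice carrier-smoothing and satellite-by-satellite averaging would be used instead of this raw mean.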
Weak Lensing Peak Finding: Estimators, Filters, and Biases
Schmidt, Fabian
2010-01-01
Large catalogs of shear-selected peaks have recently become a reality. In order to properly interpret the abundance and properties of these peaks, it is necessary to take into account the effects of the clustering of source galaxies, among themselves and with the lens. In addition, the preferred selection of lensed galaxies in a flux- and size-limited sample leads to fluctuations in the apparent source density which correlate with the lensing field (lensing bias). In this paper, we investigate these issues for two different choices of shear estimators which are commonly in use today: globally-normalized and locally-normalized estimators. While in principle equivalent, in practice these estimators respond differently to systematic effects such as lensing bias and cluster member dilution. Furthermore, we find that which estimator is statistically superior depends on the specific shape of the filter employed for peak finding; suboptimal choices of the estimator+filter combination can result in a suppression of t...
Quantifying and controlling biases in dark matter halo concentration estimates
Poveda-Ruiz, C N; Muñoz-Cuartas, J C
2016-01-01
We use bootstrapping to estimate the bias of concentration estimates of N-body dark matter halos as a function of particle number. We find that algorithms based on the maximum radial velocity and radial particle binning tend to overestimate the concentration by 15%-20% for halos sampled with 200 particles and by 7%-10% for halos sampled with 500 particles. To control this bias at low particle numbers we propose a new algorithm that estimates halo concentrations based on the integrated mass profile. The method uses the full particle information without any binning, making it reliable in cases where low numerical resolution becomes a limitation for other methods. This method reduces the bias to less than 3% for halos sampled with 200-500 particles. The velocity and density methods have to use halos with at least 4000 particles in order to keep the biases down to the same low level. We also show that the mass-concentration relationship could be shallower than expected once the biases of the different concentrat...
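The binning-free idea can be sketched as follows: fit the analytic NFW enclosed-mass fraction directly to the empirical cumulative mass profile built from the sorted particle radii. This is a simplified illustration on synthetic NFW radii, not the authors' implementation; the grid search and sample size are arbitrary choices.

```python
import numpy as np

def nfw_g(y):
    # NFW mass function g(y) = ln(1+y) - y/(1+y)
    return np.log1p(y) - y / (1.0 + y)

def sample_nfw_radii(c, n, rng):
    # Inverse-CDF sampling of r/R_vir from the NFW enclosed-mass fraction
    u = rng.random(n)
    radii = np.empty(n)
    for i, ui in enumerate(u):
        lo, hi = 1e-6, 1.0
        for _ in range(60):  # bisection on g(c x)/g(c) = u
            mid = 0.5 * (lo + hi)
            if nfw_g(c * mid) / nfw_g(c) < ui:
                lo = mid
            else:
                hi = mid
        radii[i] = 0.5 * (lo + hi)
    return np.sort(radii)

def fit_concentration(radii):
    # Binning-free fit: match the empirical enclosed-mass fraction
    # to the analytic NFW profile over a grid of concentrations
    frac = (np.arange(1, len(radii) + 1) - 0.5) / len(radii)
    grid = np.arange(2.0, 20.0, 0.05)
    cost = [np.sum((frac - nfw_g(c * radii) / nfw_g(c)) ** 2) for c in grid]
    return grid[int(np.argmin(cost))]

rng = np.random.default_rng(7)
c_hat = fit_concentration(sample_nfw_radii(8.0, 2000, rng))
print(f"recovered concentration: {c_hat:.2f}")
```

Because every particle enters the cumulative profile individually, the fit degrades gracefully as the particle count drops, which is the property the abstract emphasizes.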
A Method for Estimating BeiDou Inter-frequency Satellite Clock Bias
Directory of Open Access Journals (Sweden)
LI Haojun
2016-02-01
A new method for estimating the BeiDou inter-frequency satellite clock bias is proposed, addressing the shortcomings of current methods. The new method accounts for both the constant and the variable parts of the inter-frequency satellite clock bias. Data from 10 observation stations are processed to validate the new method, and the characteristics of the BeiDou inter-frequency satellite clock bias are analyzed using the computed results. The results indicate that the bias is stable in the short term. The estimated biases are then modeled; the model results show that a 10-parameter model for each satellite can represent the BeiDou inter-frequency satellite clock bias well, with an accuracy at the cm level. When the model parameters from the first day are used to compute the bias for the second day, the accuracy also reaches the cm level. Based on this stability and modeling, a strategy for the BeiDou satellite clock service is presented as a reference for BeiDou.
Application of chaotic theory to parameter estimation
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
High-precision parameter estimation is very important for control system design and compensation. This paper utilizes the properties of chaotic systems for parameter estimation. Theoretical analysis and experimental results indicate that this method has extremely high sensitivity and resolving power. The most important contribution of this paper is that it departs from the traditional engineering viewpoint and realizes parameter estimation based purely on unstable chaotic systems.
Maximum-likelihood fits to histograms for improved parameter estimation
Fowler, Joseph W
2013-01-01
Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
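The bias described above is easy to reproduce for the simplest possible model, a constant rate across bins: the Neyman chi^2 statistic (with 1/n_i weights) is minimized by the harmonic mean of the counts, which sits systematically below the Poisson maximum-likelihood answer, the arithmetic mean. A small synthetic sketch, with arbitrary rate and bin counts:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate, n_bins, n_trials = 25.0, 100, 400

chi2_est, ml_est = [], []
for _ in range(n_trials):
    counts = rng.poisson(true_rate, n_bins)
    counts = counts[counts > 0]  # guard: Neyman chi^2 is undefined for empty bins
    # Neyman chi^2: minimise sum (n_i - mu)^2 / n_i  ->  harmonic mean
    chi2_est.append(len(counts) / np.sum(1.0 / counts))
    # Poisson maximum likelihood for a flat model -> arithmetic mean
    ml_est.append(np.mean(counts))

print(f"Neyman chi^2 estimate: {np.mean(chi2_est):.2f} (biased low)")
print(f"Poisson ML estimate:   {np.mean(ml_est):.2f}")
```

The chi^2 estimate is biased low by roughly var/mean = 1 count per bin regardless of how many bins are observed, which is why the bias persists even for large total event counts.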
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in the estimated parameters. By accounting for irrigation pumping uncertainties during the calibration procedure, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration process.
Improving uncertainty estimation in urban hydrological modeling by statistically describing bias
Directory of Open Access Journals (Sweden)
D. Del Giudice
2013-10-01
Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and relatively efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described with a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias, and measurement errors. In our case study, the best performing bias description is the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to address the causes of model discrepancies. Further research should focus on
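The error model described above can be sketched in a few lines: the residual between observation and model output is decomposed into an autoregressive bias term, whose innovation variance grows with the simulated runoff, plus white measurement noise. All numbers below are purely illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
runoff = 5.0 + 4.0 * np.abs(np.sin(np.linspace(0, 20, n)))  # model output

# Output-dependent AR(1) bias: larger discrepancies at high flow
phi = 0.95                       # autoregression coefficient
sigma_b = 0.05 * runoff          # innovation sd grows with simulated runoff
bias = np.zeros(n)
for t in range(1, n):
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b[t])

noise = rng.normal(0, 0.2, n)    # white measurement noise
observed = runoff + bias + noise

# Empirical decomposition of the residual variance into its components
resid = observed - runoff
print(f"residual variance: {np.var(resid):.3f}")
print(f"bias + noise:      {np.var(bias) + np.var(noise):.3f}")
```

The strongly autocorrelated bias dominates the residual, which is exactly why treating residuals as pure white noise yields overconfident prediction bands.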
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
Parameter estimation and reliable fault detection of electric motors
Institute of Scientific and Technical Information of China (English)
Dusan PROGOVAC; Le Yi WANG; George YIN
2014-01-01
Accurate model identification and fault detection are necessary for reliable motor control. Motor-characterizing parameters experience substantial changes due to aging, motor operating conditions, and faults. Consequently, motor parameters must be estimated accurately and reliably during operation. Based on enhanced model structures of electric motors that accommodate both normal and faulty modes, this paper introduces bias-corrected least-squares (LS) estimation algorithms that incorporate functions for correcting estimation bias, forgetting factors for capturing sudden faults, and recursive structures for efficient real-time implementation. Permanent magnet motors are used as a benchmark type for concrete algorithm development and evaluation. Algorithms are presented, their properties are established, and their accuracy and robustness are evaluated by simulation case studies under both normal operations and inter-turn winding faults. Implementation issues from different motor control schemes are also discussed.
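The recursive least-squares core of such schemes, with a forgetting factor for capturing sudden faults, can be sketched for a scalar parameter as follows. The data are synthetic and the fault is emulated as a step change in the true parameter; this illustrates the recursive structure only, not the paper's bias-correction functions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
theta_true = np.where(np.arange(n) < 500, 2.0, 1.2)  # fault at sample 500

# Recursive least squares with forgetting factor lam
lam = 0.97
theta, P = 0.0, 100.0
history = []
for k in range(n):
    x = rng.uniform(0.5, 1.5)                    # regressor (e.g. current)
    y = theta_true[k] * x + rng.normal(0, 0.05)  # noisy measurement
    K = P * x / (lam + x * P * x)                # gain
    theta += K * (y - x * theta)                 # parameter update
    P = (P - K * x * P) / lam                    # covariance update
    history.append(theta)

print(f"estimate before fault: {history[490]:.2f}")
print(f"estimate after fault:  {history[-1]:.2f}")
```

With lam = 0.97 the effective data window is about 1/(1 - lam) = 33 samples, so the estimator tracks the step change within a few dozen samples while still averaging out the measurement noise.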
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters of mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and their comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
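One common continuous-time formulation estimates mass, damping and stiffness by regressing measured acceleration, velocity and displacement against the excitation force. The sketch below uses a simulated single-degree-of-freedom oscillator with hypothetical parameter values; it illustrates the general idea, not the specific method of the paper.

```python
import numpy as np

# True structural parameters (hypothetical): mass, damping, stiffness
m, c, k = 2.0, 0.5, 40.0
dt, n = 1e-3, 20000
t = np.arange(n) * dt
f = np.sin(2.0 * t) + 0.5 * np.sin(5.3 * t)   # excitation force

def deriv(state, ti):
    # State derivative of m x'' + c x' + k x = f(t)
    x, v = state
    fi = np.sin(2.0 * ti) + 0.5 * np.sin(5.3 * ti)
    return np.array([v, (fi - c * v - k * x) / m])

# RK4 integration to generate a synthetic displacement record
states = np.zeros((n, 2))
for i in range(n - 1):
    s, ti = states[i], t[i]
    k1 = deriv(s, ti)
    k2 = deriv(s + 0.5 * dt * k1, ti + 0.5 * dt)
    k3 = deriv(s + 0.5 * dt * k2, ti + 0.5 * dt)
    k4 = deriv(s + dt * k3, ti + dt)
    states[i + 1] = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = states[:, 0]
# Continuous-time estimation: finite-difference derivatives + least squares
v = (x[2:] - x[:-2]) / (2 * dt)
a = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt ** 2
A = np.column_stack([a, v, x[1:-1]])            # m*a + c*v + k*x = f
theta, *_ = np.linalg.lstsq(A, f[1:-1], rcond=None)
print(f"estimated m={theta[0]:.2f}, c={theta[1]:.2f}, k={theta[2]:.2f}")
```

With noise-free data and a small time step the regression recovers the parameters almost exactly; real measurements would require filtering before differentiation.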
Estimated time of arrival and debiasing the time saving bias.
Eriksson, Gabriella; Patten, Christopher J D; Svenson, Ola; Eriksson, Lars
2015-01-01
The time saving bias predicts that the time saved when increasing speed from a high speed is overestimated, and underestimated when increasing speed from a low speed. In a questionnaire, time saving judgements were investigated when information on estimated time of arrival was provided. In an active driving task, an alternative meter indicating the inverted speed was used to debias judgements. The simulated task was to first drive a distance at a given speed, and then drive the same distance again at the speed the driver judged was required to gain exactly 3 min in travel time compared with the first drive. A control group performed the same task with a speedometer and saved less than the targeted 3 min when increasing speed from a high speed, and more than 3 min when increasing from a low speed. Participants in the alternative meter condition were closer to the target. The two studies corroborate the time saving bias and show that biased intuitive judgements can be debiased by displaying the inverted speed. Practitioner Summary: Previous studies have shown a cognitive bias in judgements of the time saved by increasing speed. This simulator study aims to improve driver judgements by introducing a speedometer indicating the inverted speed in active driving. The results show that the bias can be reduced by presenting the inverted speed, and this finding can be used when designing in-car information systems.
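The physical relationship underlying the bias is simple: the time saved over a fixed distance depends on the difference of inverse speeds, so equal speed increments save far less time at high speeds. A minimal sketch with illustrative numbers:

```python
def time_saved_min(distance_km, v_from, v_to):
    """Actual minutes saved over a distance when raising speed (km/h)."""
    return 60.0 * distance_km * (1.0 / v_from - 1.0 / v_to)

# The same +10 km/h increase has a very different payoff:
low = time_saved_min(10, 30, 40)     # from a low speed
high = time_saved_min(10, 110, 120)  # from a high speed
print(f"30->40 km/h over 10 km saves  {low:.1f} min")
print(f"110->120 km/h over 10 km saves {high:.1f} min")
```

The inverse-speed ("paceometer") display works precisely because minutes per kilometre is linear in 1/v, so equal display increments correspond to equal time savings.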
PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL
Institute of Scientific and Technical Information of China (English)
钱炜祺; 蔡金狮
2001-01-01
A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Mitra, Sanjit; Tabitha Paulson, Sonu
2015-03-01
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the PLANCK likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, the assumption of a circular Gaussian beam, or considering only the diagonal part of the bias matrix, can lead to huge errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, use of scalar beam window functions obtained via Monte Carlo simulations starting from a fiducial spectrum, as implemented in PLANCK analyses for example, leads to only a few percent of a sigma deviation of the best-fit parameters. However, we notice more significant differences in the posterior distributions for some of the parameters, which would in turn lead to incorrect errorbars. These differences can be reduced, so that the errorbars match within a few percent, by adding an iterative reanalysis step, where the beam window function is recomputed using the best-fit spectrum estimated in the first step.
Earth Rotation Parameter Estimation by GPS Observations
Institute of Scientific and Technical Information of China (English)
YAO Yibin
2006-01-01
The methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. There are two different ways to estimate ERP: one is the parameter transformation method, and the other is the direct adjustment method with restrictive conditions. By comparing the results estimated with an independent program against IERS results, residual systematic errors can be found in the ERP estimated from GPS observations.
Parameter Estimation in Multivariate Gamma Distribution
Directory of Open Access Journals (Sweden)
V S Vaidyanathan
2015-05-01
The multivariate gamma distribution finds abundant applications in stochastic modelling, hydrology and reliability. Parameter estimation for this distribution is challenging, as it involves many parameters to be estimated simultaneously. In this paper, the form of the multivariate gamma distribution proposed by Mathai and Moschopoulos [10] is considered. This form has nice properties in terms of its marginal and conditional densities. A new method of estimation based on optimal search is proposed for estimating the parameters using the marginal distributions and the concepts of maximum likelihood, spacings and least squares. The proposed methodology is easy to implement and is free from calculus. It optimizes the objective function by searching over a wide range of values and determines the estimates of the parameters. The consistency of the estimates is demonstrated in terms of mean, standard deviation and mean square error through simulation studies for different choices of parameters.
Directory of Open Access Journals (Sweden)
Lash Timothy L
2007-11-01
Background: The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods: For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters to allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results: The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion: Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a
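The Monte Carlo bias-analysis procedure can be sketched generically: draw a bias parameter from an assigned distribution, adjust the conventional estimate, re-introduce conventional random error, and summarize the resulting frequency distribution. The bias distribution below is hypothetical and chosen only for illustration; the observed hazard ratio and confidence interval are the illustrative values quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)
n_iter = 50_000

# Conventional result (illustrative): hazard ratio and its 95% CI
hr_obs, ci_lo, ci_hi = 2.6, 0.7, 9.4
se_log = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)

# Assumed bias-parameter distribution: multiplicative bias away from
# the null, lognormal around 1.7 on the ratio scale (hypothetical)
bias_factor = rng.lognormal(np.log(1.7), 0.25, n_iter)

# Adjust for bias, then re-introduce conventional random error
adjusted = np.exp(np.log(hr_obs / bias_factor)
                  + rng.normal(0, se_log, n_iter))

med, lo, hi = np.percentile(adjusted, [50, 2.5, 97.5])
print(f"median HR {med:.1f}, 95% simulation interval {lo:.1f}-{hi:.1f}")
```

The simulation interval reflects both random error and the assumed systematic error, so it is centred closer to the null and conveys the extra uncertainty that a conventional confidence interval hides.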
Estimation of physical parameters in induction motors
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.
Postprocessing MPEG based on estimated quantization parameters
DEFF Research Database (Denmark)
Forchhammer, Søren
2009-01-01
the case where the coded stream is not accessible, or from an architectural point of view not desirable to use, and instead estimate some of the MPEG stream parameters based on the decoded sequence. The I-frames are detected and the quantization parameters are estimated from the coded stream and used...
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
muhammad zahid rashid
2011-04-01
The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method of estimation using different values of the parameters and different sample sizes.
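Two of the estimators compared above have closed forms for the two-parameter exponential distribution and can be contrasted directly by simulated mean square error: the MLE uses the sample minimum for the location, while the moment estimators use the sample mean and standard deviation. A minimal sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
loc_true, scale_true, n, trials = 3.0, 2.0, 50, 2000

mle_err, mom_err = [], []
for _ in range(trials):
    x = loc_true + rng.exponential(scale_true, n)
    # Maximum likelihood: location = min, scale = mean - min
    mle = (x.min(), x.mean() - x.min())
    # Moment estimators: mean = loc + scale, sd = scale
    s = x.std(ddof=1)
    mom = (x.mean() - s, s)
    mle_err.append((mle[0] - loc_true) ** 2 + (mle[1] - scale_true) ** 2)
    mom_err.append((mom[0] - loc_true) ** 2 + (mom[1] - scale_true) ** 2)

print(f"MLE total MSE:    {np.mean(mle_err):.4f}")
print(f"Moment total MSE: {np.mean(mom_err):.4f}")
```

The sample minimum converges to the location at rate 1/n rather than 1/sqrt(n), which is why the MLE wins this comparison by a wide margin.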
Improving uncertainty estimation in urban hydrological modeling by statistically describing bias
Directory of Open Access Journals (Sweden)
D. Del Giudice
2013-04-01
Hydrodynamic models are useful tools for urban water management. Unfortunately, it is still challenging to obtain accurate results and plausible uncertainty estimates when using these models. In particular, with the currently applied statistical techniques, flow predictions are usually overconfident and biased. In this study, we present a flexible and computationally efficient methodology (i) to obtain more reliable hydrological simulations in terms of coverage of validation data by the uncertainty bands and (ii) to separate prediction uncertainty into its components. Our approach acknowledges that urban drainage predictions are biased. This is mostly due to input errors and structural deficits of the model. We address this issue by describing model bias in a Bayesian framework. The bias becomes an autoregressive term additional to white measurement noise, the only error type accounted for in traditional uncertainty analysis in urban hydrology. To allow for bigger discrepancies during wet weather, we make the variance of the bias dependent on the input (rainfall) and/or output (runoff) of the system. Specifically, we present a structured approach to select, among five variants, the optimal bias description for a given urban or natural case study. We tested the methodology in a small monitored stormwater system described by means of a parsimonious model. Our results clearly show that flow simulations are much more reliable when bias is accounted for than when it is neglected. Furthermore, our probabilistic predictions can discriminate between three uncertainty contributions: parametric uncertainty, bias (due to input and structural errors), and measurement errors. In our case study, the best performing bias description was the output-dependent bias using a log-sinh transformation of data and model results. The limitations of the framework presented are some ambiguity due to the subjective choice of priors for bias parameters and its inability to directly
Cosmological parameter estimation using Particle Swarm Optimization
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
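A global-best PSO needs only a handful of lines. The sketch below recovers two parameters of a toy model by minimizing a chi-square-like objective; the swarm coefficients are conventional textbook choices, not those of the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=200, seed=0):
    """Minimal global-best particle swarm optimiser."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy "likelihood surface": recover two parameters from noisy data
rng = np.random.default_rng(42)
t = np.linspace(0, 1, 100)
data = 1.8 * t + 0.5 + rng.normal(0, 0.05, 100)

def chi2(p):
    return np.sum((data - (p[0] * t + p[1])) ** 2)

best = pso(chi2, [(-5, 5), (-5, 5)])
print(f"recovered slope {best[0]:.2f}, intercept {best[1]:.2f}")
```

Unlike MCMC, PSO only seeks the best-fit point rather than sampling the posterior, which is what makes it attractive when the likelihood surface is rugged or high-dimensional.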
Non-response bias in physical activity trend estimates
Directory of Open Access Journals (Sweden)
Bauman Adrian
2009-11-01
Background: Increases in reported leisure-time physical activity (PA) and obesity have been observed in several countries. One hypothesis for these apparently contradictory trends is differential bias in estimates over time. The purpose of this short report is to examine the potential impact of changes in response rates over time on the prevalence of adequate PA in Canadian adults. Methods: Participants were recruited in representative national telephone surveys of PA from 1995-2007. Differences in PA prevalence estimates between participants and those hard to reach were assessed using Student's t tests adjusted for multiple comparisons. Results: The number of telephone calls required to reach and speak with someone in the household increased over time, as did the percentage of selected participants who initially refused during the first interview attempt. A higher prevalence of adequate PA was observed with 5-9 attempts to reach anyone in the household in 1999-2002, but this was not significant after adjustment for multiple comparisons. Conclusion: No significant impact on PA trend estimates was observed due to differential non-response rates. It is important for health policy makers to understand potential biases and how these may affect secular trends in all aspects of the energy balance equation.
Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach
Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio
2015-01-01
This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlap between "reference founding core" distributions and the distributions obtained from sampling the present-day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case, true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for the estimation of effective rainfall, groundwater recharge, and the design of irrigation systems. Numerous infiltration models are in use for the estimation of infiltration rates. The conventional graphical approach for estimating infiltration parameters often fails to estimate them precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for the estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches to parameter estimation.
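As a rough illustration of the fitting task, the sketch below fits the Kostiakov infiltration-rate model f(t) = a·t^(-b) by nonlinear least squares, with SciPy's solver standing in for the spreadsheet GRG solver; the data are synthetic, not the Umuahia field measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def kostiakov(t, a, b):
    # Kostiakov infiltration rate: f(t) = a * t**(-b)
    return a * t ** (-b)

t = np.array([5.0, 10.0, 20.0, 30.0, 60.0, 90.0, 120.0])  # elapsed time, minutes
true_a, true_b = 40.0, 0.45                                # fabricated "true" values
rng = np.random.default_rng(1)
f_obs = kostiakov(t, true_a, true_b) * (1 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(kostiakov, t, f_obs, p0=(10.0, 0.2))   # p0: rough initial guess
a_hat, b_hat = popt
```

The same residual-sum-of-squares objective is what a GRG solver would minimise in a spreadsheet.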
An enhanced algorithm to estimate BDS satellite's differential code biases
Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei
2016-02-01
This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCBs and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs: the mean value of the day-to-day scatter is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm developed at the Institute of Geodesy and Geophysics (IGG), China, known as IGGDCB, was also used to process the same dataset. The DCB differences between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX are reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the differences between the results of the IGGDCB algorithm and the same products. In addition, we find that the day-to-day scatter of the BDS IGSO satellites is clearly lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of the GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating the satellite DCBs of multiple GNSS systems.
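The separation of satellite DCBs from ionospheric delay can be illustrated with a toy least-squares problem: without a constraint the system is rank-deficient (a common offset can move freely between the ionospheric and DCB terms), and a zero-mean condition on the DCBs, loosely analogous to the misclosure constraint above, restores a unique solution. All dimensions and values are illustrative, not real GNSS data.

```python
import numpy as np

rng = np.random.default_rng(10)
n_sat, n_ep = 5, 40
dcb_true = np.array([1.2, -0.7, 0.3, -0.5, -0.3])   # ns; sums to zero by design
iono_true = 5.0 + 3.0 * rng.random(n_ep)            # per-epoch ionospheric delay

# each observation: epoch's ionospheric delay + satellite DCB + noise
A = np.zeros((n_sat * n_ep + 1, n_ep + n_sat))
y = np.zeros(n_sat * n_ep + 1)
row = 0
for e in range(n_ep):
    for s in range(n_sat):
        A[row, e] = 1.0            # ionospheric-delay column for this epoch
        A[row, n_ep + s] = 1.0     # DCB column for this satellite
        y[row] = iono_true[e] + dcb_true[s] + rng.normal(0.0, 0.05)
        row += 1
A[row, n_ep:] = 1e3                # heavily weighted zero-mean DCB constraint

sol, *_ = np.linalg.lstsq(A, y, rcond=None)
dcb_hat = sol[n_ep:]
```

Removing the constraint row leaves the normal equations singular, which is why real DCB products are always reported relative to some datum.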
Estimation of distances to stars with stellar parameters from LAMOST
Carlin, Jeffrey L; Newberg, Heidi Jo; Beers, Timothy C; Chen, Li; Deng, Licai; Guhathakurta, Puragra; Hou, Jinliang; Hou, Yonghui; Lepine, Sebastien; Li, Guangwei; Luo, A-Li; Smith, Martin C; Wu, Yue; Yang, Ming; Yanny, Brian; Zhang, Haotong; Zheng, Zheng
2015-01-01
We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show ...
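A schematic of the grid-weighting step (not the paper's pipeline): synthetic grid stars are weighted by the Gaussian likelihood of the measured atmospheric parameters, and the weighted absolute magnitudes give a distance through the distance modulus. The "isochrone grid" here is a fabricated toy relation, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
# hypothetical isochrone grid: (Teff, logg) -> absolute magnitude (toy relation)
grid_teff = rng.uniform(4000.0, 7000.0, 5000)
grid_logg = rng.uniform(0.5, 5.0, 5000)
grid_mabs = 10.0 - 1.5 * grid_logg + (grid_teff - 5500.0) / 2000.0

teff_obs, logg_obs = 5600.0, 4.4       # measured spectroscopic parameters
s_teff, s_logg = 100.0, 0.2            # measurement uncertainties

# Gaussian likelihood of each grid star given the measurements
w = np.exp(-0.5 * (((grid_teff - teff_obs) / s_teff) ** 2
                   + ((grid_logg - logg_obs) / s_logg) ** 2))

m_app = 14.0                           # apparent magnitude
mabs_post = np.sum(w * grid_mabs) / np.sum(w)   # posterior mean absolute magnitude
dist_pc = 10 ** ((m_app - mabs_post + 5.0) / 5.0)  # distance modulus -> parsecs
```

The paper's method additionally weights the grid by priors derived from the observed luminosity function and selection effects, which this sketch omits.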
State and parameter estimation in bioprocesses
Energy Technology Data Exchange (ETDEWEB)
Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)]
1994-12-31
A major difficulty in the monitoring and control of bioprocesses is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product and growth rate. In this article, an adaptive estimation algorithm is proposed to recover the states and parameters in bioprocesses. This estimator utilizes the physical process model and the reference model approach. Experiments concerning the estimation of biomass and product concentrations and the specific growth rate during batch, fed-batch and continuous fermentation processes are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.
Estimation of transmitter and receiver code biases using concurrent GNSS and ionosonde measurements
Sapundjiev, Danislav; Stankov, Stan; Verhulst, Tobias
2016-07-01
The total electron content (TEC) is an important ionospheric characteristic used extensively in ionosphere and space research and in various positioning and navigation applications based on Global Navigation Satellite System (GNSS) signals. TEC calculation using dual-frequency GNSS receivers is the norm nowadays, but for the calculation of the absolute TEC the correct estimation of the differential code biases (DCB) is crucial. Various methods for the estimation of these biases are currently in use, and most of them make several (rather strong) assumptions concerning the ionosphere's structure and state which do not necessarily represent the real situation. In this presentation we explore the opportunities offered by modern high-resolution digital ionosonde measurements to deduce key ionospheric properties and parameters in order to develop a new algorithm for real-time DCB estimation and evaluate its performance.
Jenness, Samuel M; Neaigus, Alan; Wendel, Travis; Gelpi-Acosta, Camila; Hagan, Holly
2014-12-01
Respondent-driven sampling (RDS) is a study design used to investigate populations for which a probabilistic sampling frame cannot be efficiently generated. Biases in parameter estimates may result from systematic non-random recruitment within social networks by geography. We investigate the spatial distribution of RDS recruits relative to an inferred social network among heterosexual adults in New York City in 2010. Mean distances between recruitment dyads are compared to those of network dyads to quantify bias. Spatial regression models are then used to assess the impact of spatial structure on risk and prevalence outcomes. In our primary distance metric, network dyads were an average of 1.34 (95 % CI 0.82–1.86) miles farther dispersed than recruitment dyads, suggesting spatial bias. However, there was no evidence that demographic associations with HIV risk or prevalence were spatially confounded. Therefore, while the spatial structure of recruitment may be biased in heterogeneous urban settings, the impact of this bias on estimates of outcome measures appears minimal.
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Paulson, Sonu Tabitha
2015-01-01
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the Planck likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, the assumption of a circular Gaussian beam or considering only the diagonal part of the bias matrix can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, use of scalar beam window fun...
On Carleman estimates with two large parameters
Energy Technology Data Exchange (ETDEWEB)
Le Rousseau, Jerome, E-mail: jlr@univ-orleans.fr [Universite d'Orleans, Laboratoire Mathematiques et Applications, Physique Mathematique d'Orleans, CNRS UMR 6628, Federation Denis-Poisson, FR CNRS 2964, B.P. 6759, 45067 Orleans cedex 2 (France)]
2011-04-01
We provide a general framework for the analysis and the derivation of Carleman estimates with two large parameters. For an appropriate form of weight functions, strong pseudo-convexity conditions are shown to be necessary and sufficient.
Estimation of Modal Parameters and their Uncertainties
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters as well as their uncertainties using the prediction error method for a dynamic system on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Hadiyanto Hadiyanto; AJB van Boxtel
2012-01-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally pro...
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
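The costly baseline the article improves on, repeatedly solving the PDE under candidate parameter values, can be sketched for a 1-D heat equation u_t = D·u_xx with an explicit finite-difference solver inside a scalar optimiser. All values here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

nx, nt = 50, 400
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], 2e-4          # explicit scheme is stable for D*dt/dx**2 < 0.5

def solve_heat(D):
    u = np.sin(np.pi * x)            # initial condition; u = 0 held at both ends
    for _ in range(nt):
        u[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(7)
D_true = 0.5
u_obs = solve_heat(D_true) + 0.002 * rng.standard_normal(nx)  # noisy "measurements"

# naive approach: re-solve the PDE for every candidate D and minimise the misfit
res = minimize_scalar(lambda D: np.sum((solve_heat(D) - u_obs) ** 2),
                      bounds=(0.05, 1.0), method="bounded")
D_hat = res.x
```

The parameter cascading and Bayesian methods in the article avoid exactly this repeated numerical solution by representing the PDE solution with basis function expansions.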
Prediction and simulation errors in parameter estimation for nonlinear systems
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
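The two cost functions being compared can be illustrated for a first-order model y[k] = θ·y[k−1] + u[k]: the prediction error uses measured past outputs, while the simulation error feeds the model's own outputs back. A grid search stands in for the paper's evolutionary algorithms, and the system here is a toy example.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true, n = 0.8, 200
u = np.sin(0.1 * np.arange(n))            # known input sequence
y = np.zeros(n)
for k in range(1, n):
    y[k] = theta_true * y[k - 1] + u[k] + 0.05 * rng.standard_normal()

def prediction_cost(theta):
    # one-step-ahead residuals: regressor is the *measured* past output
    e = y[1:] - (theta * y[:-1] + u[1:])
    return np.sum(e ** 2)

def simulation_cost(theta):
    # free-run simulation: the model feeds back its *own* output
    ys = np.zeros(n)
    for k in range(1, n):
        ys[k] = theta * ys[k - 1] + u[k]
    return np.sum((y - ys) ** 2)

grid = np.linspace(0.5, 0.99, 491)
theta_pred = grid[np.argmin([prediction_cost(g) for g in grid])]
theta_sim = grid[np.argmin([simulation_cost(g) for g in grid])]
```

With the true model structure and white output noise, both estimates land near θ = 0.8; the differences the article studies appear under model mismatch and errors-in-variables.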
MODFLOW-style parameters in underdetermined parameter estimation
D'Oria, Marco D.; Fienen, Michael N.
2012-01-01
In this article, we discuss the use of MODFLOW-style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.
Error covariance calculation for forecast bias estimation in hydrologic data assimilation
Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.
2015-12-01
To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification to the explicit propagation of the bias error covariance. The objective of this paper is to examine to what extent the choice for the propagation of the bias estimate and its error covariance influences the filter performance. An Observing System Simulation Experiment (OSSE) has been performed, in which groundwater storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
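A scalar sketch of the simplified approach discussed above: a persistent bias model whose a priori bias error covariance is a constant fraction (gamma) of the a priori state error covariance. The two-stage update below follows the usual bias-aware filtering pattern; all numbers are illustrative, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(3)
a, q, r = 0.95, 0.02, 0.05       # dynamics, process noise var, observation noise var
true_bias = 0.5                  # constant additive model bias per forecast step

x_true, x_a, b, P = 1.0, 1.0, 0.0, 1.0
gamma = 0.3                      # bias covariance as a fraction of state covariance
for _ in range(300):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))   # truth evolves
    x_f = a * x_a + true_bias                           # biased model forecast
    P = a * P * a + q                                   # a priori state covariance
    Pb = gamma * P                                      # simplified bias covariance
    y = x_true + rng.normal(0.0, np.sqrt(r))            # observation
    d = y - (x_f - b)                # innovation of the bias-corrected forecast
    b = b - Pb / (P + Pb + r) * d    # persistent bias model: update, no dynamics
    K = P / (P + r)
    x_a = (x_f - b) + K * (y - (x_f - b))               # bias-corrected analysis
    P = (1.0 - K) * P
```

After spin-up, b tracks the per-cycle forecast bias (here ~0.5), so the analysis is no longer systematically offset.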
Person-Independent Head Pose Estimation Using Biased Manifold Embedding
Directory of Open Access Journals (Sweden)
Sethuraman Panchanathan
2008-02-01
Full Text Available Head pose estimation has been an integral problem in the study of face recognition systems and human-computer interfaces, as part of biometric applications. A fine estimate of the head pose angle is necessary and useful for several face analysis applications. To determine the head pose, face images with varying pose angles can be considered to be lying on a smooth low-dimensional manifold in high-dimensional image feature space. However, when there are face images of multiple individuals with varying pose angles, manifold learning techniques often do not give accurate results. In this work, we propose a framework for a supervised form of manifold learning called Biased Manifold Embedding to obtain improved performance in head pose angle estimation. This framework goes beyond pose estimation, and can be applied to all regression applications. This framework, although formulated for a regression scenario, unifies other supervised approaches to manifold learning that have been proposed so far. Detailed studies of the proposed method are carried out on the FacePix database, which contains 181 face images each of 30 individuals with pose angle variations at a granularity of 1°. Since biometric applications in the real world may not contain this level of granularity in training data, an analysis of the methodology is performed on sparsely sampled data to validate its effectiveness. We obtained up to 2° average pose angle estimation error in the results from our experiments, which matched the best results obtained for head pose estimation using related approaches.
Estimation and analysis of Galileo differential code biases
Li, Min; Yuan, Yunbin; Wang, Ningbo; Li, Zishen; Li, Ying; Huo, Xingliang
2017-03-01
When sensing the Earth's ionosphere using dual-frequency pseudorange observations of global navigation satellite systems (GNSS), the satellite and receiver differential code biases (DCBs) account for one of the main sources of error. For the Galileo system, limited knowledge is available about the determination and characteristic analysis of the satellite and receiver DCBs. To better understand the characteristics of satellite and receiver DCBs of Galileo, the IGGDCB (IGG, Institute of Geodesy and Geophysics, Wuhan, China) method is extended to estimate the satellite and receiver DCBs of Galileo, with the combined use of GPS and Galileo observations. The experimental data were collected from the Multi-GNSS Experiment network, covering the period of 2013-2015. The stability of both Galileo satellite and receiver DCBs over a time period of 36 months was thereby analyzed for the current state of the Galileo system. Good agreement of Galileo satellite DCBs is found between the IGGDCB-based DCB estimates and those from the German Aerospace Center (DLR), at the level of 0.22 ns. Moreover, high-level stability of the Galileo satellite DCB estimates is obtained over the selected time span (less than 0.25 ns in terms of standard deviation) by both IGGDCB and DLR algorithms. The Galileo receiver DCB estimates are also relatively stable for the case in which the receiver hardware device stays unchanged. It can also be concluded that the receiver DCB estimates are rather sensitive to the change of the firmware version and that the receiver antenna type has no great impact on receiver DCBs.
Statistics of Parameter Estimates: A Concrete Example
Aguilar, Oscar
2015-01-01
© 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise levels, models, or prior knowledge. But what can we say about the validity of such estimates, and the influence of these assumptions? This paper is concerned with methods to address these questions, and for didactic purposes it is written in the context of a concrete nonlinear parameter estimation problem. We will use the results of a physical experiment conducted by Allmaras et al. at Texas A&M University [M. Allmaras et al., SIAM Rev., 55 (2013), pp. 149-167] to illustrate the importance of validation procedures for statistical parameter estimation. We describe statistical methods and data analysis tools to check the choices of likelihood and prior distributions, and provide examples of how to compare Bayesian results with those obtained by non-Bayesian methods based on different types of assumptions. We explain how different statistical methods can be used in complementary ways to improve the understanding of parameter estimates and their uncertainties.
The CLICopti RF structure parameter estimator
Sjobak, Kyrre Ness
2014-01-01
This document describes the CLICopti RF structure parameter estimator. This is a C++ library which makes it possible to quickly estimate the parameters of an RF structure from its length, apertures, tapering, and basic cell type. Typical estimated parameters are the input power required to reach a certain voltage with a given beam current, the maximum safe pulse length for a given input power and the minimum bunch spacing in RF cycles allowed by a given long-range wake limit. The document describes the implemented physics, usage of the library through its Application Programming Interface (API) and the relation between the different parts of the library. Also discussed is how the library is checked for correctness, and the example programs included with the sources are described.
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is investigated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
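A simplified one-parameter version of such a Monte Carlo bias comparison (shape parameter b of f(x) = b·x^(b−1) on (0, 1), with the scale fixed; this is not the paper's modified estimators, just the classical ML and moments baselines):

```python
import numpy as np

rng = np.random.default_rng(4)
b_true, n, reps = 2.0, 50, 2000
bias_ml = bias_mom = 0.0
for _ in range(reps):
    x = rng.random(n) ** (1.0 / b_true)     # inverse-CDF sampling: F(x) = x**b
    b_ml = -n / np.log(x).sum()             # maximum likelihood estimator
    b_mom = x.mean() / (1.0 - x.mean())     # method of moments via E[X] = b/(b+1)
    bias_ml += (b_ml - b_true) / reps
    bias_mom += (b_mom - b_true) / reps
```

For this distribution the ML estimator has a known small-sample bias of b/(n−1), which the Monte Carlo average reproduces; studies like the one above quantify such biases for modified estimators where no closed form exists.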
Parameter inference with estimated covariance matrices
Sellentin, Elena
2015-01-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalising over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate $t$-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalisation over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
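The adapted t-likelihood can be written down directly: with a covariance matrix estimated from n_s simulations, the Gaussian factor exp(−χ²/2) is replaced by (1 + χ²/(n_s − 1))^(−n_s/2) up to normalisation, which tends back to the Gaussian as n_s grows. A minimal sketch:

```python
import numpy as np

def log_like_gauss(chi2):
    # standard Gaussian log-likelihood (up to a constant)
    return -0.5 * chi2

def log_like_t(chi2, n_s):
    # covariance estimated from n_s simulations: adapted t-distribution
    return -0.5 * n_s * np.log1p(chi2 / (n_s - 1.0))

chi2 = 4.0
few = log_like_t(chi2, n_s=25)        # few simulations: heavier tails
many = log_like_t(chi2, n_s=10000)    # many simulations: ~Gaussian
```

With few simulations the modified likelihood penalises large χ² less severely than the Gaussian, widening the inferred parameter contours, which is exactly the marginalisation effect described above.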
Parameter Estimation of Turbo Code Encoder
Directory of Open Access Journals (Sweden)
Mehdi Teimouri
2014-01-01
Full Text Available The problem of reconstruction of a channel code consists of finding out its design parameters solely based on its output. This paper investigates the problem of reconstruction of parallel turbo codes. Reconstruction of a turbo code has been addressed in the literature assuming that some of the parameters of the turbo encoder, such as the number of input and output bits of the constituent encoders and puncturing pattern, are known. However in practical noncooperative situations, these parameters are unknown and should be estimated before applying reconstruction process. Considering such practical situations, this paper proposes a novel method to estimate the above-mentioned code parameters. The proposed algorithm increases the efficiency of the reconstruction process significantly by judiciously reducing the size of search space based on an analysis of the observed channel code output. Moreover, simulation results show that the proposed algorithm is highly robust against channel errors when it is fed with noisy observations.
LISA parameter estimation using numerical merger waveforms
Energy Technology Data Exchange (ETDEWEB)
Thorpe, J I; McWilliams, S T; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G, E-mail: James.I.Thorpe@nasa.go [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States)
2009-05-07
Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10^6 M_⊙ at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.
LISA parameter estimation using numerical merger waveforms
Thorpe, J I; Kelly, B J; Fahey, R P; Arnaud, K; Baker, J G
2008-01-01
Recent advances in numerical relativity provide a detailed description of the waveforms of coalescing massive black hole binaries (MBHBs), expected to be the strongest detectable LISA sources. We present a preliminary study of LISA's sensitivity to MBHB parameters using a hybrid numerical/analytic waveform for equal-mass, non-spinning holes. The Synthetic LISA software package is used to simulate the instrument response and the Fisher information matrix method is used to estimate errors in the parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of one million solar masses at a redshift of one were found to decrease by a factor of slightly more than two for signals with merger as compared to signals truncated at the Schwarzschild ISCO.
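The Fisher-matrix step used in both records above can be sketched generically: for a signal h(t; p) in white noise of standard deviation σ, F_ij = Σ_t (∂h/∂p_i)(∂h/∂p_j)/σ², and the forecast 1σ parameter errors are sqrt(diag(F⁻¹)). A damped sinusoid stands in for the merger waveform here; this is not the Synthetic LISA pipeline.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
sigma = 0.5                                   # white-noise standard deviation

def model(A, f):
    # toy ringdown-like signal standing in for a merger waveform
    return A * np.exp(-2.0 * t) * np.sin(2 * np.pi * f * t)

A0, f0, eps = 1.0, 10.0, 1e-6
# central-difference derivatives with respect to each parameter
dh_dA = (model(A0 + eps, f0) - model(A0 - eps, f0)) / (2 * eps)
dh_df = (model(A0, f0 + eps) - model(A0, f0 - eps)) / (2 * eps)

D = np.vstack([dh_dA, dh_df])
F = D @ D.T / sigma**2                        # 2x2 Fisher information matrix
errs = np.sqrt(np.diag(np.linalg.inv(F)))     # forecast 1-sigma errors on (A, f)
```

Extending the waveform in time (e.g. including the merger) adds rows of signal to the sums in F, shrinking the forecast errors, which is the effect the studies quantify.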
Parameter Estimation of Noise Corrupted Sinusoids
O'Brien, Francis J; Johnnie, Nathan
2011-01-01
Existing algorithms for fitting the parameters of a sinusoid to noisy discrete-time observations are not always successful due to initial-value sensitivity and other issues. This paper demonstrates that FIR filtering, the Fast Fourier Transform, and nonlinear least-squares minimization are useful for estimating the amplitude, frequency and phase, exemplified for a low-frequency time-delayed sinusoid describing simple harmonic motion. Alternative means are described for estimating the frequency and phase angle. An autocorrelation function for harmonic motion is also derived.
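A minimal version of the FFT-plus-nonlinear-least-squares pipeline (FIR pre-filtering omitted for brevity; SciPy's `least_squares` stands in for the paper's minimiser, and all signal values are fabricated):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
fs, n = 100.0, 1024
t = np.arange(n) / fs
a0, f0, ph0 = 1.5, 3.2, 0.7
y = a0 * np.sin(2 * np.pi * f0 * t + ph0) + 0.2 * rng.standard_normal(n)

# coarse frequency estimate from the FFT peak (skip the DC bin); this supplies
# the good starting point that naive nonlinear fits are missing
spec = np.abs(np.fft.rfft(y))
f_coarse = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec[1:]) + 1]

def resid(p):
    a, f, ph = p
    return a * np.sin(2 * np.pi * f * t + ph) - y

sol = least_squares(resid, x0=[np.std(y) * np.sqrt(2), f_coarse, 0.0])
a_hat, f_hat, ph_hat = sol.x
if a_hat < 0:                       # normalise the sign convention
    a_hat, ph_hat = -a_hat, ph_hat + np.pi
ph_hat = np.mod(ph_hat, 2 * np.pi)
```

The FFT bin width (fs/n) bounds the coarse frequency error, which keeps the nonlinear refinement inside the correct basin of attraction.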
Hurst Parameter Estimation Using Artificial Neural Networks
Directory of Open Access Journals (Sweden)
S. Ledesma-Orozco
2011-08-01
Full Text Available The Hurst parameter captures the amount of long-range dependence (LRD) in a time series. There are several methods to estimate the Hurst parameter, the most popular being the variance-time plot, the R/S plot, the periodogram, and Whittle's estimator. The first three are graphical methods, and the estimation accuracy depends on how the plot is interpreted and calculated. In contrast, Whittle's estimator is based on a maximum likelihood technique and does not depend on a graph reading; however, it is computationally expensive. A new method to estimate the Hurst parameter is proposed. This new method is based on an artificial neural network. Experimental results show that this method outperforms traditional approaches, and can be used in applications where a fast and accurate estimate of the Hurst parameter is required, e.g., computer network traffic control. Additionally, the Hurst parameter was computed on series of different lengths using several methods. The simulation results show that the proposed method is at least ten times faster than traditional methods.
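For reference, the classical rescaled-range (R/S) estimator that such methods are compared against can be sketched as follows; white noise should give H near 0.5, though R/S is known to be biased upward on short series.

```python
import numpy as np

def hurst_rs(x):
    # rescaled-range (R/S) estimate: slope of log(R/S) versus log(window size)
    n = len(x)
    sizes = [int(s) for s in 2 ** np.arange(4, int(np.log2(n)) + 1)]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from mean
            r = dev.max() - dev.min()           # range of the cumulative series
            sd = seg.std()
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_s.append(np.log(s))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_s, log_rs, 1)[0]      # slope ~ Hurst exponent

rng = np.random.default_rng(6)
h_white = hurst_rs(rng.standard_normal(4096))   # white noise: H should be ~0.5
```

The per-window loop over many window sizes is what makes graphical estimators slow, and is the cost a trained neural network amortises into a single forward pass.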
Multi-Parameter Estimation for Orthorhombic Media
Masmoudi, Nabil
2015-08-19
Building reliable anisotropy models is crucial in seismic modeling, imaging and full waveform inversion. However, estimating anisotropy parameters is often hampered by the trade-off between inhomogeneity and anisotropy. For instance, one way to estimate the anisotropy parameters is to relate them analytically to traveltimes, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2 and a parameter Δγ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. This approach has two main advantages: on the one hand, it provides a computationally efficient tool to solve the orthorhombic eikonal equation; on the other hand, it provides a mechanism to scan for the best-fitting anisotropy parameters without the need for repetitive modeling of traveltimes, because the coefficients of the traveltime expansion are independent of the perturbed parameters. Furthermore, the coefficients of the traveltime expansion provide insights into the sensitivity of the traveltime with respect to the perturbed parameters. We show the accuracy of the traveltime approximations as well as an approach for multi-parameter scanning in orthorhombic media.
Discriminative Parameter Estimation for Random Walks Segmentation
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G.; Paragios, Nikos; Pawan Kumar, M.
2013-01-01
International audience; The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challen...
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing capacity of computational power several complex optimization algorithms have emerged, but none of these algorithms gives a unique and best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
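Tukey's half-space depth, the geometric tool used above, is expensive to compute exactly in more than two dimensions; a common workaround is a random-direction approximation. The sketch below is that generic approximation, not the authors' implementation:

```python
import numpy as np

def halfspace_depth(point, cloud, n_dirs=500, seed=0):
    """Approximate Tukey's half-space depth of `point` relative to `cloud`.

    For each random unit direction u, count the cloud points on either
    side of the hyperplane through `point` orthogonal to u; the depth is
    the minimum such count over the sampled directions, divided by the
    cloud size. Deep (central) points score high, outliers score near 0.
    """
    rng = np.random.default_rng(seed)
    cloud = np.asarray(cloud, float)
    point = np.asarray(point, float)
    n, d = cloud.shape
    dirs = rng.standard_normal((n_dirs, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = (cloud - point) @ dirs.T            # shape (n, n_dirs)
    counts = np.minimum((proj >= 0).sum(0), (proj <= 0).sum(0))
    return counts.min() / n
```

Applied to parameter vectors, one would keep the vectors whose depth with respect to a reference set of well-performing vectors exceeds a chosen threshold.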
Parameter estimation for an expanding universe
Directory of Open Access Journals (Sweden)
Jieci Wang
2015-03-01
Full Text Available We study parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.
Parameter estimation methods for chaotic intercellular networks.
Directory of Open Access Journals (Sweden)
Inés P Mariño
Full Text Available We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
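The ABC rejection idea underlying both schemes can be illustrated with a toy model. Here the summary statistic is the sample mean and the distance is a plain absolute difference, standing in for the synchronization error used in the paper:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, n_draws=5000, eps=0.1, seed=0):
    """Toy ABC rejection sampler.

    Draw parameters from the prior, simulate data, and keep draws whose
    summary statistic (here, the sample mean) lies within `eps` of the
    observed one. The accepted draws approximate the posterior.
    """
    rng = np.random.default_rng(seed)
    s_obs = observed.mean()
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)               # draw from the prior
        s_sim = simulate(theta, rng).mean()     # summary of simulated data
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

Averaging the accepted draws then gives a point estimate, mirroring the averaging over realizations described in the abstract.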
Using Digital Filtration for Hurst Parameter Estimation
Directory of Open Access Journals (Sweden)
J. Prochaska
2009-06-01
Full Text Available We present a new method to estimate the Hurst parameter. The method exploits the form of the autocorrelation function for second-order self-similar processes and is based on one-pass digital filtration. We compare the performance and properties of the new method with that of the most common methods.
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Parameter estimation in stochastic differential equations
Bishwal, Jaya P N
2008-01-01
Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and from related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations, and examines extensively maximum likelihood, minimum contrast and Bayesian methods. The study of refined asymptotic properties of several estimators, when the observation time length is large and the observation time interval is small, is useful given the current availability of high-frequency data. Space-time white-noise-driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models, like fractional diffusions that model long-memory phenomena, are also examined in this volume.
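As a concrete instance of maximum likelihood estimation from discretely observed diffusions, the sketch below fits an Ornstein-Uhlenbeck process using its exact AR(1) transition density. This is a standard textbook example chosen for illustration, not drawn from the volume itself:

```python
import numpy as np

def ou_mle(x, dt):
    """Estimate Ornstein-Uhlenbeck parameters (theta, mu, sigma) from a
    discretely observed path via the exact AR(1) transition density.

    dX = theta * (mu - X) dt + sigma dW  maps to the AR(1) model
    X_{k+1} = a X_k + b + eps,  with  a = exp(-theta dt),
    so the conditional MLE is an ordinary least-squares regression.
    """
    x0, x1 = x[:-1], x[1:]
    a, b = np.polyfit(x0, x1, 1)          # AR(1) slope and intercept
    theta = -np.log(a) / dt
    mu = b / (1.0 - a)
    resid = x1 - (a * x0 + b)
    var_eps = resid.var()                 # stationary innovation variance
    sigma = np.sqrt(2.0 * theta * var_eps / (1.0 - a**2))
    return theta, mu, sigma
```

The mean-reversion rate theta is the hardest parameter to pin down: its standard error shrinks with the total observation time, not the number of samples, which is exactly the kind of asymptotic regime the volume analyses.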
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumrnar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Estimation of accuracy and bias in genetic evaluations with genetic groups using sampling
Hickey, J.M.; Keane, M.G.; Kenny, D.A.; Cromie, A.R.; Mulder, H.A.; Veerkamp, R.F.
2008-01-01
Accuracy and bias of estimated breeding values are important measures of the quality of genetic evaluations. A sampling method that accounts for the uncertainty in the estimation of genetic group effects was used to calculate accuracy and bias of estimated effects. The method works by repeatedly sim
Biases in atmospheric CO2 estimates from correlated meteorology modeling errors
Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.
2015-03-01
Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is
Parameter estimation in channel network flow simulation
Institute of Scientific and Technical Information of China (English)
Han Longxi
2008-01-01
Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.
Nonparametric estimation of location and scale parameters
Potgieter, C.J.
2012-12-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
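One simple estimator in this spirit regresses the empirical quantiles of Y on those of X, since Y =d= μ + σX implies the same linear relation between quantile functions. The sketch below uses interior percentiles only and is an illustrative stand-in for the likelihood-based approach of the paper:

```python
import numpy as np

def location_scale_fit(x, y):
    """Nonparametric estimate of (mu, sigma) in  Y =d= mu + sigma * X.

    Matches the empirical quantiles of the two samples: the quantile
    function of Y is mu + sigma * (quantile function of X), so a
    least-squares line through the paired quantiles recovers both
    parameters without assuming a distributional form.
    """
    qs = (np.arange(99) + 1) / 100.0      # interior percentiles 1%..99%
    xq = np.quantile(x, qs)
    yq = np.quantile(y, qs)
    sigma, mu = np.polyfit(xq, yq, 1)     # slope = sigma, intercept = mu
    return mu, sigma
```

Restricting to interior percentiles avoids the high variance of extreme order statistics; weighting the quantile pairs would refine this further.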
Multiple Parameter Estimation With Quantized Channel Output
Mezghani, Amine; Nossek, Josef A
2010-01-01
We present a general problem formulation for optimal parameter estimation based on quantized observations, with application to antenna array communication and processing (channel estimation, time-of-arrival (TOA) and direction-of-arrival (DOA) estimation). The work is of interest in the case when low-resolution A/D converters (ADCs) have to be used to enable higher sampling rates and to simplify the hardware. An Expectation-Maximization (EM) based algorithm is proposed for solving this problem in a general setting. In addition, we derive the Cramer-Rao Bound (CRB) and discuss the effects of quantization and the optimal choice of the ADC characteristic. Numerical and analytical analysis reveals that reliable estimation may still be possible even when the quantization is very coarse.
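The claim that estimation survives very coarse quantization can be illustrated with the extreme one-bit case: for a Gaussian observation with known noise level, the ML estimate of the mean from sign bits is a closed-form probit inversion. This toy example is not the paper's EM algorithm, only a sketch of estimation from quantized output:

```python
import numpy as np
from statistics import NormalDist

def mean_from_sign_bits(bits, sigma=1.0):
    """ML estimate of a Gaussian mean from 1-bit quantized samples.

    Each observation is y = sign(mu + noise); with known noise std
    `sigma` the likelihood is maximized by inverting the probit link:
    P(y = +1) = Phi(mu / sigma)  =>  mu_hat = sigma * Phi^{-1}(p_hat).
    """
    p_hat = np.clip(np.mean(np.asarray(bits) > 0), 1e-6, 1 - 1e-6)
    return sigma * NormalDist().inv_cdf(p_hat)
```

The one-bit estimator needs roughly pi/2 times as many samples as the unquantized mean for the same accuracy near mu = 0, which is the classical quantization penalty the CRB analysis quantifies.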
Estimating station noise thresholds for seismic magnitude bias elimination
Peacock, Sheila
2014-05-01
To eliminate the upward bias of seismic magnitude caused by censoring of signal hidden by noise, the noise level at each station in a network must be estimated. Where noise levels are not measured directly, the method of Kelly and Lacoss (1969) has been used to infer them from bulletin data (Lilwall and Douglas 1984). To verify this estimate of noise level, noise thresholds of International Monitoring System (IMS) stations inferred from the International Data Centre (IDC) Reviewed Event Bulletin (REB) by the Kelly and Lacoss method for 2005-2013 are compared with direct measurements on (i) noise preceding first arrivals in filtered (0.8-4.5 Hz) IMS seismic data, and (ii) noise preceding the expected time of arrival of signals from events where the signal was not actually seen (values gathered by the IDC for maximum-likelihood magnitude calculation). For most stations the direct pre-signal noise measurements are ~0.25 units of log A/T lower than the Kelly and Lacoss thresholds, because the IDC automatic system declares a detection only when the short-term-average-to-long-term-average ratio threshold, which varies with station and frequency band between ~3 and ~6, is exceeded. The noise values at expected times of non-observed signal arrivals are ~0.15 units lower than the Kelly and Lacoss thresholds. Exceptions are caused by faulty channels being used for the direct noise or body-wave magnitude (mb) measurements or, for station ARCES and possibly FINES, SPITS and HFS, by the wider filter used for signal amplitude than for signal detection admitting noise that swamped the signal. Abrupt changes in thresholds might reveal mis-documented sensor sensitivity changes at individual stations.
Sensor Placement for Modal Parameter Subset Estimation
DEFF Research Database (Denmark)
Ulriksen, Martin Dalgaard; Bernal, Dionisio; Damkilde, Lars
2016-01-01
The present paper proposes an approach for deciding on sensor placements in the context of modal parameter estimation from vibration measurements. The approach is based on placing sensors, of which the amount is determined a priori, such that the minimum Fisher information that the frequency...... responses carry on the selected modal parameter subset is, in some sense, maximized. The approach is validated in the context of a simple 10-DOF mass-spring-damper system by computing the variance of a set of identified modal parameters in a Monte Carlo setting for a set of sensor configurations, whose......). It is shown that the widely used Effective Independence (EI) method, which uses the modal amplitudes as surrogates for the parameters of interest, provides sensor configurations yielding theoretical lower bound variances whose maxima are up to 30 % larger than those obtained by use of the max-min approach....
A separated bias identification and state estimation algorithm for nonlinear systems
Caglayan, A. K.; Lancraft, R. E.
1983-01-01
A computational algorithm for the identification of biases in discrete-time, nonlinear, stochastic systems is derived by extending the separate bias estimation results for linear systems to the extended Kalman filter formulation. The merits of the approach are illustrated by identifying instrument biases using a terminal configured vehicle simulation.
On closure parameter estimation in chaotic systems
Directory of Open Access Journals (Sweden)
J. Hakkarainen
2012-02-01
Full Text Available Many dynamical models, such as numerical weather prediction and climate models, contain so-called closure parameters. These parameters usually appear in physical parameterizations of sub-grid scale processes, and they act as "tuning handles" of the models. Currently, the values of these parameters are specified mostly manually, but the increasing complexity of the models calls for more algorithmic ways to perform the tuning. Traditionally, parameters of dynamical systems are estimated by directly comparing the model simulations to observed data using, for instance, a least squares approach. However, if the models are chaotic, the classical approach can be ineffective, since small errors in the initial conditions can lead to large, unpredictable deviations from the observations. In this paper, we study numerical methods available for estimating closure parameters in chaotic models. We discuss three techniques: off-line likelihood calculations using filtering methods, the state augmentation method, and the approach that utilizes summary statistics from long model simulations. The properties of the methods are studied using a modified version of the Lorenz 95 system, where the effect of fast variables is described using a simple parameterization.
Multi-Sensor Consensus Estimation of State, Sensor Biases and Unknown Input.
Zhou, Jie; Liang, Yan; Yang, Feng; Xu, Linfeng; Pan, Quan
2016-09-01
This paper addresses the problem of the joint estimation of system state and generalized sensor bias (GSB) under a common unknown input (UI) in the case of bias evolution in a heterogeneous sensor network. First, the equivalent UI-free GSB dynamic model is derived and the local optimal estimates of system state and sensor bias are obtained in each sensor node. Second, based on the state and bias estimates obtained by each node from its neighbors, the UI is estimated via the least-squares method, and the state estimates are then fused via consensus processing. Finally, the multi-sensor bias estimates are further refined based on the consensus estimate of the UI. A numerical example of distributed multi-sensor target tracking is presented to illustrate the proposed filter.
Parameter estimation using B-Trees
DEFF Research Database (Denmark)
Schmidt, Albrecht; Bøhlen, Michael H.
2004-01-01
This paper presents a method for accelerating algorithms for computing common statistical operations like parameter estimation or sampling on B-Tree indexed data; the work was carried out in the context of visualisation of large scientific data sets. The underlying idea is the following: the shape...... at opportunities and limitations of this approach for visualisation of large data sets. The advantages of the method are manifold. Not only does it enable advanced algorithms through a performance boost for basic operations like density estimation, but it also builds on functionality that is already present...
Quantum Estimation of Parameters of Classical Spacetimes
Downes, T G; Knill, E; Milburn, G J; Caves, C M
2016-01-01
We describe a quantum limit to measurement of classical spacetimes. Specifically, we formulate a quantum Cramer-Rao lower bound for estimating the single parameter in any one-parameter family of spacetime metrics. We employ the locally covariant formulation of quantum field theory in curved spacetime, which allows for a manifestly background-independent derivation. The result is an uncertainty relation that applies to all globally hyperbolic spacetimes. Among other examples, we apply our method to detection of gravitational waves using the electromagnetic field as a probe, as in laser-interferometric gravitational-wave detectors. Other applications are discussed, from terrestrial gravimetry to cosmology.
Rapid Compact Binary Coalescence Parameter Estimation
Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong
2016-03-01
The first observation run with second generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, the benefit of prompt and accurate estimation of the orientation and physical parameters of binary coalescences is obvious in its coupling to electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases, convergence is delayed until well after the electromagnetic fluence has subsided thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.
CosmoSIS: modular cosmological parameter estimation
Zuntz, Joe; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James
2014-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Renal parameter estimates in unrestrained dogs
Rader, R. D.; Stevens, C. M.
1974-01-01
A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied on unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reaction, and to unrestrained caging.
Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling
Institute of Scientific and Technical Information of China (English)
ZHOU Dan; LI Chengrong; WANG Zhongdong
2013-01-01
The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one that suits transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: 10,000 sets of lifetime data, each with a sampling size of 40 to 1,000 and a censoring rate of 90%, were obtained by Monte-Carlo simulation for each scenario. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean values, median values and 90% confidence bands, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest length of the 90% confidence band. The maximum likelihood method is therefore recommended over the other methods in transformer Weibull lifetime modelling.
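For complete (uncensored) data, the maximum likelihood method recommended above reduces to a one-dimensional root-finding problem in the shape parameter, after which the scale follows in closed form. A minimal sketch using bisection on the profile equation (censoring, central to the paper's 90% scenario, is not handled here):

```python
import numpy as np

def weibull_mle(x, tol=1e-9, max_iter=200):
    """Maximum-likelihood estimates of the Weibull shape k and scale lam
    from complete data.

    The profile log-likelihood reduces to the monotone equation
    g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    solved here by bisection; the scale then follows in closed form.
    """
    x = np.asarray(x, float)
    logx = np.log(x)

    def g(k):
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - 1.0 / k - logx.mean()

    lo, hi = 1e-3, 100.0                      # bracket for the shape
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    k = 0.5 * (lo + hi)
    lam = ((x ** k).mean()) ** (1.0 / k)      # closed-form scale MLE
    return k, lam
```

The median rank regression methods the paper compares against would instead fit a straight line on Weibull probability paper; their bias behaviour is what the Monte-Carlo comparison measures.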
Optimal design criteria - prediction vs. parameter estimation
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time demanding that, in practice, we cannot really find the G-optimal design with the computer equipment available today. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
Online Dynamic Parameter Estimation of Synchronous Machines
West, Michael R.
Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine is required to be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics, without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and strives to compare the IEEE 115 standard for offline characterization with the least squares approximation iterative approach implemented on a 20 h.p. synchronous machine. This least squares estimation method of online parameter estimation shows encouraging results for steady-state parameters, in comparison with steady-state parameters obtained through the IEEE 115 standard.
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
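A fixed-effects Gompertz fit, the baseline the mixed model improves on, can be sketched by profiling out the asymptote and grid-searching the two remaining parameters. The parameterization y = Wm·exp(−exp(−b(t − t0))) and the grid ranges below are illustrative assumptions, and no between-bird random effects are modelled:

```python
import numpy as np

def fit_gompertz(t, y, b_grid=None, t0_grid=None):
    """Least-squares fit of the Gompertz curve
    y = Wm * exp(-exp(-b * (t - t0))).

    For each candidate (b, t0) pair the asymptote Wm enters linearly, so
    it is profiled out in closed form; a coarse grid search then picks
    the best pair. Returns (Wm, b, t0).
    """
    t, y = np.asarray(t, float), np.asarray(y, float)
    if b_grid is None:
        b_grid = np.linspace(0.01, 1.0, 100)
    if t0_grid is None:
        t0_grid = np.linspace(t.min(), t.max(), 100)
    best_sse, best = np.inf, None
    for b in b_grid:
        for t0 in t0_grid:
            f = np.exp(-np.exp(-b * (t - t0)))   # unit-asymptote curve
            wm = (y @ f) / (f @ f)               # closed-form asymptote
            sse = ((y - wm * f) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (wm, b, t0)
    return best
```

A mixed-model version would additionally let Wm (or b) vary by bird around these population values, which is what reduces the residual variance and the selective-sampling bias reported above.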
A two parameter ratio-product-ratio estimator using auxiliary information
Chami, Peter S; Thomas, Doneal
2012-01-01
We propose a two parameter ratio-product-ratio estimator for a finite population mean in a simple random sample without replacement following the methodology in Ray and Sahai (1980), Sahai and Ray (1980), Sahai and Sahai (1985) and Singh and Ruiz Espejo (2003). The bias and mean square error of our proposed estimator are obtained to the first degree of approximation. We derive conditions for the parameters under which the proposed estimator has smaller mean square error than the sample mean, ratio and product estimators. We carry out an application showing that the proposed estimator outperforms the traditional estimators using groundwater data taken from a geological site in the state of Florida.
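The motivation for ratio-product families can be seen by comparing the classical ratio and product estimators against the plain sample mean under simple random sampling without replacement. This sketch uses a made-up finite population with positive y-x correlation, not the paper's two-parameter estimator or its Florida groundwater data.

```python
import random

random.seed(2)
# Hypothetical finite population where y is positively correlated with x
N = 1000
X = [random.uniform(10, 50) for _ in range(N)]
Y = [2.0 * x + random.gauss(0, 5) for x in X]
Xbar, Ybar = sum(X) / N, sum(Y) / N    # population means (X known, Y target)

def mse(estimates, truth):
    return sum((e - truth) ** 2 for e in estimates) / len(estimates)

plain, ratio, product = [], [], []
for _ in range(2000):
    idx = random.sample(range(N), 30)          # SRSWOR, n = 30
    xb = sum(X[i] for i in idx) / 30
    yb = sum(Y[i] for i in idx) / 30
    plain.append(yb)
    ratio.append(yb * Xbar / xb)               # ratio estimator
    product.append(yb * xb / Xbar)             # product estimator
print(mse(plain, Ybar) > mse(ratio, Ybar))     # ratio wins when corr(x, y) > 0
```

The ratio estimator dominates under positive correlation and the product estimator under negative correlation; a two-parameter family interpolating between them lets the data choose, at the cost of a first-order bias that the paper quantifies.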
Constrained low-cost GPS/INS filter with encoder bias estimation for ground vehicles' applications
Abdel-Hafez, Mamoun F.; Saadeddin, Kamal; Amin Jarrah, Mohammad
2015-06-01
In this paper, a constrained, fault-tolerant, low-cost navigation system is proposed for ground vehicle applications. The system is designed to provide a vehicle navigation solution at 50 Hz by fusing the measurements of the inertial measurement unit (IMU), the global positioning system (GPS) receiver, and the velocity measurement from wheel encoders. A high-integrity estimation filter is proposed to obtain a high-accuracy state estimate. The filter utilizes vehicle velocity constraint measurements to enhance the estimation accuracy. However, if the velocity measurement of the encoder is biased, the accuracy of the estimate is degraded. Therefore, a noise estimation algorithm is proposed to estimate a possible bias in the velocity measurement of the encoder. Experimental tests, with simulated biases on the encoder's readings, are conducted and the obtained results are presented. The experimental results show the enhancement in the estimation accuracy when the simulated bias is estimated using the proposed method.
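One standard way to estimate a sensor bias online, as described above, is to augment the filter state with the bias and let an unbiased sensor (here GPS) render it observable. A minimal Kalman-filter sketch under assumed noise levels; this is not the paper's actual filter, constraints, or sensor models.

```python
import random

random.seed(3)
# State: [v, b] -- vehicle speed and encoder bias, both near-constant here
x = [0.0, 0.0]
P = [[10.0, 0.0], [0.0, 10.0]]
Q = [1e-3, 1e-6]                  # process noise on v and b
true_v, true_b = 5.0, 0.4         # assumed truth

def scalar_update(x, P, H, z, R):
    # Kalman measurement update for a scalar observation z = H.x + noise(R)
    Px = [P[0][0] * H[0] + P[0][1] * H[1], P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * Px[0] + H[1] * Px[1] + R
    K = [Px[0] / S, Px[1] / S]
    innov = z - (H[0] * x[0] + H[1] * x[1])
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[P[i][j] - K[i] * Px[j] for j in range(2)] for i in range(2)]
    return x, P

for _ in range(300):
    P[0][0] += Q[0]; P[1][1] += Q[1]                  # predict step
    z_gps = true_v + random.gauss(0, 0.5)             # GPS speed: unbiased, noisy
    z_enc = true_v + true_b + random.gauss(0, 0.1)    # encoder: precise but biased
    x, P = scalar_update(x, P, [1.0, 0.0], z_gps, 0.25)
    x, P = scalar_update(x, P, [1.0, 1.0], z_enc, 0.01)
print(x)   # v settles near 5.0, estimated bias near 0.4
```

Because GPS measures `v` alone while the encoder measures `v + b`, the pair makes the bias observable; with GPS removed, `b` would drift unconstrained.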
Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly
Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.
2016-01-01
Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters, then the parameters related to product transformations, and finally product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Parameter estimation in tree graph metabolic networks
Directory of Open Access Journals (Sweden)
Laura Astola
2016-09-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to the usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
Shimura, Masashi; Gosho, Masahiko; Hirakawa, Akihiro
2017-02-17
Group sequential designs are widely used in clinical trials to determine whether a trial should be terminated early. In such trials, maximum likelihood estimates are often used to describe the difference in efficacy between the experimental and reference treatments; however, these are well known for displaying conditional and unconditional biases. Established bias-adjusted estimators include the conditional mean-adjusted estimator (CMAE), conditional median unbiased estimator, conditional uniformly minimum variance unbiased estimator (CUMVUE), and weighted estimator. However, their performances have been inadequately investigated. In this study, we review the characteristics of these bias-adjusted estimators and compare their conditional bias, overall bias, and conditional mean-squared errors in clinical trials with survival endpoints through simulation studies. The coverage probabilities of the confidence intervals for the four estimators are also evaluated. We find that the CMAE reduced conditional bias and showed relatively small conditional mean-squared errors when the trials terminated at the interim analysis. The conditional coverage probability of the conditional median unbiased estimator was well below the nominal value. In trials that did not terminate early, the CUMVUE performed with less bias and a more acceptable conditional coverage probability than the other estimators. In conclusion, when planning an interim analysis, we recommend using the CUMVUE for trials that do not terminate early and the CMAE for those that terminate early. Copyright © 2017 John Wiley & Sons, Ltd.
Parameter estimation for lithium ion batteries
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Background: Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results: The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
Parameter Estimation in Active Plate Structures
DEFF Research Database (Denmark)
Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.
2006-01-01
In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem...... through gradient based optimization techniques, while the second is based on a metamodel of the inverse problem, using artificial neural networks. A numerical higher order finite element laminated plate model is used in both methods and results are compared and discussed through a simulated...
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451
A Consistent Direct Method for Estimating Parameters in Ordinary Differential Equations Models
Holte, Sarah E.
2016-01-01
Ordinary differential equations provide an attractive framework for modeling temporal dynamics in a variety of scientific settings. We show how consistent estimation for parameters in ODE models can be obtained by modifying a direct (non-iterative) least squares method similar to the direct methods originally developed by Himmelblau, Jones and Bischoff. Our method is called the bias-corrected least squares (BCLS) method since it is a modification of least squares methods known to be biased. Co...
Estimating Infiltration Parameters from Basic Soil Properties
van de Genachte, G.; Mallants, D.; Ramos, J.; Deckers, J. A.; Feyen, J.
1996-05-01
Infiltration data were collected on two rectangular grids with 25 sampling points each. Both experimental grids were located in tropical rain forest (Guyana), the first in an Arenosol area and the second in a Ferralsol field. Four different infiltration models were evaluated based on their performance in describing the infiltration data. The model parameters were estimated using non-linear optimization techniques. The infiltration behaviour in the Ferralsol was equally well described by the equations of Philip, Green-Ampt, Kostiakov and Horton. For the Arenosol, the equations of Philip, Green-Ampt and Horton were significantly better than the Kostiakov model. Basic soil properties such as textural composition (percentage sand, silt and clay), organic carbon content, dry bulk density, porosity, initial soil water content and root content were also determined for each sampling point of the two grids. The fitted infiltration parameters were then estimated based on other soil properties using multiple regression. Prior to the regression analysis, all predictor variables were transformed to normality. The regression analysis was performed using two information levels. The first information level contained only three texture fractions for the Ferralsol (sand, silt and clay) and four fractions for the Arenosol (coarse, medium and fine sand, and silt and clay). At the first information level the regression models explained up to 60% of the variability of some of the infiltration parameters for the Ferralsol field plot. At the second information level the complete textural analysis was used (nine fractions for the Ferralsol and six for the Arenosol). At the second information level a principal components analysis (PCA) was performed prior to the regression analysis to overcome the problem of multicollinearity among the predictor variables. Regression analysis was then carried out using the orthogonally transformed soil properties as the independent variables. Results for
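For a fixed decay rate k, Horton's equation f(t) = fc + (f0 - fc)e^(-kt) is linear in (fc, f0 - fc), so a simple nonlinear fit can grid-search k and solve the remaining two parameters in closed form. A sketch on synthetic data; all values are assumptions, not the Guyana field measurements.

```python
import math, random

def horton(t, f0, fc, k):
    # Horton infiltration capacity: decays from f0 toward fc at rate k
    return fc + (f0 - fc) * math.exp(-k * t)

random.seed(4)
ts = [0.1 * i for i in range(1, 121)]        # time (h), illustrative
true = dict(f0=60.0, fc=10.0, k=2.0)         # mm/h and 1/h, assumed values
fs = [horton(t, **true) + random.gauss(0, 0.5) for t in ts]

# For fixed k the model is fc + a * exp(-k t) with a = f0 - fc, so solve
# the 2x2 normal equations for (fc, a) and grid-search k.
best = None
for k in [0.5 + 0.05 * i for i in range(61)]:
    e = [math.exp(-k * t) for t in ts]
    n, se, see = len(ts), sum(e), sum(v * v for v in e)
    sf, sfe = sum(fs), sum(f * v for f, v in zip(fs, e))
    det = n * see - se * se
    fc = (see * sf - se * sfe) / det
    a = (n * sfe - se * sf) / det
    sse = sum((f - fc - a * v) ** 2 for f, v in zip(fs, e))
    if best is None or sse < best[0]:
        best = (sse, fc + a, fc, k)
print(best[1:])   # (f0, fc, k) close to (60, 10, 2.0)
```

The same profile-one-parameter trick applies to the Philip and Green-Ampt forms; the Kostiakov power law can instead be linearized by taking logarithms.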
Influence of parameter estimation uncertainty in Kriging: Part 1 - Theoretical Development
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
This paper deals with a theoretical approach to assessing the effects of parameter estimation uncertainty both on Kriging estimates and on their estimated error variance. Although a comprehensive treatment of parameter estimation uncertainty is covered by full Bayesian Kriging at the cost of extensive numerical integration, the proposed approach has a wide field of application, given its relative simplicity. The approach is based upon a truncated Taylor expansion approximation and, within the limits of the proposed approximation, the conventional Kriging estimates are shown to be biased for all variograms, the bias depending upon the second order derivatives with respect to the parameters times the variance-covariance matrix of the parameter estimates. A new Maximum Likelihood (ML) estimator for semi-variogram parameters in ordinary Kriging, based upon the assumption of a multi-normal distribution of the Kriging cross-validation errors, is introduced as a means for estimating the parameter variance-covariance matrix. Keywords: Kriging, maximum likelihood, parameter estimation, uncertainty
Bias-corrected estimation in potentially mildly explosive autoregressive models
DEFF Research Database (Denmark)
Haufmann, Hendrik; Kruse, Robinson
that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability for mildly explosive processes. In an empirical...
Fast cosmological parameter estimation using neural networks
Auld, T; Hobson, M P; Gull, S F
2006-01-01
We present a method for accelerating the calculation of CMB power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called CosmoNet, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released Pico algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of CosmoNet by computing CMB power spectra over a box in the parameter space of flat \\Lambda CDM models containing the 3\\sigma WMAP1 confidence region. We also use CosmoNet to compute the WMAP3 likelihood for flat \\Lambda CDM models and show that marginalised posteriors on parameters derived are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3% of cosmic variance, and that CosmoNet is \\sim 7 \\times 10^4 faster than CAMB (for flat ...
Cosmological parameter estimation: impact of CMB aberration
Catena, Riccardo
2012-01-01
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm's via a mixing matrix at any l. The common lore when performing a CMB based cosmological parameter estimation is to consider that Doppler affects only the l=1 multipole, and neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with a MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fidu...
Directory of Open Access Journals (Sweden)
Emad Habib
2014-07-01
Results of numerous evaluation studies indicate that satellite-rainfall products are contaminated with significant systematic and random errors. Therefore, such products may require refinement and correction before being used for hydrologic applications. In the present study, we explore a rainfall-runoff modeling application using the Climate Prediction Center-MORPHing (CMORPH) satellite rainfall product. The study area is the Gilgel Abbay catchment situated at the source basin of the Upper Blue Nile basin in Ethiopia, Eastern Africa. Rain gauge networks in such areas are typically sparse. We examine different bias correction schemes applied locally to the CMORPH product. These schemes vary in the degree to which spatial and temporal variability in the CMORPH bias fields are accounted for. Three schemes are tested: space and time invariant; time variant and spatially invariant; and space and time variant. Bias-corrected CMORPH products were used to calibrate and drive the Hydrologiska Byråns Vattenbalansavdelning (HBV) rainfall-runoff model. Applying the space- and time-fixed bias correction scheme resulted in slight improvement of the CMORPH-driven runoff simulations, but in some instances caused deterioration. Accounting for temporal variation in the bias reduced the rainfall bias by up to 50%. Additional improvements were observed when both the spatial and temporal variability in the bias was accounted for. The rainfall bias was found to have a pronounced effect on model calibration. The calibrated model parameters changed significantly when using rainfall input from gauges alone, uncorrected CMORPH estimates, and bias-corrected CMORPH estimates. Changes of up to 81% were obtained for model parameters controlling the stream flow volume.
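The difference between a space- and time-invariant bias factor and a time-variant one can be illustrated with a toy multiplicative bias correction against gauge data; the seasonal bias shape, noise levels, and window width below are assumptions, not the study's actual CMORPH correction scheme.

```python
import math, random

random.seed(5)
# Synthetic daily "gauge" truth and a satellite product whose multiplicative
# bias drifts seasonally (all values illustrative)
T = 365
gauge = [max(0.0, random.gauss(5, 3)) for _ in range(T)]
bias_t = [1.5 + 0.5 * math.sin(2 * math.pi * d / 365) for d in range(T)]
sat = [g * b * (1 + random.gauss(0, 0.1)) for g, b in zip(gauge, bias_t)]

# Scheme 1: one space- and time-invariant factor for the whole record
bf_fixed = sum(gauge) / sum(sat)

# Scheme 2: time-variant factor from a moving window of gauge/satellite sums
def bf_window(day, width=30):
    lo = max(0, day - width)
    g, s = sum(gauge[lo:day + 1]), sum(sat[lo:day + 1])
    return g / s if s > 0 else 1.0

err_fixed = sum(abs(sat[d] * bf_fixed - gauge[d]) for d in range(T)) / T
err_tvar = sum(abs(sat[d] * bf_window(d) - gauge[d]) for d in range(T)) / T
print(err_tvar < err_fixed)   # tracking the bias in time reduces the error
```

A space-and-time-variant scheme would additionally compute such factors per pixel or per gauge neighborhood rather than over the whole domain.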
Finite-Sample Bias Propagation in Autoregressive Estimation With the Yule–Walker Method
Broersen, P.M.T.
2009-01-01
The Yule-Walker (YW) method for autoregressive (AR) estimation uses lagged-product (LP) autocorrelation estimates to compute an AR parametric spectral model. The LP estimates only have a small triangular bias in the estimated autocorrelation function and are asymptotically unbiased. However, using t
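The finite-sample bias of the lagged-product autocorrelation estimate can be seen in a short Monte Carlo for an AR(1) process; the coefficient, sample size, and replication count below are illustrative assumptions.

```python
import random

random.seed(6)
def ar1_series(a, n, burn=50):
    # Simulate an AR(1) process after discarding a burn-in transient
    x, out = 0.0, []
    for i in range(n + burn):
        x = a * x + random.gauss(0, 1)
        if i >= burn:
            out.append(x)
    return out

def yw_ar1(x):
    # Lagged-product estimate of the lag-1 autocorrelation. The denominator
    # sums all n squared terms while the numerator has only n-1 products,
    # which produces the small triangular bias toward zero in short records.
    num = sum(x[t] * x[t + 1] for t in range(len(x) - 1))
    den = sum(v * v for v in x)
    return num / den

a_true, n, reps = 0.9, 25, 4000
mean_est = sum(yw_ar1(ar1_series(a_true, n)) for _ in range(reps)) / reps
print(mean_est < a_true)   # YW underestimates |a| in small samples
```

The estimator is asymptotically unbiased, but for strongly correlated processes and short records the shrinkage toward zero is substantial, which is what propagates into the fitted AR spectrum.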
Helsel, D.R.; Gilliom, R.J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
Noncoherent sampling technique for communications parameter estimations
Su, Y. T.; Choi, H. J.
1985-01-01
This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed and related numerical results are presented also.
Parameter estimation in LISA Pathfinder operational exercises
Nofrarias, Miquel; Congedo, Giuseppe; Hueller, Mauro; Armano, M; Diaz-Aguilo, M; Grynagier, A; Hewitson, M
2011-01-01
The LISA Pathfinder data analysis team has, over the last several years, been developing the infrastructure and methods required to run the mission during flight operations. These are gathered in the LTPDA toolbox, an object-oriented MATLAB toolbox that provides all the data analysis functionality for the mission, while storing the history of all operations performed on the data, thus easing traceability and reproducibility of the analysis. The parameter estimation methods in the toolbox have been applied recently to data sets generated with the OSE (Off-line Simulations Environment), a detailed LISA Pathfinder non-linear simulator that will serve as a reference simulator during mission operations. These operational exercises aim at testing the on-orbit experiments in a realistic environment in terms of software and time constraints. These simulations, so-called operational exercises, are the last verification step before translating these experiments into tele-command sequences for the spacecraft, producing therefore ve...
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim
2014-01-01
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Bias in estimating food consumption of fish from stomach-content analysis
DEFF Research Database (Denmark)
Rindorf, Anna; Lewy, Peter
2004-01-01
This study presents an analysis of the bias introduced by using simplified methods to calculate food intake of fish from stomach contents. Three sources of bias were considered: (1) the effect of estimating consumption based on a limited number of stomach samples, (2) the effect of using average ...
Zhang, Yu; Seo, Dong-Jun
2017-03-01
This paper presents novel formulations of Mean field bias (MFB) and local bias (LB) correction schemes that incorporate conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of MFB algorithms are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small, but consistent improvements in bias and CB, while its impacts on hourly correlation and Root Mean Square Error (RMSE) are mixed. Incorporating CB penalty in LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km2), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix which leads to no, or smaller incremental changes to the smoothed rainfall amounts.
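The recursive mean field bias estimator can be sketched as a pair of exponential smoothers, one for gauge totals and one for radar totals, whose ratio tracks a slowly varying bias; the smoothing weight and simulated bias below are assumptions, not the MPE operational settings, and the CB-penalized extensions described above are not reproduced.

```python
import random

random.seed(7)
alpha = 0.05                  # exponential smoothing weight (assumed)
sg = sr = 0.0                 # smoothed gauge and radar accumulations
true_bias = 1.3               # radar underestimates rainfall by this factor
est_history = []
for hour in range(1000):
    gauge = max(0.0, random.gauss(4, 2))                  # hourly gauge total
    radar = gauge / true_bias * (1 + random.gauss(0, 0.2))  # noisy radar total
    sg = (1 - alpha) * sg + alpha * gauge
    sr = (1 - alpha) * sr + alpha * radar
    est_history.append(sg / sr if sr > 0 else 1.0)
print(est_history[-1])   # settles near the true multiplicative bias, 1.3
```

Smoothing the two accumulations separately and then taking the ratio, rather than smoothing per-hour ratios, keeps the estimator stable through hours with little or no rain.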
Khaemba, W.; Stein, A.
2002-01-01
Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like
Bias in the Weibull Strength Estimation of a SiC Fiber for the Small Gauge Length Case
Morimoto, Tetsuya; Nakagawa, Satoshi; Ogihara, Shinji
It is known that the single-modal Weibull model describes well the size effect on brittle-fiber tensile strength. However, for some ceramic fibers the single-modal Weibull model has been reported to give biased estimates of the gauge-length dependence. One hypothesis for this bias is that the density of critical defects is very small, so the fracture probability of small-gauge-length samples is distributed in a discrete manner, which makes the Weibull parameters dependent on the gauge length. Tyranno ZMI Si-Zr-C-O fiber was selected as an example fiber. Tensile tests were performed at several gauge lengths, and the derived Weibull parameters showed a dependence on the gauge length. Fracture surfaces were observed with SEM, and we classified them into characteristic fracture patterns. The percentage of each fracture pattern was also found to depend on the gauge length. This may be an important factor in the dependence of the Weibull parameters on the gauge length.
Directory of Open Access Journals (Sweden)
James O Lloyd-Smith
BACKGROUND: The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. METHODOLOGY: This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. CONCLUSIONS: Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
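The small-sample upward bias described above can be reproduced with a short simulation. The sketch below uses the simpler method-of-moments estimator k̂ = m²/(s² − m) rather than the maximum-likelihood estimator studied in the paper, and illustrative values (k = 0.5, μ = 2); all names and settings are ours, not the article's.

```python
import math
import random
import statistics

random.seed(1)

def nbinom_sample(k, mu):
    """Negative binomial draw via a gamma-Poisson mixture:
    lam ~ Gamma(shape=k, scale=mu/k), then X ~ Poisson(lam)."""
    lam = random.gammavariate(k, mu / k)
    # Knuth's Poisson sampler (adequate for the modest rates used here)
    limit, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p < limit:
            return n
        n += 1

def k_moment(xs):
    """Method-of-moments estimate of the dispersion parameter k."""
    m = statistics.mean(xs)
    v = statistics.variance(xs)
    return m * m / (v - m) if v > m else math.inf

true_k, mu = 0.5, 2.0          # highly overdispersed (k < 1)
medians = {}
for n in (20, 1000):
    estimates = []
    while len(estimates) < 300:
        sample = [nbinom_sample(true_k, mu) for _ in range(n)]
        k_hat = k_moment(sample)
        if math.isfinite(k_hat):   # drop degenerate samples with variance <= mean
            estimates.append(k_hat)
    medians[n] = statistics.median(estimates)

print(medians)
```

Comparing the two medians shows how much further the n = 20 estimates stray from k = 0.5 than the n = 1000 ones.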
Multiple nonlinear parameter estimation using PI feedback control
Lith, van P. F.; Witteveen, H.; Betlem, B.H.L.; Roffel, B.
2001-01-01
Nonlinear parameters often need to be estimated during the building of chemical process models. To accomplish this, many techniques are available. This paper discusses an alternative view to parameter estimation, where the concept of PI feedback control is used to estimate model parameters. The appr
Abate, Alexandra; Teodoro, Luis F A; Warren, Michael S; Hendry, Martin
2008-01-01
We investigate methods to best estimate the normalisation of the mass density fluctuation power spectrum (sigma_8) using peculiar velocity data from a survey like the Six degree Field Galaxy Velocity Survey (6dFGSv). We focus on two potential problems (i) biases from nonlinear growth of structure and (ii) the large number of velocities in the survey. Simulations of LambdaCDM-like models are used to test the methods. We calculate the likelihood from a full covariance matrix of velocities averaged in grid cells. This simultaneously reduces the number of data points and smooths out nonlinearities which tend to dominate on small scales. We show how the averaging can be taken into account in the predictions in a practical way, and show the effect of the choice of cell size. We find that a cell size can be chosen that significantly reduces the nonlinearities without significantly increasing the error bars on cosmological parameters. We compare our results with those from a principal components analysis following Wa...
Systematic biases on galaxy haloes parameters from Yukawa-like gravitational potentials
Cardone, V F
2011-01-01
A viable alternative to the dark energy as a solution of the cosmic speed up problem is represented by Extended Theories of Gravity. Should this be indeed the case, there will be an impact not only on cosmological scales, but also at any scale, from the Solar System to extragalactic ones. In particular, the gravitational potential can be different from the Newtonian one commonly adopted when computing the circular velocity fitted to spiral galaxies rotation curves. Phenomenologically modelling the modified point mass potential as the sum of a Newtonian and a Yukawa like correction, we simulate observed rotation curves for a spiral galaxy described as the sum of an exponential disc and a NFW dark matter halo. We then fit these curves assuming parameterized halo models (either with an inner cusp or a core) and using the Newtonian potential to estimate the theoretical rotation curve. Such a study allows us to investigate the bias on the disc and halo model parameters induced by the systematic error induced by fo...
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
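As a minimal illustration of the two inferential styles compared above (with made-up numbers, not the decompression-sickness model itself), consider estimating a single event probability: the ML route yields a point value, while a grid-evaluated posterior with a flat prior yields a credible interval directly.

```python
import math

events, trials = 3, 50          # hypothetical: 3 events in 50 exposures
p_mle = events / trials         # maximum likelihood treats p as a fixed value

# Bayesian route: flat prior, posterior evaluated on a grid,
# posterior(p) proportional to p^events * (1 - p)^(trials - events)
grid = [i / 10000 for i in range(1, 10000)]
weights = [math.exp(events * math.log(p) + (trials - events) * math.log(1.0 - p))
           for p in grid]
total = sum(weights)
weights = [w / total for w in weights]

# 95% equal-tailed credible interval from the cumulative posterior
cum, lo, hi = 0.0, None, None
for p, w in zip(grid, weights):
    cum += w
    if lo is None and cum >= 0.025:
        lo = p
    if hi is None and cum >= 0.975:
        hi = p

print(p_mle, lo, hi)
```

The interval [lo, hi] is a direct probability statement about the parameter, which is the point the abstract makes in favor of the Bayesian estimates.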
Directory of Open Access Journals (Sweden)
Boulesteix Anne-Laure
2009-12-01
Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
Estimation of high altitude Martian dust parameters
Pabari, Jayesh; Bhalodi, Pinali
2016-07-01
Dust devils are known to occur near the Martian surface, mostly during the middle of the Southern-hemisphere summer, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high-altitude Martian dust could be secondary ejecta produced by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can then easily escape the moons. The escaping dust is expected to form dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high-altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism existing between the ring or tori and the sources, the dust rings or tori can be sustained over a period of time. Recently, very high-altitude dust at about 1000 km has been found by the MAVEN mission, and the dust is expected to be concentrated at about 150 to 500 km. However, it remains a mystery how dust has reached such high altitudes. Estimating the dust parameters beforehand is necessary to design an instrument for the detection of high-altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate primarily responsible for the formation of the dust ring or tori, the lifetime of dust particles around Mars, the dust number density, and the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding this scenario and designing an orbiter-based instrument to measure the dust surrounding Mars. Further work is underway.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
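For the one-compartment IV-bolus case, the back-calculation from non-compartmental quantities reduces to closed-form relations. The sketch below uses invented input values and does not reproduce the paper's Solver-based spreadsheet; it only shows the standard identities CL = dose/AUC, ke = ln 2 / t½, V = CL/ke.

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """Back-calculate one-compartment IV-bolus parameters from
    non-compartmental variables: CL = dose/AUC, ke = ln2/t_half, V = CL/ke."""
    cl = dose / auc              # clearance
    ke = math.log(2) / t_half    # elimination rate constant
    v = cl / ke                  # volume of distribution
    return cl, ke, v

# Illustrative inputs: 100 mg dose, AUC = 50 mg*h/L, terminal half-life 4 h
cl, ke, v = one_compartment_from_nca(dose=100.0, auc=50.0, t_half=4.0)
print(cl, ke, v)   # CL = 2.0 L/h
```

A compartmental fit (e.g. in NONMEM) estimates these parameters jointly from concentration-time data; the identities above are the deterministic skeleton such a back analysis rests on.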
Health indicators: eliminating bias from convenience sampling estimators.
Hedt, Bethany L; Pagano, Marcello
2011-02-28
Public health practitioners are often called upon to make inference about a health indicator for a population at large when the only available information is data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This not only allows us to take advantage of powerful inferential tools, but also provides more accurate information than is available from using data from the random sample alone.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Attitude and gyro bias estimation by the rotation of an inertial measurement unit
Wu, Zheming; Sun, Zhenguo; Zhang, Wenzeng; Chen, Qiang
2015-12-01
In navigation applications, the presence of an unknown bias in the measurement of rate gyros is a key performance-limiting factor. In order to estimate the gyro bias and improve the accuracy of attitude measurement, we propose a new method that uses the rotation of an inertial measurement unit (IMU), independent of the rigid-body motion. By actively changing the orientation of the IMU, the proposed method generates sufficient relations between the gyro bias and the tilt-angle (roll and pitch) error via rigid-body dynamics, and the gyro bias, including the bias that causes the heading error, can be estimated and compensated. The rotating-IMU method makes the gravity vector measured by the IMU change continuously in the body-fixed frame. By theoretically analyzing the mathematical model, the convergence of the attitude and gyro bias to their true values is proven. The proposed method provides good attitude estimation using only measurements from an IMU, when other sensors such as magnetometers and GPS are unreliable. The performance of the proposed method is illustrated under realistic robotic motions, and the results demonstrate an improvement in the accuracy of the attitude estimation.
Directory of Open Access Journals (Sweden)
Orlov A. I.
2015-05-01
According to the new paradigm of applied mathematical statistics one should prefer non-parametric methods and models. However, in applied statistics we currently use a variety of parametric models. The term "parametric" means that the probabilistic-statistical model is fully described by a finite-dimensional vector of fixed dimension, and this dimension does not depend on the size of the sample. In parametric statistics the estimation problem is to estimate the unknown (to the statistician) value of a parameter by the best (in some sense) method. In the statistical problems of standardization and quality control we use a three-parameter family of gamma distributions. In this article, it is considered as an example of a parametric distribution family. We compare the methods for estimating the parameters. The method of moments is universal. However, the estimates obtained with the help of the method of moments have optimal properties only in rare cases. Maximum likelihood estimation (MLE) belongs to the class of best asymptotically normal estimates. In most cases, analytical solutions do not exist; therefore, to find the MLE it is necessary to apply numerical methods. However, the use of numerical methods creates numerous problems. Convergence of iterative algorithms requires justification. In a number of examples of the analysis of real data, the likelihood function has many local maxima, and because of that natural iterative procedures do not converge. We suggest the use of one-step estimates (OS-estimates). They have asymptotic properties as good as those of the maximum likelihood estimates, under the same regularity conditions as MLE. One-step estimates are written in the form of explicit formulas. In this article it is proved that the one-step estimates are best asymptotically normal estimates (under natural conditions). We have found OS-estimates for the gamma distribution and given the results of calculations using data on operating time
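The one-step idea can be shown on a textbook example outside the gamma family treated in the article: the Cauchy location model, where the sample median is a √n-consistent starting point and a single Fisher-scoring step (total information n/2) recovers the efficiency of the MLE without iterating a possibly non-convergent procedure. The values and names below are illustrative, not from the article.

```python
import math
import random
import statistics

random.seed(7)
theta_true = 3.0
n = 4000
# Standard Cauchy samples shifted by theta_true (inverse-CDF sampling)
xs = [theta_true + math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

theta0 = statistics.median(xs)      # root-n consistent initial estimate

# One Newton-type (Fisher-scoring) step on the log-likelihood;
# score of the Cauchy location model, Fisher information n/2
score = sum(2 * (x - theta0) / (1 + (x - theta0) ** 2) for x in xs)
theta1 = theta0 + score / (n / 2)

print(theta0, theta1)
```

The single explicit step replaces an iterative search, which is exactly the practical appeal of OS-estimates when the likelihood surface is badly behaved.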
Univariate and Default Standard Unit Biases in Estimation of Body Weight and Caloric Content
Geier, Andrew B.; Rozin, Paul
2009-01-01
College students estimated the weight of adult women from either photographs or a live presentation by a set of models and estimated the calories in 1 of 2 actual meals. The 2 meals had the same items, but 1 had larger portion sizes than the other. The results suggest: (a) Judgments are biased toward transforming the example in question to the…
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Directory of Open Access Journals (Sweden)
Charlotte Soneson
With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
DEFF Research Database (Denmark)
Jørgensen, Bent; Demétrio, Clarice G.B.; Kristensen, Erik
2011-01-01
Estimation of Taylor’s power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating...
Starman, Jared; Tognina, Carlo; Virshup, Gary; Star-lack, Josh; Mollov, Ivan; Fahrig, Rebecca
2008-03-01
Digital flat panel a-Si x-ray detectors can exhibit image lag of several percent. The image lag can limit the temporal resolution of the detector, and introduce artifacts into CT reconstructions. It is believed that the majority of image lag is due to defect states, or traps, in the a-Si layer. Software methods to characterize and correct for the image lag exist, but they may make assumptions such as the system behaving in a linear time-invariant manner. The proposed method of reducing lag is a hardware solution that requires few additional hardware changes. For pulsed irradiation, the proposed method inserts a new stage in between the readout of the detector and the data collection stages. During this stage the photodiode is operated in a forward bias mode, which fills the defect states with charge. Parameters of importance are current per diode and current duration, which were investigated under light illumination via the following design parameters: 1.) forward bias voltage across the photodiode and TFT switch, 2.) number of rows simultaneously forward biased, and 3.) duration of the forward bias current. From measurements, it appears that good design criteria for the particular imager used are 8 or fewer active rows, 2.9V (or greater) forward bias voltage, and a row frequency of 100 kHz or less. Overall, the forward bias method has been found to reduce first frame lag by as much as 95%. The panel was also tested under x-ray irradiation. Image lag improved (94% reduction), but the temporal response of the scintillator became evident in the turn-on step response.
Bayesian parameter estimation by continuous homodyne detection
Kiilerich, Alexander Holm; Mølmer, Klaus
2016-09-01
We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.
Baker Syed; Poskar C; Junker Björn
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...
An approach for parameter estimation of biotechnological processes
Energy Technology Data Exchange (ETDEWEB)
Ljubenova, V. (Central Lab. of Bioinstrumentation and Automation, Bulgarian Academy of Sciences, Sofia (Bulgaria)); Ignatova, M.
1994-08-01
An approach to the design of parameter estimators for biotechnological processes (BTP) is presented for the case where real-time information about state variables is lacking. It is based on general reaction-rate models and measurements of at least one reaction rate. A general parameter estimator of BTP is designed, from which specific rate estimators are synthesized. Stability and convergence of an estimator of the specific growth rate for a class of aerobic batch processes are proved, and its effectiveness is illustrated by simulation results. The proposed on-line parameter estimation approach can be used for the design of on-line variable estimation algorithms for BTP (variable observers of BTP). (orig.)
Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model
DEFF Research Database (Denmark)
Åberg, Andreas; Widd, Anders; Abildskov, Jens;
2016-01-01
A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests, or p...
Adaptive on-line estimation and control of overlay tool bias
Martinez, Victor M.; Finn, Karen; Edgar, Thomas F.
2003-06-01
Modern lithographic manufacturing processes rely on various types of exposure tools, used in a mix-and-match fashion. The motivation to use older tools alongside state-of-the-art tools is lower cost, and one of the tradeoffs is a degradation in overlay performance. While average prices of semiconductor products continue to fall, the cost of manufacturing equipment rises with every product generation. Lithography processing, including the cost of ownership for tools, accounts for roughly 30% of the wafer processing costs, thus the importance of mix-and-match strategies. Exponentially Weighted Moving Average (EWMA) run-by-run controllers are widely used in the semiconductor manufacturing industry. This type of controller has been implemented successfully in volume manufacturing, improving Cpk values dramatically in processes like photolithography and chemical mechanical planarization. This simple but powerful control scheme is well suited for adding corrections to compensate for Overlay Tool Bias (OTB). We have developed an adaptive estimation technique to compensate for overlay variability due to differences in the processing tools. The OTB can be dynamically calculated for each tool, based on the most recent measurements available, and used to correct the control variables. One approach to tracking the effect of different tools is adaptive modeling and control. The basic premise of an adaptive system is to change or adapt the controller as the operating conditions of the system change. Using closed-loop data, the adaptive control algorithm estimates the controller parameters using a recursive estimation technique. Once an updated model of the system is available, model-based control becomes feasible. In the simplest scenario, the control law can be reformulated to include the current state of the tool (or its estimate) to compensate dynamically for OTB. We have performed simulation studies to predict the impact of deploying this strategy in production. The results
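The EWMA run-by-run scheme described above can be sketched in a few lines: each tool carries its own bias state, the controller compensates the current estimate, and the observed offset updates the estimate. The gains, noise level and tool names below are invented for illustration, not taken from the paper.

```python
# EWMA run-by-run estimation of per-tool bias for a process y = a*u + bias + noise
import random

random.seed(0)
a, target, lam = 1.0, 10.0, 0.3          # process gain, setpoint, EWMA weight
true_bias = {"toolA": 1.5, "toolB": -0.8}  # unknown to the controller
bias_hat = {"toolA": 0.0, "toolB": 0.0}

for run in range(200):
    tool = random.choice(sorted(true_bias))
    u = target - bias_hat[tool]          # control move compensates estimated bias
    y = a * u + true_bias[tool] + random.gauss(0, 0.05)   # simulated measurement
    # EWMA update of this tool's bias estimate from the observed offset
    bias_hat[tool] = lam * (y - a * u) + (1 - lam) * bias_hat[tool]

print(bias_hat)
```

After a few dozen runs per tool the estimates settle near the true biases, so each tool's output tracks the target despite the mix-and-match tool set.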
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...
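The GLUE methodology mentioned above can be sketched in a few lines: sample parameter sets from a prior range, score each with an informal likelihood, keep the "behavioural" sets above a threshold, and form likelihood-weighted uncertainty bounds. The toy linear model, likelihood measure and threshold below are our own choices for illustration.

```python
import math
import random

random.seed(42)
xs = [1, 2, 3, 4, 5]
obs = [2.1, 3.9, 6.2, 7.8, 10.1]       # synthetic observations of y ≈ 2x

samples = []
for _ in range(5000):
    m = random.uniform(0.0, 4.0)       # draw parameter from its prior range
    sse = sum((m * x - o) ** 2 for x, o in zip(xs, obs))
    like = math.exp(-sse)              # informal likelihood measure
    if like > 1e-6:                    # behavioural threshold
        samples.append((m, like))

# Likelihood-weighted 5% / 95% parameter bounds over the behavioural sets
samples.sort()
total = sum(w for _, w in samples)
cum, lo, hi = 0.0, None, None
for m, w in samples:
    cum += w / total
    if lo is None and cum >= 0.05:
        lo = m
    if hi is None and cum >= 0.95:
        hi = m

print(lo, hi)
```

The bounds bracket the slope used to generate the data; in a groundwater application the same recipe is run with the simulation model in place of `m * x`.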
Walker, Neff; Hill, Kenneth; Zhao, Fengmin
2012-01-01
In most low- and middle-income countries, child mortality is estimated from data provided by mothers concerning the survival of their children using methods that assume no correlation between the mortality risks of the mothers and those of their children. This assumption is not valid for populations with generalized HIV epidemics, however, and in this review, we show how the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME) uses a cohort component projection model to correct for AIDS-related biases in the data used to estimate trends in under-five mortality. In this model, births in a given year are identified as occurring to HIV-positive or HIV-negative mothers, the lives of the infants and mothers are projected forward using survivorship probabilities to estimate survivors at the time of a given survey, and the extent to which excess mortality of children goes unreported because of the deaths of HIV-infected mothers prior to the survey is calculated. Estimates from the survey for past periods can then be adjusted for the estimated bias. The extent of the AIDS-related bias depends crucially on the dynamics of the HIV epidemic, on the length of time before the survey that the estimates are made for, and on the underlying non-AIDS child mortality. This simple methodology (which does not take into account the use of effective antiretroviral interventions) gives results qualitatively similar to those of other studies.
A Class of Biased Estimators Based on SVD in Linear Model
Institute of Scientific and Technical Information of China (English)
归庆明; 段清堂; 郭建锋; 周巧云
2003-01-01
In this paper, a class of new biased estimators for the linear model is proposed by modifying the singular values of the design matrix, so as to directly overcome the difficulties caused by ill-conditioning of the design matrix. Some important properties of these new estimators are obtained. By appropriate choices of the biasing parameters, many useful and important estimators can be constructed. An application of these new estimators to three-dimensional position adjustment by distance in spatial coordinate surveys is given. The results show that the proposed biased estimators can effectively overcome ill-conditioning, and that their numerical stability is preferable to that of ordinary least-squares estimation.
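A generic instance of such a class can be sketched by filtering the singular values of the design matrix. The filter below, s/(s² + k), is the ridge-type choice; the paper's exact modification may differ:

```python
import numpy as np

# Hedged sketch: a biased estimator built by damping the singular values of
# the design matrix. The filter f(s) = s / (s^2 + k) is the ridge-type
# choice; k = 0 recovers ordinary least squares.

def svd_biased_estimator(X, y, k=0.0):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    filt = s / (s**2 + k)                 # damped reciprocal singular values
    return Vt.T @ (filt * (U.T @ y))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta                              # noiseless response
b_ols = svd_biased_estimator(X, y, k=0.0) # exact recovery since y is noiseless
b_biased = svd_biased_estimator(X, y, k=10.0)  # shrunken, biased estimate
```

Increasing k trades bias for a smaller coefficient norm, which is exactly what stabilizes the estimate when the design matrix is ill-conditioned.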
METHOD ON ESTIMATION OF DRUG'S PENETRATED PARAMETERS
Institute of Scientific and Technical Information of China (English)
刘宇红; 曾衍钧; 许景锋; 张梅
2004-01-01
Transdermal drug delivery system (TDDS) is a new method for drug delivery. The analysis of plenty of experiments in vitro can lead to a suitable mathematical model for the description of the process of the drug's penetration through the skin, together with the important parameters that are related to the characters of the drugs.After the research work of the experiments data,a suitable nonlinear regression model was selected. Using this model, the most important parameter-penetrated coefficient of 20 drugs was computed.In the result one can find, this work supports the theory that the skin can be regarded as singular membrane.
THEORETICAL ANALYSIS AND PRACTICE ON THE SELECTION OF KEY PARAMETERS FOR HORIZONTAL BIAS BURNER
Institute of Scientific and Technical Information of China (English)
刘泰生; 许晋源
2003-01-01
The air flow ratio and the pulverized-coal mass flux ratio between the rich and lean sides are the key parameters of horizontal bias burner. In order to realize high combustion efficiency, excellent stability of ignition, low NOx emission and safe operation, six principal demands are presented on the selection of key parameters. An analytical model is established on the basis of the demands, the fundamentals of combustion and the operation results. An improved horizontal bias burner is also presented and applied. The experiment and numerical simulation results show the improved horizontal bias burner can realize proper key parameters, lower NOx emission, high combustion efficiency and excellent performance of part load operation without oil support. It also can reduce the circumfluence and low velocity zone existing at the downstream sections of vanes, and avoid the burnout of the lean primary-air nozzle and the jam in the lean primary-air channel. The operation and test results verify the reasonableness and feasibility of the analytical model.
A Comparative Study of Distribution System Parameter Estimation Methods
Energy Technology Data Exchange (ETDEWEB)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems, so estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
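The state-vector augmentation idea can be sketched with a minimal linear example: an unknown constant parameter (here a measurement bias b, a toy stand-in for a line parameter) is appended to the state and estimated by an ordinary Kalman filter. This is an illustration of the technique, not the authors' implementation:

```python
import numpy as np

# Hedged sketch: parameter estimation by state-vector augmentation.
# The "parameter" b is modeled as a constant appended to the state.

A = np.array([[0.9, 0.0],     # known state dynamics: x decays geometrically
              [0.0, 1.0]])    # parameter b modeled as constant
H = np.array([[1.0, 1.0]])    # measurement y = x + b
Q = np.eye(2) * 1e-8          # tiny process noise keeps the filter alive
R = np.array([[1e-2]])        # assumed measurement noise variance

x_true, b_true = 10.0, 2.0
est = np.array([0.0, 0.0])    # initial guess for the augmented state [x, b]
P = np.eye(2) * 10.0          # initial uncertainty

for _ in range(200):
    x_true *= 0.9                          # propagate the true state
    y = np.array([x_true + b_true])        # noiseless synthetic measurement
    est = A @ est                          # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                    # update
    K = P @ H.T @ np.linalg.inv(S)
    est = est + (K @ (y - H @ est)).ravel()
    P = (np.eye(2) - K @ H) @ P

b_est = est[1]                             # converges towards b_true = 2.0
```

Because the pair (A, H) is observable, the cross-covariance built up in P lets the filter infer b even though only the sum x + b is measured.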
A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation
DEFF Research Database (Denmark)
Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri
2014-01-01
) algorithm, which estimates the atom parameters along with the model order and weighting coefficients. Numerical experiments for spectral estimation with closely-spaced frequency components, show that the proposed SBL algorithm outperforms subspace and compressed sensing methods....
Estimation of motility parameters from trajectory data
DEFF Research Database (Denmark)
Vestergaard, Christian L.; Pedersen, Jonas Nyvold; Mortensen, Kim I.;
2015-01-01
Given a theoretical model for a self-propelled particle or micro-organism, how does one optimally determine the parameters of the model from experimental data in the form of a time-lapse recorded trajectory? For very long trajectories, one has very good statistics, and optimality may matter little...... to which similar results may be obtained also for self-propelled particles....
A Modified Extended Bayesian Method for Parameter Estimation
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
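The core idea, taking the prior mean from the previous iteration, can be sketched as an iterated Tikhonov-regularized least-squares update. This is a generic reading of the method, not the paper's exact update:

```python
import numpy as np

# Hedged sketch: at each iteration, solve a Tikhonov-regularized least-squares
# problem whose prior mean is the previous parameter estimate, so the
# regularization recenters itself as the iteration proceeds.

def modified_extended_bayes(X, y, alpha=1.0, iters=200):
    theta = np.zeros(X.shape[1])                     # initial prior mean
    A = X.T @ X + alpha * np.eye(X.shape[1])
    for _ in range(iters):
        # prior mean for the next step is the current estimate
        theta = np.linalg.solve(A, X.T @ y + alpha * theta)
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
beta = np.array([3.0, -1.0])
y = X @ beta                                         # noiseless data
theta = modified_extended_bayes(X, y, alpha=1.0)
```

The fixed point of this iteration satisfies XᵀXθ = Xᵀy, so the recentered regularization removes the systematic bias of a fixed-prior Tikhonov estimate while keeping each individual solve well-conditioned.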
Parameter estimation and error analysis in environmental modeling and computation
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
A field test of the extent of bias in selection estimates after accounting for emigration
Letcher, B.H.; Horton, G.E.; Dubreuil, T.L.; O'Donnell, M. J.
2005-01-01
Question: To what extent does trait-dependent emigration bias selection estimates in a natural system? Organisms: Two freshwater cohorts of Atlantic salmon (Salmo salar) juveniles. Field site: A 1 km stretch of a small stream (West Brook) in western Massachusetts, USA, from which emigration could be detected continuously. Methods: Estimated viability selection differentials for body size either including or ignoring emigration (include = emigrants survived the interval, ignore = emigrants did not survive the interval) for 12 intervals. Results: Seasonally variable size-related emigration from our study site generated variable levels of bias in selection estimates for body size. The magnitude of this bias was closely related to the extent of size-dependent emigration during each interval. Including or ignoring the effects of emigration changed the significance of selection estimates in 5 of the 12 intervals, and changed the estimated direction of selection in 4 of the 12 intervals. These results indicate the extent to which inferences about selection in a natural system can be biased by failing to account for trait-dependent emigration. © 2005 Benjamin H. Letcher.
Parameter estimation using compensatory neural networks
Indian Academy of Sciences (India)
M Sinha; P K Kalra; K Kumar
2000-04-01
Proposed here is a new neuron model, a basis for a Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has the properties of the basic neuron model as well as of the higher-order neuron model (multiplicative aggregation function). It can adapt to a standard neuron, a higher-order neuron, or a combination of the two. This approach is found to estimate the orbit with accuracy significantly better than the Kalman Filter (KF) and the Feedforward Multilayer Neural Network (FMNN, also simply referred to as an Artificial Neural Network, ANN) with lambda-gamma learning. Typical simulation runs also bring out the superiority of the proposed scheme over the Kalman filter in terms of computation time and the amount of data needed for the desired degree of estimation accuracy for the specific problem of orbit determination.
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their references. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.
Bias in estimating accuracy of a binary screening test with differential disease verification.
Alonzo, Todd A; Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2011-07-10
Sensitivity, specificity, positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies, it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100 per cent sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women where it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects who receive the imperfect reference test rather than the gold standard, the prevalence of the disease, and the correlation between the results of the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem.
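The apparent accuracy values can be sketched directly from expected cell probabilities. The sketch below assumes a common differential-verification design (screen-positives get the gold standard, screen-negatives get the imperfect reference) and conditional independence given true disease status; the paper's derivation may be more general:

```python
# Hedged sketch: apparent sensitivity and specificity under differential
# disease verification. Assumptions: screen-positives are verified by the
# gold standard, screen-negatives by an imperfect reference R with
# sensitivity se_r and specificity sp_r, independent of the screen given
# true disease status. Illustrative only.

def apparent_accuracy(p, se, sp, se_r, sp_r):
    tp = p * se                      # screen+, truly diseased (gold standard)
    fp = (1 - p) * (1 - sp)          # screen+, truly non-diseased
    # screen-negatives: disease status judged by the imperfect reference R
    fn = p * (1 - se) * se_r + (1 - p) * sp * (1 - sp_r)
    tn = p * (1 - se) * (1 - se_r) + (1 - p) * sp * sp_r
    return tp / (tp + fn), tn / (tn + fp)

# with a perfect reference the apparent values equal the true ones
se_app, sp_app = apparent_accuracy(0.01, 0.85, 0.90, 1.0, 1.0)

# a reference that misses disease (se_r < 1, sp_r = 1) hides interval
# cancers among screen-negatives and inflates apparent sensitivity
se_bias, _ = apparent_accuracy(0.01, 0.85, 0.90, 0.7, 1.0)
```

Running the numbers shows the inflation: with a reference that detects only 70% of disease among screen-negatives, apparent sensitivity rises well above the true 0.85.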
Upadhyay, J; Popović, S; Valente-Feliciano, A -M; Phillips, L; Vušković, L
2015-01-01
An rf coaxial capacitively coupled Ar/Cl2 plasma is applied to processing the inner wall of superconducting radio frequency cavities. A dc self-bias potential is established across the inner electrode sheath due to the surface area difference between the inner and outer electrodes of the coaxial plasma. The self-bias potential measurement is used as an indication of the plasma sheath voltage asymmetry. Understanding the asymmetry of the sheath voltage distribution in coaxial plasma is important for the modification of the inner surfaces of three-dimensional objects. The plasma sheath voltages were tailored to process the outer wall by providing an additional dc current to the inner electrode with the help of an external dc power supply. The dc self-bias potential is measured for different diameter electrodes, and its variation with process parameters such as gas pressure, rf power and percentage of chlorine in the Ar/Cl2 gas mixture is studied. The dc current needed to overcome the self-bias potential to make it ...
Systematic Angle Random Walk Estimation of the Constant Rate Biased Ring Laser Gyro
Directory of Open Access Journals (Sweden)
Guohu Feng
2013-02-01
Full Text Available An accurate account of the angle random walk (ARW) coefficients of gyros in the constant rate biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high-cost precision calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, raising the question of how to isolate such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS was discussed, and an experimental method based on the fast orthogonal search (FOS), applied to this observation model to separate the ARW error from the measured RLG data, was proposed. The validity of the FOS-based method was checked by estimating the ARW coefficients of a mechanically dithered RLG under stationary and turntable rotation conditions. Using the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulated system was estimated. The experimental results show that the FOS-based method achieves high denoising ability and estimates the ARW coefficients of the constant rate biased RLG accurately. The FOS-based method does not require an expensive precision calibration table or a complex measuring set-up, and the statistical results of the tests provide a reference for engineering applications of the constant rate biased RLG INS.
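For context, the classical (non-FOS) route to an ARW coefficient is the Allan variance: for white rate noise the Allan deviation falls as N/√τ, so N can be read off at τ = 1 s. The sketch below simulates that baseline; it is not the paper's FOS method:

```python
import numpy as np

# Hedged sketch: ARW coefficient from the overlapping Allan deviation of a
# simulated white-noise gyro rate signal (the textbook baseline, not the
# FOS-based separation described in the paper).

def allan_deviation(rate, dt, m):
    """Overlapping Allan deviation of a rate signal at cluster size m."""
    theta = np.cumsum(rate) * dt                      # integrated angle
    d = theta[2*m:] - 2*theta[m:-m] + theta[:-2*m]    # second differences
    return np.sqrt(np.mean(d**2) / (2.0 * (m * dt)**2))

rng = np.random.default_rng(5)
dt, sigma = 0.01, 1.0                 # 100 Hz sampling, unit-std white rate noise
rate = sigma * rng.normal(size=200_000)

tau = 1.0                             # evaluate at 1 s clusters
N_est = allan_deviation(rate, dt, m=int(tau / dt)) * np.sqrt(tau)
# theory: N = sigma * sqrt(dt) = 0.1
```

On turntable data this naive estimate would be contaminated by the rotation dynamics, which is precisely the problem the FOS-based separation addresses.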
Parameter Estimation for Improving Association Indicators in Binary Logistic Regression
Directory of Open Access Journals (Sweden)
Mahdi Bashiri
2012-02-01
Full Text Available The aim of this paper is the estimation of binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. In this paper the parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, new related indicators based on membership degree level are presented. Association measures indicate the number of success responses occurring against failures in a certain number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. Therefore, the innovation of this study is a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure.
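The iterative procedure referred to above is the standard Newton-Raphson (iteratively reweighted least squares) maximization of the logistic log-likelihood, which can be sketched as follows. This is the generic textbook update, not the paper's implementation:

```python
import numpy as np

# Hedged sketch: Newton-Raphson / IRLS maximization of the binary logistic
# log-likelihood -- the iterative procedure during which association
# indicators would be tracked.

def fit_logistic(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted success probabilities
        W = p * (1.0 - p)                     # IRLS weights
        H = X.T @ (W[:, None] * X)            # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.2])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = fit_logistic(X, y)
```

At convergence the score Xᵀ(y − p) vanishes; an association indicator evaluated at each intermediate beta is what the paper proposes to make sensitive to the estimates.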
Estimating input parameters from intracellular recordings in the Feller neuronal model
Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta
2010-03-01
We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
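A naive estimator of the drift parameters from discrete samples can be sketched by regressing increments on the state. This illustrates the kind of estimator the paper compares; the authors' improved estimator and the absorbing-threshold correction are not reproduced here, and the parameter values are arbitrary:

```python
import numpy as np

# Hedged sketch: least-squares drift estimation for a Feller (square-root)
# diffusion dX = (mu - X/tau) dt + sigma*sqrt(X) dW from equally spaced
# samples, by regressing increments dX on [dt, -X dt].

def estimate_drift(x, dt):
    dx = np.diff(x)
    A = np.column_stack([np.full(len(dx), dt), -x[:-1] * dt])
    (mu_hat, inv_tau_hat), *_ = np.linalg.lstsq(A, dx, rcond=None)
    return mu_hat, 1.0 / inv_tau_hat

# Euler-Maruyama simulation of one long trajectory (no absorbing threshold)
rng = np.random.default_rng(3)
mu, tau, sigma, dt, n = 2.0, 0.5, 0.3, 1e-3, 500_000
noise = rng.normal(size=n - 1) * dt**0.5
x = np.empty(n)
x[0] = mu * tau                                # start at the stationary mean
for i in range(n - 1):
    x[i + 1] = abs(x[i] + (mu - x[i] / tau) * dt
                   + sigma * (x[i] ** 0.5) * noise[i])

mu_hat, tau_hat = estimate_drift(x, dt)
```

With an absorbing threshold, trajectories that would have crossed it are censored, which is what biases this simple estimator and motivates the corrections proposed in the paper.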
Estimation of motility parameters from trajectory data
DEFF Research Database (Denmark)
Vestergaard, Christian L.; Pedersen, Jonas Nyvold; Mortensen, Kim I.;
2015-01-01
Given a theoretical model for a self-propelled particle or micro-organism, how does one optimally determine the parameters of the model from experimental data in the form of a time-lapse recorded trajectory? For very long trajectories, one has very good statistics, and optimality may matter little....... However, for biological micro-organisms, one may not control the duration of recordings, and then optimality can matter. This is especially the case if one is interested in individuality and hence cannot improve statistics by taking population averages over many trajectories. One can learn much about...
Directory of Open Access Journals (Sweden)
B. Bisselink
2016-12-01
New hydrological insights: Results indicate large discrepancies in terms of the linear correlation (r), bias (β) and variability (γ) between the observed and simulated streamflows when using different precipitation estimates as model input. The best model performance was obtained with products which ingest gauge data for bias correction. However, catchment behavior was difficult to capture using a single parameter set, as was obtaining a single robust parameter set for each catchment, which indicates that transposing model parameters should be carried out with caution. Model parameters depend on the precipitation characteristics of the calibration period and should therefore only be used in target periods with similar precipitation characteristics (wet/dry).
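The r, β and γ terms above are, by the usual hydrological convention, the components of the Kling-Gupta efficiency; that convention is assumed here, and the study may define them slightly differently:

```python
import numpy as np

# Hedged sketch: correlation (r), bias ratio (beta) and variability ratio
# (gamma) as used in the Kling-Gupta efficiency (KGE) for streamflow
# evaluation. Assumed convention, not necessarily the study's exact formulas.

def kge_components(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]                  # linear correlation
    beta = np.mean(sim) / np.mean(obs)               # bias ratio
    gamma = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    kge = 1.0 - np.sqrt((r - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)
    return r, beta, gamma, kge

obs = np.array([10.0, 12.0, 8.0, 15.0, 11.0])        # toy streamflow series
r, beta, gamma, kge = kge_components(obs.copy(), obs)  # perfect simulation
```

A perfect simulation gives r = β = γ = 1 and KGE = 1; departures of each component from 1 diagnose which aspect of the simulated streamflow is off.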
Xue, Junchen; Song, Shuli; Zhu, Wenyao
2016-04-01
Differential code biases (DCBs) are important parameters that must be estimated accurately and reliably for high-precision GNSS applications. For optimal operational service performance of the Beidou navigation system (BDS), continuous monitoring and constant quality assessment of the BDS satellite DCBs are crucial. In this study, a global ionospheric model was constructed based on a dual-system BDS/GPS combination. Daily BDS DCBs were estimated together with the total electron content from 23 months of multi-GNSS observations. The stability of the resulting BDS DCB estimates was analyzed in detail. It was found that over a long period, the standard deviations (STDs) of all satellite B1-B2 DCBs were within 0.3 ns (average: 0.19 ns), and the STDs of all satellite B1-B3 DCBs were within 0.36 ns (average: 0.22 ns). For BDS receivers, the STDs were greater than for the satellites. Given the small variation of the BDS satellite DCBs between two consecutive days, they only require occasional estimation or calibration. Furthermore, the 30-day averaged satellite DCBs can be used reliably for the most demanding BDS applications.
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Full Text Available Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias and random errors due to leakage. In this paper the bias error in the H1 and H2-estimate is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
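The H1 estimate described above can be sketched with Welch-style averaged periodograms. A plain rectangular window is used for brevity; as the paper discusses, a Hanning window would reduce the leakage bias:

```python
import numpy as np

# Hedged sketch: H1 frequency-response estimate via block-averaged spectra
# (Welch's method with a rectangular window), the setting in which the
# leakage bias discussed above arises.

def h1_estimate(x, y, nseg, nblocks):
    Gxy = np.zeros(nseg, dtype=complex)   # averaged cross-spectrum
    Gxx = np.zeros(nseg)                  # averaged input auto-spectrum
    for b in range(nblocks):
        X = np.fft.fft(x[b*nseg:(b+1)*nseg])
        Y = np.fft.fft(y[b*nseg:(b+1)*nseg])
        Gxy += np.conj(X) * Y
        Gxx += np.abs(X)**2
    return Gxy / Gxx                      # H1 = Gxy / Gxx

# sanity check: a pure-gain "system" y = 2x has H1 = 2 at every frequency
rng = np.random.default_rng(4)
x = rng.normal(size=4096)
H = h1_estimate(x, 2.0 * x, nseg=256, nblocks=16)
```

For a dynamic system with resonances, the segmentation truncates the correlation between blocks, and it is exactly this truncation that produces the bias at the resonance frequency analyzed in the paper.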
Student Sorting and Bias in Value-Added Estimation: Selection on Observables and Unobservables
Rothstein, Jesse
2009-01-01
Nonrandom assignment of students to teachers can bias value-added estimates of teachers' causal effects. Rothstein (2008, 2010) shows that typical value-added models indicate large counterfactual effects of fifth-grade teachers on students' fourth-grade learning, indicating that classroom assignments are far from random. This article quantifies…
An empirical study on memory bias situations and correction strategies in ERP effort estimation
Erasmus, Pierre; Daneva, Maya; Amrahamsson, Pekka; Corral, Luis; Olivo, Markku; Russo, Barbara
2016-01-01
An Enterprise Resource Planning (ERP) project estimation process often relies on experts of various backgrounds to contribute judgments based on their professional experience. Such expert judgments, however, may not be bias-free. De-biasing techniques have therefore been proposed in software estimation ...
Shape parameter estimate for a glottal model without time position
Degottex, Gilles; Roebel, Axel; Rodet, Xavier
2009-01-01
cote interne IRCAM: Degottex09a; None / None; National audience; From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g. by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip-radiation model. Since this filter is mainly b...
Control and Estimation of Distributed Parameter Systems
Kappel, F; Kunisch, K
1998-01-01
Consisting of 23 refereed contributions, this volume offers a broad and diverse view of current research in control and estimation of partial differential equations. Topics addressed include, but are not limited to - control and stability of hyperbolic systems related to elasticity, linear and nonlinear; - control and identification of nonlinear parabolic systems; - exact and approximate controllability, and observability; - Pontryagin's maximum principle and dynamic programming in PDE; and - numerics pertinent to optimal and suboptimal control problems. This volume is primarily geared toward control theorists seeking information on the latest developments in their area of expertise. It may also serve as a stimulating reader to any researcher who wants to gain an impression of activities at the forefront of a vigorously expanding area in applied mathematics.
Two-state filtering for joint state-parameter estimation
Santitissadeekorn, Naratip
2014-01-01
This paper presents an approach for simultaneous estimation of the state and unknown parameters in a sequential data assimilation framework. The state augmentation technique, in which the state vector is augmented by the model parameters, has been investigated in many previous studies, and some success with this technique has been reported in the case where model parameters are additive. However, many geophysical or climate models contain non-additive parameters, such as those arising from the physical parametrization of sub-grid scale processes, in which case the state augmentation technique may become ineffective, since its inference about parameters from partially observed states, based on the cross covariance between states and parameters, is inadequate if states and parameters are not linearly correlated. In this paper, we propose a two-stage filtering technique that runs particle filtering (PF) to estimate parameters while updating the state estimate using an ensemble Kalman filter (EnKF); these two "sub-filters" ...
Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Catherine Giles; Amber Hsiao; Y Claire Wang
2016-01-01
Background: State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods: Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National He...
Observer design for position and velocity bias estimation from a single direction output
Le Bras, Florent; Hamel, Tarek; Mahony, Robert; Samson, Claude
2015-01-01
International audience; This paper addresses the problem of estimating the position of an object moving in R n from direction and velocity measurements. After addressing observability issues associated with this problem, a nonlinear observer is designed so as to encompass the case where the measured velocity is corrupted by a constant bias. Global exponential convergence of the estimation error is proved under a condition of persistent excitation upon the direction measurements. Simulation re...
Tu, Xiaoguang; Gao, Jingjing; Zhu, Chongjing; Cheng, Jie-Zhi; Ma, Zheng; Dai, Xin; Xie, Mei
2016-12-01
Though numerous segmentation algorithms have been proposed to segment brain tissue from magnetic resonance (MR) images, few of them consider combining tissue segmentation and bias field correction into a unified framework while simultaneously removing noise. In this paper, we present a new unified MR image segmentation algorithm whereby tissue segmentation, bias correction and noise reduction are integrated within the same energy model. Our method introduces a total variation term into the coherent local intensity clustering criterion function. To solve the nonconvex problem with respect to the membership functions, we add auxiliary variables to the energy function so that Chambolle's fast dual projection method can be used, and the optimal segmentation and bias field estimation can be achieved simultaneously through the reciprocal iteration. Experimental results show that the proposed method has a salient advantage over the three baseline methods in both tissue segmentation and bias correction, and noise is significantly reduced in applications to highly noise-corrupted images. Moreover, benefiting from the fast convergence of the proposed solution, our method is less time-consuming and robust to parameter settings.
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
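The family of estimators described above can be sketched as an interpolation, governed by a tuning parameter t, between the known-weights average (heterogeneous case) and the precision-weighted average (homogeneous case). The paper selects t by minimizing a design-based MSE estimate; here t is simply passed in, and all numbers are illustrative:

```python
import numpy as np

# Hedged sketch: an adaptive estimator that interpolates between the
# known-weights average of stratum means (t = 0, heterogeneous strata) and
# the precision-weighted average (t = 1, homogeneous strata).

def adaptive_weighted_average(means, variances, weights, t):
    w_known = weights / weights.sum()            # population sampling-frame weights
    w_prec = (1.0 / variances) / (1.0 / variances).sum()  # precision weights
    w = (1.0 - t) * w_known + t * w_prec
    return float(w @ means)

means = np.array([4.1, 3.8, 4.6])       # stratum means (e.g. satisfaction scores)
variances = np.array([0.04, 0.09, 0.25])  # variances of the stratum means
weights = np.array([0.5, 0.3, 0.2])     # known population weights

est_hetero = adaptive_weighted_average(means, variances, weights, t=0.0)
est_homo = adaptive_weighted_average(means, variances, weights, t=1.0)
```

Choosing t by a design-based MSE estimate, as the paper proposes, effectively downweights strata with small weights in the homogeneity assessment instead of fixing t from a random effects model.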
Statistical methods for cosmological parameter selection and estimation
Liddle, Andrew R
2009-01-01
The estimation of cosmological parameters from precision observables is an important industry with crucial ramifications for particle physics. This article discusses the statistical methods presently used in cosmological data analysis, highlighting the main assumptions and uncertainties. The topics covered are parameter estimation, model selection, multi-model inference, and experimental design, all primarily from a Bayesian perspective.
Estimation of Physical Parameters in Linear and Nonlinear Dynamic Systems
DEFF Research Database (Denmark)
Knudsen, Morten
and estimation of physical parameters in particular. 2. To apply the new methods to modelling of specific objects, such as loudspeakers, ac- and dc-motors, wind turbines and heat exchangers. A reliable quality measure of an obtained parameter estimate is a prerequisite for any reasonable use of the result...
On Estimating the Parameters of Truncated Trivariate Normal Distributions
Directory of Open Access Journals (Sweden)
M. N. Bhattacharyya
1969-07-01
Full Text Available Maximum likelihood estimates of the parameters of a trivariate normal distribution, with single truncation on two variates, have been derived in this paper. The information matrix has also been given, from which the asymptotic variances and covariances may be obtained for the estimates of the parameters of the restricted variables. Numerical examples have been worked out.
Indian Academy of Sciences (India)
G Sasibhushana Rao
2007-10-01
The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, the largest of which is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits positional accuracy is instrumental bias, and calibration of these biases is particularly important in achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station over a one-month period, and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 ns. The observed mean bias error is of the order of −3.638 ns and −4.71 ns for satellites 1 and 31, respectively, and the results are found to be consistent over the period.
Parameter estimation of hydrologic models using data assimilation
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode: a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
Parameter and State Estimator for State Space Models
Directory of Open Access Journals (Sweden)
Ruifeng Ding
2014-01-01
Full Text Available This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
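The substitution-and-regress idea is easy to see in a scalar special case. The sketch below (a hypothetical noise-free example, not the authors' algorithm; all names are assumptions) eliminates the state of x_{k+1} = a·x_k + b·u_k, y_k = x_k, leaving y_{k+1} = a·y_k + b·u_k, and recovers (a, b) by least squares:

```python
# Eliminate the state, then fit (a, b) from input-output data alone.
import random

random.seed(5)
a_true, b_true = 0.8, 0.5
u = [random.gauss(0.0, 1.0) for _ in range(500)]
y = [0.0]
for k in range(499):
    y.append(a_true * y[k] + b_true * u[k])

# Normal equations for theta = (a, b) minimizing sum (y_{k+1} - a*y_k - b*u_k)^2
s_yy = sum(y[k] * y[k] for k in range(499))
s_yu = sum(y[k] * u[k] for k in range(499))
s_uu = sum(u[k] * u[k] for k in range(499))
r_y = sum(y[k + 1] * y[k] for k in range(499))
r_u = sum(y[k + 1] * u[k] for k in range(499))
det = s_yy * s_uu - s_yu ** 2
a_hat = (r_y * s_uu - r_u * s_yu) / det
b_hat = (r_u * s_yy - r_y * s_yu) / det
# The states are then recovered by re-running the model with (a_hat, b_hat).
```

With noise-free data the least-squares solution reproduces (a, b) exactly; the paper's contribution lies in the noisy, multivariable case.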
Estimation of ground water hydraulic parameters
Energy Technology Data Exchange (ETDEWEB)
Hvilshoej, Soeren
1998-11-01
The main objective was to assess field methods to determine ground water hydraulic parameters and to develop and apply new analysis methods to selected field techniques. A field site in Vejen, Denmark, which previously has been intensively investigated on the basis of a large amount of mini slug tests and tracer tests, was chosen for experimental application and evaluation. Particular interest was in analysing partially penetrating pumping tests and a recently proposed single-well dipole test. Three wells were constructed in which partially penetrating pumping tests and multi-level single-well dipole tests were performed. In addition, multi-level slug tests, flow meter tests, gamma-logs, and geologic characterisation of soil samples were carried out. Beyond the three Vejen analyses, data from previously published partially penetrating pumping tests were analysed assuming homogeneous anisotropic aquifer conditions. In the present study methods were developed to analyse partially penetrating pumping tests and multi-level single-well dipole tests based on an inverse numerical model. The obtained horizontal hydraulic conductivities from the partially penetrating pumping tests were in accordance with measurements obtained from multi-level slug tests and mini slug tests. Accordance was also achieved between the anisotropy ratios determined from partially penetrating pumping tests and multi-level single-well dipole tests. It was demonstrated that the partially penetrating pumping test analysed by an inverse numerical model is a very valuable technique that may provide hydraulic information on the storage terms and the vertical distribution of the horizontal and vertical hydraulic conductivity under both confined and unconfined aquifer conditions. (EG) 138 refs.
PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA
Institute of Scientific and Technical Information of China (English)
Qian Weimin; Li Yumei
2005-01-01
The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.
An Algorithm for Positive Definite Least Square Estimation of Parameters.
1986-05-01
This document presents an algorithm for positive definite least square estimation of parameters. This estimation problem arises from the PILOT dynamic macro-economic model and is equivalent to an infinite convex quadratic program. It differs from ordinary least square estimations in that the
Li, Chunming; Gore, John C; Davatzikos, Christos
2014-09-01
This paper proposes a new energy minimization method called multiplicative intrinsic component optimization (MICO) for joint bias field estimation and segmentation of magnetic resonance (MR) images. The proposed method takes full advantage of the decomposition of MR images into two multiplicative components, namely, the true image that characterizes a physical property of the tissues and the bias field that accounts for the intensity inhomogeneity, and their respective spatial properties. Bias field estimation and tissue segmentation are simultaneously achieved by an energy minimization process aimed to optimize the estimates of the two multiplicative components of an MR image. The bias field is iteratively optimized by using efficient matrix computations, which are verified to be numerically stable by matrix analysis. More importantly, the energy in our formulation is convex in each of its variables, which leads to the robustness of the proposed energy minimization algorithm. The MICO formulation can be naturally extended to 3D/4D tissue segmentation with spatial/spatiotemporal regularization. Quantitative evaluations and comparisons with some popular software packages have demonstrated superior performance of MICO in terms of robustness and accuracy.
Directory of Open Access Journals (Sweden)
Anupam Pathak
2014-11-01
Full Text Available Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model with a wide variety of applications in many areas, and its main advantage is its suitability for lifetime data relative to other distributions. The uniformly minimum variance unbiased (UMVU) and maximum likelihood estimation methods are two ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the UMVU and maximum likelihood estimators of the reliability function R(t) = P(X > t) and of P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique for obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s); the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through a simulation study, the estimators are compared with respect to bias, Mean Square Error (MSE), 95% confidence length, and the corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P are found to be superior to the MLEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution.
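As a much-reduced illustration of the ML route to reliability functions, the sketch below fits the one-parameter Rayleigh special case (the paper's two-parameter exponentiated Rayleigh and its UMVUE need more machinery; the names and values here are assumptions):

```python
import math
import random

def rayleigh_mle_sigma2(xs):
    # MLE of sigma^2 for Rayleigh data: sum(x^2) / (2n)
    return sum(x * x for x in xs) / (2 * len(xs))

def reliability(t, sigma2):
    # R(t) = P(X > t) = exp(-t^2 / (2*sigma^2)) for the Rayleigh law
    return math.exp(-t * t / (2.0 * sigma2))

random.seed(0)
# Rayleigh(sigma) equals a Weibull with shape 2 and scale sigma*sqrt(2)
true_sigma = 2.0
sample = [random.weibullvariate(true_sigma * math.sqrt(2.0), 2.0)
          for _ in range(5000)]
s2 = rayleigh_mle_sigma2(sample)   # close to true_sigma**2 = 4.0
```

Plugging the ML estimate of σ² into R(t) gives the MLE of the reliability function by invariance; the UMVUE differs by a finite-sample correction.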
Directory of Open Access Journals (Sweden)
Shengyu Jiang
2016-02-01
Full Text Available Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered in a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and from that simulation, variables were registered to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, but the estimated model offers higher flexibility than the model programmed in PSIM software.
Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments
Fendick, Kerry
2011-01-01
This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements, and we introduce methods for estimating each of these types of parameters using multidimensional time series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...
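The autoregressive-coefficient piece of such a model is commonly estimated by least squares on lagged increments. A toy AR(1) sketch (the scalar model form and all values are assumed for illustration, not taken from the paper):

```python
import random

def ar1_fit(d):
    # Least-squares estimates of the AR(1) coefficient and innovation
    # variance from an increment series d_0..d_n.
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    phi = num / den
    resid = [d[t] - phi * d[t - 1] for t in range(1, len(d))]
    return phi, sum(r * r for r in resid) / len(resid)

random.seed(11)
phi_true, sigma = 0.6, 1.0
d = [0.0]
for _ in range(20000):
    d.append(phi_true * d[-1] + random.gauss(0.0, sigma))
phi_hat, var_hat = ar1_fit(d)
```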
Research on the estimation method for Earth rotation parameters
Yao, Yibin
2008-12-01
In this paper, methods of earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. To estimate ERP, two different approaches are involved: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The IGS daily SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method simplifies the process. The results indicate that a systematic error will exist in the ERP estimated from GPS observations alone. Why this distinct systematic error exists in the ERP, whether it affects the estimation of other parameters, and how large its influence is all need further study.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used … distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters…
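The Least Square route mentioned above is often realized as median-rank regression on the linearized Weibull CDF, ln(−ln(1−F)) = k·ln x − k·ln λ. A minimal sketch under that assumption (not the authors' exact procedure; sample values are hypothetical):

```python
import math
import random

def weibull_lsq(xs):
    # Median-rank regression: fit ln(-ln(1-F)) = k*ln(x) - k*ln(lam)
    # by ordinary least squares over the ordered sample.
    xs = sorted(xs)
    n = len(xs)
    X = [math.log(x) for x in xs]
    # Bernard's approximation for the median rank of the i-th failure
    Y = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
         for i in range(1, n + 1)]
    mx, my = sum(X) / n, sum(Y) / n
    k = (sum((a - mx) * (b - my) for a, b in zip(X, Y))
         / sum((a - mx) ** 2 for a in X))
    lam = math.exp(mx - my / k)    # scale recovered from the intercept
    return k, lam

random.seed(4)
sample = [random.weibullvariate(2.0, 1.5) for _ in range(2000)]  # scale 2, shape 1.5
k_hat, lam_hat = weibull_lsq(sample)
```

Maximum likelihood fitting of the same data requires solving a nonlinear equation for the shape parameter, which is why the rank-regression variant is a popular closed-form alternative.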
Thelen, Brian J.; Paxman, Richard G.
1994-01-01
The method of phase diversity has been used in the context of incoherent imaging to jointly estimate an object that is being imaged and phase aberrations induced by atmospheric turbulence. The method requires a parametric model for the phase-aberration function. Typically, the parameters are coefficients of a finite set of basis functions. Care must be taken in selecting a parameterization that properly balances accuracy in the representation of the phase-aberration function with stability in the estimates. It is well known that overparameterization can result in unstable estimates, so a certain amount of model mismatch is often desirable. We derive expressions that quantify the bias and variance in object and aberration estimates as a function of parameter dimension.
Parameter estimation for the Pearson type 3 distribution using order statistics
Rocky Durrans, S.
1992-05-01
The Pearson type 3 distribution and its relatives, the log Pearson type 3 and gamma family of distributions, are among the most widely applied in the field of hydrology. Parameter estimation for these distributions has been accomplished using the method of moments, the methods of mixed moments and generalized moments, and the methods of maximum likelihood and maximum entropy. This study evaluates yet another estimation approach, which is based on the use of the properties of an extreme-order statistic. Based on the hypothesis that the population is distributed as Pearson type 3, this estimation approach yields both parameter and 100-year quantile estimators that have lower biases and variances than those of the method of moments approach as recommended by the US Water Resources Council.
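For contrast with the order-statistic approach evaluated here, the method-of-moments baseline for the Pearson type 3 (shifted gamma) family can be sketched in a few lines (the textbook MoM formulas, not the paper's estimator; the demo sample is hypothetical):

```python
import random

def pearson3_mom(xs):
    # Method-of-moments fit of Pearson type 3 (a shifted gamma):
    # shape = (2/skew)^2, scale = sd*skew/2, location = mean - 2*sd/skew.
    n = len(xs)
    mu = sum(xs) / n
    sd = (sum((x - mu) ** 2 for x in xs) / (n - 1)) ** 0.5
    g = (n / ((n - 1.0) * (n - 2.0))) * sum((x - mu) ** 3 for x in xs) / sd ** 3
    return (2.0 / g) ** 2, sd * g / 2.0, mu - 2.0 * sd / g

random.seed(1)
# shifted-gamma sample: location 10, scale 1.5, shape 4 (skew = 2/sqrt(4) = 1)
xs = [10.0 + random.gammavariate(4.0, 1.5) for _ in range(20000)]
shape, scale, loc = pearson3_mom(xs)
```

The sensitivity of the shape parameter to the sample skew (which is itself noisy) is exactly the weakness that motivates alternative estimators such as the extreme-order-statistic approach of the abstract.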
Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky
2016-03-01
Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ~2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.
UPDATING AND DOWNDATING FOR PARAMETER ESTIMATION WITH BOUNDED UNCERTAIN DATA
Institute of Scientific and Technical Information of China (English)
(author not listed)
2005-01-01
The bounded parameter estimation problem and its solution lead to more meaningful results. Its superior performance is due to the fact that the new method guarantees that the effect of the uncertainties will never be unnecessarily overestimated. We then consider how to update and downdate the bounded parameter estimation problem. When updating and downdating of the SVD are applied to the new problem, special techniques are used to avoid forming U and V explicitly, thus increasing the algorithm's performance. Because of the link between bounded parameter estimation and the Tikhonov regularization procedure, we point out that our algorithms can also be used to modify the regularization problem.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system: the remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and the number and location of sensor readings can be determined from the accuracies of the parameter estimates.
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
Calderon, Christopher P.
2013-07-01
Several single-molecule studies aim to reliably extract parameters characterizing molecular confinement or transient kinetic trapping from experimental observations. Pioneering works from single-particle tracking (SPT) in membrane diffusion studies [Kusumi et al., Biophys. J. 65, 2021 (1993)] appealed to mean square displacement (MSD) tools for extracting diffusivity and other parameters quantifying the degree of confinement. More recently, the practical utility of systematically treating multiple noise sources (including noise induced by random photon counts) through likelihood techniques has been more broadly realized in the SPT community. However, bias induced by finite-time-series sample sizes (unavoidable in practice) has not received great attention. Mitigating parameter bias induced by finite sampling is important to any scientific endeavor aiming for high accuracy, but correcting for bias is also often an important step in the construction of optimal parameter estimates. In this article, it is demonstrated how a popular model of confinement can be corrected for finite-sample bias in situations where the underlying data exhibit Brownian diffusion and observations are measured with non-negligible experimental noise (e.g., noise induced by finite photon counts). The work of Tang and Chen [J. Econometrics 149, 65 (2009)] is extended to correct for bias in the estimated "corral radius" (a parameter commonly used to quantify confinement in SPT studies) in the presence of measurement noise. It is shown that the approach presented is capable of reliably extracting the corral radius using only hundreds of discretely sampled observations in situations where other methods (including MSD and Bayesian techniques) would encounter serious difficulties. The ability to accurately statistically characterize transient confinement suggests additional techniques for quantifying confined and/or hop
A new estimate of the parameters in linear mixed models
Institute of Scientific and Technical Information of China (English)
王松桂; 尹素菊
2002-01-01
In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. The new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.
Purves, R D
1994-02-01
Noncompartmental investigation of the distribution of residence times from concentration-time data requires estimation of the second noncentral moment (AUM2C) as well as the area under the curve (AUC) and the area under the moment curve (AUMC). The accuracy and precision of 12 numerical integration methods for AUM2C were tested on simulated noisy data sets representing bolus, oral, and infusion concentration-time profiles. The root-mean-squared errors given by the best methods were only slightly larger than the corresponding errors in the estimation of AUC and AUMC. AUM2C extrapolated "tail" areas as estimated from a log-linear fit are biased, but the bias is minimized by application of a simple correction factor. The precision of estimates of variance of residence times (VRT) can be severely impaired by the variance of the extrapolated tails. VRT is therefore not a useful parameter unless the tail areas are small or can be shown to be estimated with little error. Estimates of the coefficient of variation of residence times (CVRT) and its square (CV2) are robust in the sense of being little affected by errors in the concentration values. The accuracy of estimates of CVRT obtained by optimum numerical methods is equal to or better than that of AUC and mean residence time estimates, even in data sets with large tail areas.
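A minimal sketch of the moment calculations above (trapezoidal rule over the observed range only; the log-linear tail extrapolation and bias correction discussed in the abstract are omitted, and the demo curve is a hypothetical unit bolus):

```python
import math

def trapz(ys, ts):
    return sum(0.5 * (ys[i] + ys[i + 1]) * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

def residence_time_stats(ts, cs):
    # AUC, AUMC and AUM2C from concentration-time data, then the
    # residence-time summaries MRT, VRT and CVRT.
    auc = trapz(cs, ts)
    aumc = trapz([t * c for t, c in zip(ts, cs)], ts)
    aum2c = trapz([t * t * c for t, c in zip(ts, cs)], ts)
    mrt = aumc / auc
    vrt = aum2c / auc - mrt ** 2
    return auc, mrt, vrt, vrt ** 0.5 / mrt

# first-order elimination C(t) = exp(-t): analytically AUC = 1, AUMC = 1,
# AUM2C = 2, hence MRT = 1, VRT = 1 and CVRT = 1
ts = [i * 0.01 for i in range(5001)]          # 0 .. 50
cs = [math.exp(-t) for t in ts]
auc, mrt, vrt, cvrt = residence_time_stats(ts, cs)
```

The t² weighting in AUM2C is what makes VRT so sensitive to the unobserved tail, which is the abstract's central caution.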
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
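A heavily simplified scalar analogue of bias-aware filtering, not the paper's two-stage discrete/ensemble scheme: append a constant observation bias to the state vector and run one linear Kalman filter over both (all model values below are assumed for illustration):

```python
import random

# Truth: x_{k+1} = a*x_k + w_k; every observation carries a constant bias.
random.seed(3)
a, q, r = 0.9, 0.04, 0.25
true_b = 1.0

xhat = [0.0, 0.0]                          # estimates of [x, b]
P = [[1.0, 0.0], [0.0, 1.0]]               # error covariance
x = 0.0
for _ in range(2000):
    x = a * x + random.gauss(0.0, q ** 0.5)
    z = x + true_b + random.gauss(0.0, r ** 0.5)   # biased observation
    # predict: F = [[a, 0], [0, 1]], process noise only on x
    xhat = [a * xhat[0], xhat[1]]
    P = [[a * a * P[0][0] + q, a * P[0][1]],
         [a * P[1][0], P[1][1]]]
    # update with H = [1, 1]: the observation sees state plus bias
    s = P[0][0] + P[0][1] + P[1][0] + P[1][1] + r
    K = [(P[0][0] + P[0][1]) / s, (P[1][0] + P[1][1]) / s]
    innov = z - (xhat[0] + xhat[1])
    xhat = [xhat[0] + K[0] * innov, xhat[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0] - K[0] * P[1][0],
          (1 - K[0]) * P[0][1] - K[0] * P[1][1]],
         [(1 - K[1]) * P[1][0] - K[1] * P[0][0],
          (1 - K[1]) * P[1][1] - K[1] * P[0][1]]]
# xhat[1] now approximates the observation bias true_b
```

The bias is identifiable here only because the state dynamics (a < 1) are known; the paper's partitioning of the innovation into state, forecast-bias and observation-bias updates addresses the harder joint case.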
MLE's bias pathology motivates MCMLE
Yatracos, Yannis G.
2013-01-01
Maximum likelihood estimates are often biased. It is shown that this pathology is inherent to the traditional ML estimation method for two or more parameters, thus motivating from a different angle the use of MCMLE.
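The classic example of this pathology is the ML variance of a normal sample, which divides by n and is therefore biased low by the factor (n−1)/n; a quick simulation check (sample sizes are arbitrary):

```python
import random

def mle_var(xs):
    # ML estimator of a normal variance divides by n, not n - 1
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(7)
n, reps = 5, 20000
avg = sum(mle_var([random.gauss(0.0, 1.0) for _ in range(n)])
          for _ in range(reps)) / reps
# E[mle_var] = (n - 1)/n * sigma^2 = 0.8 here, not the true value 1.0
```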
Another Look at the EWMA Control Chart with Estimated Parameters
Saleh, N.A.; Mahmoud, M.A.; Jones-Farmer, L.A.; Zwetsloot, I.; Woodall, W.H.
2015-01-01
The authors assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the standard deviation of the average run length (SDARL) and percentiles of the ARL distribution when the process parameters are estimated.
The bias of the unbiased estimator: a study of the iterative application of the BLUE method
Lista, Luca
2014-01-01
The best linear unbiased estimator (BLUE) is a popular statistical method adopted to combine multiple measurements of the same observable, taking into account individual uncertainties and their correlation. The method is unbiased by construction if the true uncertainties and their correlation are known, but it may exhibit a bias if uncertainty estimates are used in place of the true ones, in particular if those uncertainties depend on the true value of the measured quantity. This is the case for instance when contributions to the total uncertainty are known as relative uncertainties. In those cases, an iterative application of the BLUE method may reduce the bias of the combined measurement. The impact of the iterative approach compared to the standard BLUE application is studied for a wide range of possible values of uncertainties and their correlation in the case of the combination of two measurements.
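A sketch of the two-measurement BLUE and its iterative variant for relative uncertainties (the weight formula is the standard two-point BLUE; the iteration count and starting point below are arbitrary choices, not prescriptions from the article):

```python
def blue2(x1, s1, x2, s2, rho):
    # BLUE combination of two correlated measurements with absolute
    # uncertainties s1, s2 and correlation rho (requires s1 != s2 or rho != 1)
    d = s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2
    w1 = (s2 * s2 - rho * s1 * s2) / d
    return w1 * x1 + (1.0 - w1) * x2

def iterative_blue(x1, e1, x2, e2, rho, iters=20):
    # Relative uncertainties e_i: sigma_i = e_i * xhat is re-derived from
    # the current combined value at each iteration, reducing the bias
    # that arises when sigma_i is anchored to the individual measurements.
    xhat = blue2(x1, e1 * x1, x2, e2 * x2, rho)  # start: sigma_i = e_i * x_i
    for _ in range(iters):
        xhat = blue2(x1, e1 * xhat, x2, e2 * xhat, rho)
    return xhat
```

With equal relative uncertainties and no correlation the iteration converges to the plain average, whereas the non-iterated combination is pulled toward the lower measurement because its absolute uncertainty looks smaller.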
Energy Technology Data Exchange (ETDEWEB)
Bispo, Regina; Huso, Manuela; Palminha, Gustavo; Som, Nicholas; Ladd, Lew; Bernardino, Joana; Marques, Tiago A.; Pestana, Dinis
2011-07-01
Full text: In monitoring studies at wind farms, the estimation of bird and bat mortality caused by collision has crucial importance. The estimates of annual fatalities provide information about direct impacts of particular projects, allow comparisons between research studies, enable impact trend studies, provide a basis for legislation, and enable comparison with the impacts caused by other human activities. In order to estimate the mortality rate correctly, the observed number of carcasses must be adjusted both for scavenging removal and for search efficiency. To diminish estimation bias, recent studies advise new statistical procedures regarding the scavenging correction factor (Bispo et al., 2010) and the estimator of fatality (Huso, 2010). In this context, the complexity associated with the procedure may hinder its use. Consequently, to help end users apply the proposed methodologies, we present an application that provides a friendly interface for implementing the statistical procedure in the R Environment for Statistical Computing, ultimately leading to the estimation of fatality. The user must provide the carcass persistence trial data, the searcher efficiency trial data, and the gathered carcass data. From those, the application estimates the scavenging removal correction factor based on the best-fitted parametric survival model (Bispo et al., 2010), and the final output provides fatality estimates using the estimator proposed by Huso (2010). During the conference a laptop will be available to promote participants' hands-on contact with the software. (Author)
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
Directory of Open Access Journals (Sweden)
Kurt James Werner
2016-10-01
Full Text Available The magnitude of the Discrete Fourier Transform (DFT) of a discrete-time signal has a limited frequency definition. Quadratic interpolation over the three DFT samples surrounding magnitude peaks improves the estimation of the parameters (frequency and amplitude) of resolved sinusoids beyond that limit. Interpolating on a rescaled magnitude spectrum using a logarithmic scale has been shown to improve those estimates. In this article, we show how to heuristically tune a power scaling parameter to outperform linear and logarithmic scaling at an equivalent computational cost. Although this power scaling factor is computed heuristically rather than analytically, it is shown to depend in a structured way on window parameters. Invariance properties of this family of estimators are studied and the existence of a bias due to noise is shown. Comparing to two state-of-the-art estimators, we show that an optimized power scaling has a lower systematic bias and lower mean-squared error in noisy conditions for ten out of twelve common windowing functions.
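A sketch of quadratic peak interpolation with a power-scaling knob (the exponent 0.1 in the test below is an arbitrary compressive choice for illustration, not the tuned value from the article; a rectangular window and the signal parameters are assumptions):

```python
import cmath
import math

def qifft_peak(mag, k, power=1.0):
    # Fit a parabola through the (power-rescaled) magnitudes at bins
    # k-1, k, k+1 and return the interpolated peak location in bins.
    a, b, g = (m ** power for m in (mag[k - 1], mag[k], mag[k + 1]))
    p = 0.5 * (a - g) / (a - 2.0 * b + g)     # fractional offset, |p| <= 0.5
    return k + p

# demo: rectangular-windowed complex tone at a non-integer frequency
N, f0 = 64, 10.3                              # f0 in bins
x = [cmath.exp(2j * math.pi * f0 * n / N) for n in range(N)]
mag = [abs(sum(xn * cmath.exp(-2j * math.pi * k * n / N)
               for n, xn in enumerate(x))) for k in range(N)]
k = max(range(N), key=lambda i: mag[i])       # coarse peak bin
```

On this configuration the linear-scale estimate is pulled noticeably toward the bin center, and a compressive exponent reduces that bias, which is the effect the article quantifies across windows.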
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
the parameters, including the noise terms. The parameter estimation method is a maximum likelihood (ML) method where the likelihood function is evaluated using a Kalman filter technique. The ML method estimates the parameters in a prediction error setting, i.e. the sum of squared prediction errors is minimized. … For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation errors is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulation. Hence, depending on the purpose it is possible to select whether … the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series …
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Beef quality parameters estimation using ultrasound and color images
Nunes, Jose Luis; Piquerez, Martín; Pujadas, Leonardo; Armstrong, Eileen; Fernández, Alicia; Lecumberry, Federico
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. ...
How cognitive biases can distort environmental statistics: introducing the rough estimation task.
Wilcockson, Thomas D W; Pothos, Emmanuel M
2016-04-01
The purpose of this study was to develop a novel behavioural method to explore cognitive biases. The task, called the Rough Estimation Task, simply involves presenting participants with a list of words that can be in one of three categories: appetitive words (e.g. alcohol, food, etc.), neutral related words (e.g. musical instruments) and neutral unrelated words. Participants read the words and are then asked to state estimates for the percentage of words in each category. Individual differences in the propensity to overestimate the proportion of appetitive stimuli (alcohol-related or food-related words) in a word list were associated with behavioural measures (i.e. alcohol consumption, hazardous drinking, BMI, external eating and restrained eating, respectively), thereby providing evidence for the validity of the task. The task was also found to be associated with an eye-tracking attentional bias measure. The Rough Estimation Task is motivated in relation to intuitions with regard to both the behaviour of interest and the theory of cognitive biases in substance use.
Directory of Open Access Journals (Sweden)
Federico Scarpa
2015-01-01
Full Text Available The identification of thermophysical properties of materials in dynamic experiments can be conveniently performed by the inverse solution of the associated heat conduction problem (IHCP. The inverse technique demands the knowledge of the initial temperature distribution within the material. As only a limited number of temperature sensors (or no sensor at all are arranged inside the test specimen, the knowledge of the initial temperature distribution is affected by some uncertainty. This uncertainty, together with other possible sources of bias in the experimental procedure, will propagate in the estimation process and the accuracy of the reconstructed thermophysical property values could deteriorate. In this work the effect on the estimated thermophysical properties due to errors in the initial temperature distribution is investigated along with a practical method to quantify this effect. Furthermore, a technique for compensating this kind of bias is proposed. The method consists in including the initial temperature distribution among the unknown functions to be estimated. In this way the effect of the initial bias is removed and the accuracy of the identified thermophysical property values is highly improved.
A FAST PARAMETER ESTIMATION ALGORITHM FOR POLYPHASE CODED CW SIGNALS
Institute of Scientific and Technical Information of China (English)
Li Hong; Qin Yuliang; Wang Hongqiang; Li Yanpeng; Li Xiang
2011-01-01
A fast parameter estimation algorithm is discussed for a polyphase coded Continuous Waveform (CW) signal in Additive White Gaussian Noise (AWGN). The proposed estimator is based on the sum of the modulus square of the ambiguity function at the different Doppler shifts. An iterative refinement stage is proposed to avoid the effect of the spurious peaks that arise when the summation length of the estimator exceeds the subcode duration. The theoretical variance of the subcode rate estimate is derived. The Monte-Carlo simulation results show that the proposed estimator is highly accurate and effective at moderate Signal-to-Noise Ratio (SNR).
Simultaneous optimal experimental design for in vitro binding parameter estimation.
Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C
2013-10-01
We describe the simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples.
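A toy version of D-optimal sampling-time selection can be sketched with a Fisher-information determinant criterion. The one-site association model, the "true" parameter values, and the candidate times below are assumptions, and this is far simpler than the mixed-effects designs PopED computes; it only illustrates why optimal measurement times shift with the binding rates.

```python
import numpy as np
from itertools import combinations

# One-site association model B(t) = Bmax * (1 - exp(-kobs * t)).
bmax, kobs = 100.0, 0.5          # assumed "true" values used for local design

def fisher(times):
    # Fisher information (up to a noise constant): J^T J for the Jacobian of
    # B(t) with respect to (Bmax, kobs), one row per sampling time.
    t = np.asarray(times, dtype=float)
    J = np.column_stack([1.0 - np.exp(-kobs * t), bmax * t * np.exp(-kobs * t)])
    return J.T @ J

# D-optimality: pick the pair of candidate times maximizing det(information).
candidates = np.linspace(0.5, 10.0, 20)
best = max(combinations(candidates, 2), key=lambda ts: np.linalg.det(fisher(ts)))
print([round(t, 1) for t in best])
```

The optimum pairs one early time near 1/kobs, where the curve is most sensitive to the rate constant, with one late time near saturation, which pins down Bmax; changing kobs moves the early point, mirroring the abstract's observation.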
A new method for parameter estimation in nonlinear dynamical equations
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM), exploiting the self-organizing, adaptive and self-learning features of EM that are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by numerical tests on the classic chaotic model, the Lorenz equation (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether partial parameters or all parameters are unknown in the Lorenz equation. Moreover, the new method has a good convergence rate. Noises are inevitable in observational data, so the influence of observational noises on the performance of the presented method has been investigated. The results indicate that strong noises, such as a signal-to-noise ratio (SNR) of 10 dB, have a larger influence on parameter estimation than relatively weak noises. However, the precision of the parameter estimation remains acceptable for relatively weak noises, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
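As a rough illustration of evolutionary parameter estimation in a chaotic system, a minimal (1+λ) evolution strategy, much simpler than the evolutionary modelling the paper develops, can recover the Lorenz parameter σ from a noise-free synthetic trajectory. The integrator, trajectory length, and search settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(sigma, rho=28.0, beta=8.0 / 3.0, dt=0.01, n=200):
    # Euler integration of the Lorenz-63 equations; only sigma is unknown here.
    s = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n, 3))
    for i in range(n):
        x, y, z = s
        s = s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        traj[i] = s
    return traj

observed = simulate(10.0)            # synthetic "data" with true sigma = 10

def cost(sigma):
    # Sum of squared deviations between candidate and observed trajectories.
    return float(np.sum((simulate(sigma) - observed) ** 2))

# Minimal (1+lambda) evolution strategy: mutate the parent, keep the best.
best = 8.0                           # deliberately off initial guess
for _ in range(60):
    children = best + rng.normal(scale=0.5, size=8)
    best = min([best, *children], key=cost)
print(round(best, 3))
```

With a short assimilation window the cost surface around the true value is well behaved, so even this crude mutate-and-select loop converges; the paper's EM adds the adaptive machinery needed for harder, noisy cases.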
Ringard, Justine; Becker, Melanie; Seyler, Frederique; Linguet, Laurent
2016-04-01
Currently satellite-based precipitation estimates exhibit considerable biases, and there have been many efforts to reduce these biases by merging surface gauge measurements with satellite-based estimates. In the Guiana Shield all products exhibited better performances during the dry season (August-December). All products greatly overestimate very low intensities (50 mm). Moreover the responses of each product differ according to hydro-climatic regime. The aim of this study is to spatially correct the bias of precipitation and to compare various correction methods so as to define the best method for each rainfall characteristic being corrected (intensity, frequency). Four satellite products are used: the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) research product (3B42V7) and real-time product (3B42RT), the Precipitation Estimation from Remotely-Sensed Information using Artificial Neural Network (PERSIANN) and the NOAA Climate Prediction Center (CPC) Morphing technique (CMORPH), for six hydro-climatic regimes between 2001 and 2012. Several statistical transformations are used to correct the bias. Statistical transformations attempt to find a function h that maps a simulated variable Ps such that its new distribution equals the distribution of the observed variable Po. The first is a distribution-derived transformation mixing the Bernoulli and the Gamma distribution, where the Bernoulli distribution models the probability of precipitation occurrence and the Gamma distribution models precipitation intensities. The second is a quantile-quantile relation using a parametric transformation, and the last is a common approach using the empirical CDF of observed and modelled values instead of assuming parametric distributions. For each correction, 30% of both the simulated and observed data sets is used to calibrate and the remainder to validate. The validation is tested with statistical
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
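For reference, the single-response Emax relation underlying the bivariate model, and a naive equation-by-equation fit (here a grid search for ED50 with the other parameters held fixed), can be sketched as below. This is the baseline the system estimation approach is compared against; all values are illustrative, not the diabetes data of the paper.

```python
import numpy as np

def emax_model(dose, e0, emax, ed50):
    # Standard single-response Emax curve: effect rises hyperbolically to e0 + emax.
    return e0 + emax * dose / (ed50 + dose)

rng = np.random.default_rng(2)
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
y = emax_model(dose, 1.0, 9.0, 30.0) + rng.normal(scale=0.1, size=dose.size)

# Crude equation-by-equation fit: grid-search least squares for ED50 alone,
# holding e0 and emax at their true values for illustration.
grid = np.linspace(5.0, 100.0, 951)
sse = [np.sum((y - emax_model(dose, 1.0, 9.0, g)) ** 2) for g in grid]
ed50_hat = float(grid[int(np.argmin(sse))])
print(round(ed50_hat, 1))
```

System estimation generalizes this by fitting both response relations jointly with a common (co)variance structure, which is where the precision gain under correlated errors comes from.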
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
On the Nature of SEM Estimates of ARMA Parameters.
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Parameter estimation of hidden periodic model in random fields
Institute of Scientific and Technical Information of China (English)
何书元
1999-01-01
The two-dimensional hidden periodic model is an important model in random fields. The model is used in two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters of the model is designed. The strong consistency of the estimators is proved.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Distribution Line Parameter Estimation Under Consideration of Measurement Tolerances
DEFF Research Database (Denmark)
Prostejovsky, Alexander; Gehrke, Oliver; Kosek, Anna Magdalena
2016-01-01
State estimation and control approaches in electric distribution grids rely on precise electric models that may be inaccurate. This work presents a novel method of estimating distribution line parameters using only root mean square voltage and power measurements under consideration of measurement...
Parameter estimation of gravitational wave compact binary coalescences
Haster, Carl-Johan; LIGO Scientific Collaboration Collaboration
2017-01-01
The first detections of gravitational waves from coalescing binary black holes have allowed unprecedented inference on the astrophysical parameters of such binaries. Given recent updates in detector capabilities, gravitational wave model templates and data analysis techniques, in this talk I will describe the prospects of parameter estimation of compact binary coalescences during the second observation run of the LIGO-Virgo collaboration.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...
Estimation of shape model parameters for 3D surfaces
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;
2008-01-01
Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...
Estimation of bias errors in measured airplane responses using maximum likelihood method
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with simulated and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
Regressions by leaps and bounds and biased estimation techniques in yield modeling
Marquina, N. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. It was observed that OLS was not adequate as an estimation procedure when the independent or regressor variables were involved in multicollinearities, which was shown to manifest as small eigenvalues of the extended correlation matrix A'A. It was demonstrated that biased estimation techniques and all-possible-subset regression could help in finding a suitable model for predicting yield. Latent root regression proved an excellent tool for determining how many predictive and nonpredictive multicollinearities were present.
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC method. We applied this approach to the biological pathways involved in the left ventricle (LV response to myocardial infarction (MI and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
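The Bayesian machinery described above can be illustrated with a random-walk Metropolis sampler (one simple MCMC variant) fitting two parameters of a Hill-type rate law to noisy synthetic data. The Hill coefficient, data grid, noise level, and priors are all assumptions chosen for a compact sketch, not the paper's LV/MI pathway model.

```python
import numpy as np

rng = np.random.default_rng(6)

def hill(x, vmax, k):
    # Hill-type rate with the Hill coefficient fixed at 2 (illustrative).
    return vmax * x**2 / (k**2 + x**2)

x = np.linspace(0.2, 4.0, 15)
noise = 0.05
y = hill(x, 1.0, 1.5) + rng.normal(scale=noise, size=x.size)   # synthetic data

def log_post(theta):
    # Gaussian likelihood with a flat prior on the positive quadrant.
    vmax, k = theta
    if vmax <= 0 or k <= 0:
        return -np.inf
    resid = y - hill(x, vmax, k)
    return -0.5 * np.sum(resid**2) / noise**2

# Random-walk Metropolis: propose a jitter, accept with the MH ratio.
theta = np.array([0.5, 0.5])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])        # discard burn-in
print(np.round(post.mean(axis=0), 2))
```

The posterior mean recovers the generating values, and the retained samples also give the parameter uncertainty directly, which is the practical payoff of the MCMC approach over point estimation.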
Quantifying lost information due to covariance matrix estimation in parameter inference
Sellentin, Elena; Heavens, Alan F.
2017-02-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precision. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters, and running as many fast simulations as possible.
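The Fisher-matrix debiasing that the abstract argues is insufficient is typically the Hartlap-style correction for an inverse sample covariance. A sketch of that first-order factor, with a data dimension of 100 assumed purely for illustration:

```python
# Hartlap-style debiasing factor for the inverse of a sample covariance matrix
# estimated from n_sims simulations of a p_data-dimensional data vector:
# E[S^{-1}] = C^{-1} * (n_sims - 1) / (n_sims - p_data - 2), so multiplying the
# inverse sample covariance by the factor below removes its first-order bias.
def hartlap_factor(n_sims, p_data):
    return (n_sims - p_data - 2) / (n_sims - 1)

print(round(hartlap_factor(2900, 100), 4))  # many simulations: factor near 1
print(round(hartlap_factor(110, 100), 4))   # barely enough simulations: severe
```

The point of the paper is that even after this multiplicative debiasing, the scatter of the estimated covariance still propagates into the contours, which is why the authors build direct estimators for the remaining loss.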
Parameter Estimation and Experimental Design in Groundwater Modeling
Institute of Scientific and Technical Information of China (English)
SUN Ne-zheng
2004-01-01
This paper reviews the latest developments on parameter estimation and experimental design in the field of groundwater modeling. Special considerations are given when the structure of the identified parameter is complex and unknown. A new methodology for constructing useful groundwater models is described, which is based on the quantitative relationships among the complexity of model structure, the identifiability of parameter, the sufficiency of data, and the reliability of model application.
NEW DOCTORAL DEGREE Parameter estimation problem in the Weibull model
Marković, Darija
2009-01-01
In this dissertation we consider the problem of the existence of best parameters in the Weibull model, one of the most widely used statistical models in reliability theory and life data theory. Particular attention is given to a 3-parameter Weibull model. We have listed some of the many applications of this model. We have described some of the classical methods for estimating parameters of the Weibull model, two graphical methods (Weibull probability plot and hazard plot), and two analyt...
Littenberg, Tyson B; Coughlin, Scott; Kalogera, Vicky
2016-01-01
Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by $>5\sigma$ using simple-precession waveforms and in excess of $20\sigma$ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find searched areas are up to a ...
Global parameter estimation methods for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Poovathingal Suresh
2010-08-01
Full Text Available Abstract Background The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter
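A minimal sketch of the cumulative-density-distance idea can use a pure-birth (Poisson-count) toy model in place of a full chemical master equation: the rate is chosen so that the simulated count CDF best matches the empirical data CDF. The rate, observation time, sample sizes, and search grid below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stochastic model: molecule counts after time T in a pure-birth process
# with rate k are Poisson(k * T). Estimate k by matching empirical CDFs.
k_true, T, n_cells = 3.0, 2.0, 500
data = rng.poisson(k_true * T, size=n_cells)        # "single-cell" counts

grid = np.arange(0, 25)
data_cdf = np.searchsorted(np.sort(data), grid, side="right") / n_cells

def cdf_distance(k, n_sim=4000):
    # Squared distance between the data CDF and a Monte Carlo CDF at rate k,
    # mirroring the finite-sampling situation discussed in the abstract.
    sim = rng.poisson(k * T, size=n_sim)
    sim_cdf = np.searchsorted(np.sort(sim), grid, side="right") / n_sim
    return float(np.sum((sim_cdf - data_cdf) ** 2))

ks = np.linspace(1.0, 5.0, 41)
k_hat = float(ks[int(np.argmin([cdf_distance(k) for k in ks]))])
print(round(k_hat, 1))
```

Matching whole distributions rather than a single likelihood value is what makes this family of estimators robust to multimodality, at the cost of Monte Carlo noise in the objective.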
Parameter Estimation of the Extended Vasiček Model
Rujivan, Sanae
2010-01-01
In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function of a discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the appr...
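For the constant-parameter Vasiček special case, the transition density is Gaussian and the exact discretization is an AR(1), so conditional maximum likelihood reduces to linear regression on consecutive observations; the extended, time-dependent model of the paper is what requires the density expansion. The parameter values and sample size below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exact discretization of a constant-parameter Vasicek process:
# r[t+dt] = theta + (r[t] - theta) * phi + eps,  phi = exp(-kappa * dt).
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.02, 1.0 / 252.0, 50_000
phi = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1.0 - phi**2) / (2.0 * kappa))   # conditional std dev

r = np.empty(n)
r[0] = theta
eps = sd * rng.standard_normal(n)
for i in range(1, n):
    r[i] = theta + (r[i - 1] - theta) * phi + eps[i]

# Conditional ML reduces to AR(1) least squares on consecutive observations.
x, y = r[:-1], r[1:]
phi_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
kappa_hat = -np.log(phi_hat) / dt
theta_hat = (y.mean() - phi_hat * x.mean()) / (1.0 - phi_hat)
print(round(kappa_hat, 2), round(theta_hat, 3))
```

Note the typical pattern for mean-reverting diffusions: the long-run level theta is recovered sharply, while the speed kappa is the hard parameter, with errors amplified by the 1/dt factor.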
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
By taking the subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has smaller asymptotic error than the least square methods. A simulation example shows that the corrector of maximum likelihood estimation is of higher approximating precision to the true parameters than the least square methods.
Optimization of exchange bias in Co/CoO magnetic nanocaps by tuning deposition parameters
Sharma, A.; Tripathi, J.; Ugochukwu, K. C.; Tripathi, S.
2017-03-01
In the present work, we report exchange bias tuning by varying thin film deposition parameters such as synthesis method and underlying layer patterning. The patterned substrates for this study were prepared by self-assembly of polystyrene (PS) latex spheres (~530 nm) on Si (100) substrate. The desired magnetic nanocaps composed of CoO/Co bilayer film on these patterned substrates were prepared by molecular beam epitaxy under ultra-high vacuum conditions. For this, a Co layer of 10 nm thickness was deposited on the substrates and then oxidized either in-situ, giving a CoO/Co/PS in-situ oxidized film, or ex-situ in ambient atmosphere, giving a CoO/Co/PS naturally oxidized film. Simultaneously, reference thin films of Co (~10 nm) were also prepared on plane Si substrate and similar oxidation treatments were performed on them. The magnetic properties studied using the SQUID technique revealed higher exchange bias (~1736 Oe) in the in-situ oxidized Co/PS film as compared to that in the naturally oxidized Co/PS film (~1544 Oe) and also compared to the reference films. The observed variations in the magnetic properties are explained in terms of surface-patterning-induced structural changes of the deposited films and the different oxidation methods.
Estimation of the input parameters in the Feller neuronal model
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
Estimating Illumination Parameters Using Spherical Harmonics Coefficients in Frequency Space
Institute of Scientific and Technical Information of China (English)
XIE Feng; TAO Linmi; XU Guangyou
2007-01-01
An algorithm is presented for estimating the direction and strength of a point light source together with the strength of the ambient illumination. Existing approaches evaluate these illumination parameters directly in the high-dimensional image space, while we estimate the parameters in two steps: first by projecting the image to an orthogonal linear subspace based on spherical harmonic basis functions, and then by calculating the parameters in the low-dimensional subspace. The test results using the CMU PIE database and Yale Database B show the stability and effectiveness of the method. The resulting illumination information can be used to synthesize more realistic relighting images and to recognize objects under variable illumination.
Directory of Open Access Journals (Sweden)
Zhuo Qi Lee
Full Text Available Biased random walks have been studied extensively over the past decade, especially in the transport and communication networks communities. The mean first passage time (MFPT) of a biased random walk is an important performance indicator in those domains. While the fundamental matrix approach gives a precise solution to the MFPT, the computation is expensive and the solution lacks interpretability. Other approaches based on mean field theory relate the MFPT to the node degree alone. However, nodes with the same degree may have very different local weight distributions, which may result in vastly different MFPTs. We derive an approximate bound to the MFPT of a biased random walk with short relaxation time on a complex network, where the biases are controlled by arbitrarily assigned node weights. We show that the MFPT of a node in this general case is closely related not only to its node degree, but also to its local weight distribution. The MFPTs obtained from computer simulations also agree with the new theoretical analysis. Our result enables fast estimation of the MFPT, which is useful especially to differentiate between nodes that have very different local node weight distributions even though they share the same node degree.
Robust Parameter and Signal Estimation in Induction Motors
DEFF Research Database (Denmark)
Børsting, H.
…in nonlinear systems, have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods …-time approximation. All methods and theories have been evaluated on the basis of experimental results obtained from measurements on a laboratory setup. Standard methods have been modified and combined to obtain usable solutions to the estimation problems. The major results of the work can be summarized as follows: - identifiability has been treated in theory and practice in connection with parameter and signal estimation in induction motors. - a non-recursive prediction error method has successfully been used to estimate physically related parameters in a continuous-time model of the induction motor. The speed of the rotor has…
Ma, Yanyuan
2013-09-01
We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.
Iterative methods for distributed parameter estimation in parabolic PDE
Energy Technology Data Exchange (ETDEWEB)
Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Parameter Estimation of Damped Compound Pendulum Using Bat Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available In this study, parameter identification of the damped compound pendulum system is performed using one of the most promising nature-inspired algorithms, the Bat Algorithm (BA). The procedure used to achieve parameter identification of the experimental system consists of input-output data collection, ARX model order selection, and parameter estimation using the BA method. A PRBS signal is used as the input signal to regulate the motor speed, while the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the autoregressive with exogenous input (ARX) model. The performance of the model is validated using the mean squared error (MSE) between the actual and predicted output responses of the models. Finally, a comparative study is conducted between BA and a conventional estimation method, least squares (LS). Based on the results obtained, the MSE produced by BA outperforms that of the LS method.
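As a companion to the abstract above, here is a minimal sketch of the conventional least-squares baseline it compares against, assuming a first-order ARX structure y[t] = a·y[t-1] + b·u[t-1]; the model order, data, and true parameter values are invented for illustration and are not from the paper.

```python
# Hypothetical sketch: ordinary least-squares fit of a first-order ARX model,
# the conventional baseline the abstract compares the Bat Algorithm against.
def fit_arx1(u, y):
    """Solve the 2x2 normal equations for (a, b) by ordinary least squares."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        phi1, phi2 = y[t - 1], u[t - 1]          # regressors at time t
        s11 += phi1 * phi1; s12 += phi1 * phi2; s22 += phi2 * phi2
        r1 += phi1 * y[t]; r2 += phi2 * y[t]
    det = s11 * s22 - s12 * s12
    a = (s22 * r1 - s12 * r2) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

def mse(u, y, a, b):
    """Mean squared one-step-ahead prediction error (the validation metric)."""
    errs = [(y[t] - a * y[t - 1] - b * u[t - 1]) ** 2 for t in range(1, len(y))]
    return sum(errs) / len(errs)

# Noise-free data generated from a = 0.8, b = 0.5 should be recovered exactly.
u = [1.0, -1.0, 0.5, 2.0, -0.5, 1.5, 0.0, 1.0]
y = [0.0]
for t in range(1, len(u)):
    y.append(0.8 * y[t - 1] + 0.5 * u[t - 1])
a_hat, b_hat = fit_arx1(u, y)
```

With noisy data, a metaheuristic such as BA would instead search the (a, b) space directly, using this same MSE as its fitness function.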
Dynamic Load Model using PSO-Based Parameter Estimation
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model represented as a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of the actual load when appropriate parameters are used. The problem with this model, however, is that many parameters are required and they are not easy to estimate. We propose an estimation method based on Particle Swarm Optimization (PSO), a nonlinear optimization method, using voltage, active power, and reactive power data measured during voltage sags.
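To make the optimization step concrete, the following is a minimal PSO sketch in the spirit of the abstract: each particle's position is a candidate parameter vector and its fitness would be the mismatch between measured and simulated responses. Since the load-model equations are not given in the abstract, a simple quadratic mismatch stands in for the measurement error; all numbers are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: returns (best position, best fitness)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Recover hypothetical load parameters (2.0, -3.0) from a quadratic mismatch.
target = [2.0, -3.0]
best, best_f = pso(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)), 2)
```

In the paper's setting, `fitness` would integrate the load model for the candidate parameters and return the squared deviation from the measured voltage-sag responses.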
Budic, Lara; Didenko, Gregor; Dormann, Carsten F
2016-01-01
In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias in species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. The progressively smaller and more numerous cells at increasing latitudes influenced the importance of parameters in the models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
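The area-weighting idea in the abstract can be sketched directly: in a long-lat grid, the area of a degree cell shrinks in proportion to the cosine of its latitude, so weighting each cell by its true relative area counteracts the over-representation of high-latitude cells. The numbers below are illustrative, not from the paper.

```python
import math

def cell_weight(lat_deg):
    """Relative area of a long-lat grid cell centred at lat_deg (equator cell = 1)."""
    return math.cos(math.radians(lat_deg))

def weighted_mean(values, lats):
    """Area-weighted mean of a quantity observed in long-lat cells."""
    w = [cell_weight(la) for la in lats]
    return sum(v * wi for v, wi in zip(values, w)) / sum(w)
```

For example, a cell at 60° N carries half the weight of an equatorial cell, so an unweighted mean over such a grid would over-count polar observations by a factor of two.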
Directory of Open Access Journals (Sweden)
Colin Southwell
Full Text Available Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
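A minimal sketch of the re-construction step described above: a raw colony count is divided by an attendance adjustment factor, and the factor's uncertainty (as would be derived from time-lapse cameras) is propagated by Monte Carlo sampling. The normal distribution of the adjustment factor and all numbers are assumptions for illustration only.

```python
import random, statistics

def reconstruct_abundance(raw_count, adj_mean, adj_sd, n_draws=20000, seed=7):
    """Return (median, 95% interval) of raw_count / adjustment_factor."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        f = rng.gauss(adj_mean, adj_sd)
        if f > 0:                      # attendance fraction must be positive
            draws.append(raw_count / f)
    draws.sort()
    lo = draws[int(0.025 * len(draws))]
    hi = draws[int(0.975 * len(draws))]
    return statistics.median(draws), (lo, hi)

# e.g. 10,000 birds counted when roughly 80% +/- 5% of breeders attend
mid, (lo, hi) = reconstruct_abundance(10000, 0.80, 0.05)
```

Dividing by an attendance factor below one raises the estimate, and propagating the factor's uncertainty widens the interval, matching the abstract's finding that re-constructed estimates are both higher and more uncertain than the published ones.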
Espinoza, Néstor
2015-01-01
Limb-darkening is fundamental in determining transit lightcurve shapes, and is typically modeled by a variety of laws that parametrize the intensity profile of the star that is being transited. Confronted with a transit lightcurve, some authors fix the parameters of these laws, the so-called limb-darkening coefficients (LDCs), while others prefer to let them float in the lightcurve fitting procedure. Which of these is the best strategy, however, is still unclear, as well as how and by how much each of these can bias the retrieved transit parameters. In this work we attempt to clarify those points by first re-calculating these LDCs, comparing them to measured values from Kepler transit lightcurves using an algorithm that takes into account uncertainties in both the geometry of the transit and the parameters of the stellar host. We show there are significant departures from predicted model values, suggesting that our understanding of limb-darkening still needs to improve. Then, we show through simulations that ...
Institute of Scientific and Technical Information of China (English)
Li Wen XU; Song Gui WANG
2007-01-01
In this paper, the authors address the problem of the minimax estimator of linear combinations of stochastic regression coefficients and parameters in the general normal linear model with random effects. Under a quadratic loss function, the minimax property of linear estimators is investigated. In the class of all estimators, the minimax estimator of estimable functions, which is unique with probability 1, is obtained under a multivariate normal distribution.
Abate, Alexandra; Bridle, Sarah; Teodoro, Luis F. A.; Warren, Michael S.; Hendry, Martin
2008-10-01
We investigate methods to best estimate the normalization of the mass density fluctuation power spectrum (σ8) using peculiar velocity data from a survey like the six-degree Field Galaxy Velocity Survey (6dFGSv). We focus on two potential problems: (i) biases from non-linear growth of structure and (ii) the large number of velocities in the survey. Simulations of ΛCDM-like models are used to test the methods. We calculate the likelihood from a full covariance matrix of velocities averaged in grid cells. This simultaneously reduces the number of data points and smoothes out non-linearities which tend to dominate on small scales. We show how the averaging can be taken into account in the predictions in a practical way, and show the effect of the choice of cell size. We find that a cell size can be chosen that significantly reduces the non-linearities without significantly increasing the error bars on cosmological parameters. We compare our results with those from a principal components analysis following Watkins et al. and Feldman et al. to select a set of optimal moments constructed from linear combinations of the peculiar velocities that are least sensitive to the non-linear scales. We conclude that averaging in grid cells performs equally well. We find that for a survey such as 6dFGSv we can estimate σ8 with less than 3 per cent bias from non-linearities. The expected error on σ8 after marginalizing over Ωm is approximately 16 per cent.
A software for parameter estimation in dynamic models
Directory of Open Access Journals (Sweden)
M. Yuceer
2008-12-01
Full Text Available A common problem in dynamic systems is to determine the parameters in an equation used to represent experimental data. The goal is to determine the values of the model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages do not provide ease of use. A user-interactive parameter estimation program was therefore needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software package (PARES) has been developed in the MATLAB environment. When tested with extensive example problems from the literature, the suggested approach proved to provide good agreement between predicted and observed data within relatively little computing time and few iterations.
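The integration-based estimation loop described above can be sketched in a few lines: a candidate parameter is plugged into the model, the ODE is integrated numerically, and the sum of squared deviations from the data is minimized. The first-order decay model, the data, and the golden-section search are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def simulate(k, y0, times, dt=1e-3):
    """Euler integration of dy/dt = -k*y, sampled at the measurement times."""
    y, t, out = y0, 0.0, []
    for t_obs in times:
        while t < t_obs - 1e-12:
            y += dt * (-k * y)
            t += dt
        out.append(y)
    return out

def sse(k, y0, times, data):
    """Sum of squared deviations between simulated and measured values."""
    return sum((m - d) ** 2 for m, d in zip(simulate(k, y0, times), data))

def fit_k(y0, times, data, lo=0.0, hi=2.0, iters=60):
    """Golden-section search for the decay constant minimizing the SSE."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if sse(c, y0, times, data) < sse(d, y0, times, data):
            b = d
        else:
            a = c
    return (a + b) / 2

times = [0.5, 1.0, 1.5, 2.0]
data = [math.exp(-0.7 * t) for t in times]   # synthetic data with k = 0.7
k_hat = fit_k(1.0, times, data)
```

Real tools use stiff-capable integrators and gradient-based or global optimizers, but the structure, integrate inside the objective and optimize outside, is the same.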
Gharamti, M. E.; Tjiputra, J.; Bethke, I.; Samuelsen, A.; Skjelvan, I.; Bentsen, M.; Bertino, L.
2017-04-01
We develop an efficient data assimilation system that aims at quantifying the uncertainties of various biogeochemical states and parameters. We explore the use of four different ensemble estimation techniques for tuning poorly constrained ecosystem parameters using a one-dimensional configuration of the Ocean Biogeochemical General Circulation Model. The schemes are all EnKF-based, operating sequentially in time, but have different correction equations. The 1D model is used to simulate the biogeochemical cycle at three different stations in mid and high latitudes. We assimilate monthly climatological profiles of nitrate, silicate, phosphate and oxygen in addition to seasonal surface pCO2 data, between 2006 and 2010. We use the data to optimize eleven ecosystem parameters in addition to all state variables of the model, describing the dynamical processes of the water column. Our assimilation results suggest the following: (1) Among all tested schemes, the one-step-ahead smoothing-based ensemble Kalman filter (OSA-EnKF) is robust and the most accurate, providing consistent and reliable state-parameter ensemble realizations. (2) Given the large uncertainties associated with the ecosystem parameters, estimating only the state variables is generally inconclusive and biased. (3) The OSA-EnKF successfully recovers the observed seasonal variability of the ecosystem dynamics at all stations and helps optimize the parameters, eventually reducing the prediction errors of the nutrient concentrations. (4) The estimates of the parameters may have some temporally correlated features, and they can also vary spatially between different regions depending on the magnitude of the bias in the observed variables and other factors such as the intensity of the bloom period. We further show that the presented assimilation system has the potential to be used in global models.
Cefalu, Matthew; Dominici, Francesca
2014-07-01
In environmental epidemiology, we are often faced with 2 challenges. First, an exposure prediction model is needed to estimate the exposure to an agent of interest, ideally at the individual level. Second, when estimating the health effect associated with the exposure, confounding adjustment is needed in the health-effects regression model. The current literature addresses these 2 challenges separately. That is, methods that account for measurement error in the predicted exposure often fail to acknowledge the possibility of confounding, whereas methods designed to control confounding often fail to acknowledge that the exposure has been predicted. In this article, we consider exposure prediction and confounding adjustment in a health-effects regression model simultaneously. Using theoretical arguments and simulation studies, we show that the bias of a health-effect estimate is influenced by the exposure prediction model, the type of confounding adjustment used in the health-effects regression model, and the relationship between these 2. Moreover, we argue that even with a health-effects regression model that properly adjusts for confounding, the use of a predicted exposure can bias the health-effect estimate unless all confounders included in the health-effects regression model are also included in the exposure prediction model. While the results of this article were motivated by studies of environmental contaminants, they apply more broadly to any context where an exposure needs to be predicted.
The importance of estimating selection bias on prevalence estimates, shortly after a disaster.
Grievink, L.; Velden, P.G. van der; Yzermans, C.J.; Roorda, J.; Stellato, R.K.
2006-01-01
PURPOSE: The aim was to study selective participation and its effect on prevalence estimates in a health survey of affected residents 3 weeks after a man-made disaster in The Netherlands (May 13, 2000). METHODS: All affected adult residents were invited to participate. Survey (questionnaire) data we
Estimation of bias and variance of measurements made from tomography scans
Bradley, Robert S.
2016-09-01
Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
Cho, Hee-Suk
2015-01-01
We study the validity of the inspiral templates in gravitational wave data analysis for nonspinning binary black holes with Advanced LIGO sensitivity. We use the phenomenological waveform model, which contains the inspiral-merger-ringdown (IMR) phases defined in the Fourier domain. For parameter estimation purposes, we calculate the statistical errors assuming the IMR signals and IMR templates for binaries with total masses M ≤ 30 Msun. In particular, we explore the systematic biases caused by a mismatch between the IMR signal model (IMR) and the inspiral template model (Imerg), and investigate the impact on parameter estimation accuracy by comparing the biases with the statistical errors. For detection purposes, we calculate the fitting factors of the inspiral templates with respect to the IMR signals. We find that the valid criteria for Imerg templates are obtained by Mcrit ≈ 24 Msun (if M < Mcrit, the fitting factor is higher than 0.97) for detection and M < 26 Msun (where the systematic bias is ...
Albers, DJ
2011-01-01
A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
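The bias-estimation idea above can be illustrated numerically: the time-delayed mutual information computed from a finite histogram does not decay to zero at large lags but to a positive floor, and that floor, the MI between effectively independent samples, serves as the empirical bias estimate. The binning scheme and the i.i.d. test series below are illustrative choices, not those of the paper.

```python
import math, random

def mutual_information(x, y, bins=8):
    """Plug-in histogram estimate of I(X;Y) in nats."""
    lo_x, hi_x = min(x), max(x)
    lo_y, hi_y = min(y), max(y)
    bx = lambda v: min(int((v - lo_x) / (hi_x - lo_x + 1e-12) * bins), bins - 1)
    by = lambda v: min(int((v - lo_y) / (hi_y - lo_y + 1e-12) * bins), bins - 1)
    joint, px, py, n = {}, [0] * bins, [0] * bins, len(x)
    for xi, yi in zip(x, y):
        i, j = bx(xi), by(yi)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        mi += (c / n) * math.log(c * n / (px[i] * py[j]))
    return mi

def delayed_mi(series, tau, bins=8):
    """Time-delayed mutual information I(x_t; x_{t+tau})."""
    return mutual_information(series[:-tau], series[tau:], bins)

rng = random.Random(0)
series = [rng.random() for _ in range(5000)]     # i.i.d.: true delayed MI is 0
bias = delayed_mi(series, tau=1000)              # large-lag floor = bias estimate
```

For an i.i.d. series the true delayed MI is zero at every lag, so the small positive value returned here is purely estimator bias, which is exactly the quantity the proposed method sets out to measure.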
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
Traveltime approximations and parameter estimation for orthorhombic media
Masmoudi, Nabil
2016-05-30
Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to the Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.
Bias and robustness of uncertainty components estimates in transient climate projections
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections, along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying the different uncertainty sources in practice faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are, however, biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical analysis of variance (ANOVA) approaches: a single-time approach (STANOVA), based on the only data available for the considered projection lead time, and a time-series-based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
Adaptive distributed parameter and input estimation in linear parabolic PDEs
Mechhoud, Sarra
2016-01-01
In this paper, we discuss the on-line estimation of distributed source term, diffusion, and reaction coefficients of a linear parabolic partial differential equation using both distributed and interior-point measurements. First, new sufficient identifiability conditions of the input and the parameter simultaneous estimation are stated. Then, by means of Lyapunov-based design, an adaptive estimator is derived in the infinite-dimensional framework. It consists of a state observer and gradient-based parameter and input adaptation laws. The parameter convergence depends on the plant signal richness assumption, whereas the state convergence is established using a Lyapunov approach. The results of the paper are illustrated by simulation on tokamak plasma heat transport model using simulated data.
DEFF Research Database (Denmark)
De Bruin, M L; van Puijenbroek, E P; Egberts, A C G
2002-01-01
AIMS: This study used spontaneous reports of adverse events to estimate the risk of developing cardiac arrhythmias due to the systemic use of non-sedating antihistamine drugs, and compared the risk estimate before and after the regulatory action to recall the over-the-counter status of some… was not significantly higher than 1 (OR 1.37 [95% CI: 0.85, 2.23]), whereas the risk estimate calculated after the governmental decision did differ significantly from 1 (OR 4.19 [95% CI: 2.49, 7.05]). CONCLUSIONS: Our data suggest that non-sedating antihistamines might have an increased risk of inducing arrhythmias… Our findings, however, strongly suggest that the increased risk identified can at least partly be explained by reporting bias resulting from publications about, and mass media attention for, antihistamine-induced arrhythmias.
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description exists. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution.
Interval Estimations of the Two-Parameter Exponential Distribution
Directory of Open Access Journals (Sweden)
Lai Jiang
2012-01-01
Full Text Available In applied work, the two-parameter exponential distribution gives useful representations of many physical situations. Confidence intervals for the scale parameter and predictive intervals for a future independent observation have been studied by many, including Petropoulos (2011) and Lawless (1977), respectively. However, interval estimates for the threshold parameter have not been widely examined in the statistical literature. The aim of this paper is to, first, obtain the exact significance function of the scale parameter by renormalizing the p∗-formula. Then the approximate Studentization method is applied to obtain the significance function of the threshold parameter. Finally, a predictive density function of the two-parameter exponential distribution is derived. A real-life data set is used to show the implementation of the method. Simulation studies are then carried out to illustrate the accuracy of the proposed methods.
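For context, the two-parameter exponential has density f(x) = (1/β)·exp(-(x - μ)/β) for x ≥ μ, and its standard point estimates are μ̂ = min(x) for the threshold and β̂ = mean(x) - min(x) for the scale. The sketch below covers only these point estimates, not the significance-function intervals developed in the paper; the sample is simulated with assumed true values.

```python
import random

def fit_two_param_exponential(sample):
    """Point estimates (threshold, scale) for the two-parameter exponential."""
    mu_hat = min(sample)                           # threshold: sample minimum
    beta_hat = sum(sample) / len(sample) - mu_hat  # scale: mean excess over it
    return mu_hat, beta_hat

rng = random.Random(42)
mu_true, beta_true = 5.0, 2.0
sample = [mu_true + rng.expovariate(1.0 / beta_true) for _ in range(5000)]
mu_hat, beta_hat = fit_two_param_exponential(sample)
```

Note that μ̂ always overshoots μ slightly (the minimum of a sample above μ is above μ), which is one reason interval estimation for the threshold needs the careful treatment the paper provides.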
Parameter Estimation of the Extended Vasiček Model
Directory of Open Access Journals (Sweden)
Sanae RUJIVAN
2010-01-01
Full Text Available In this paper, an estimate of the drift and diffusion parameters of the extended Vasiček model is presented. The estimate is based on the method of maximum likelihood. We derive a closed-form expansion for the transition (probability) density of the extended Vasiček process and use the expansion to construct an approximate log-likelihood function for discretely sampled data of the process. Approximate maximum likelihood estimators (AMLEs) of the parameters are obtained by maximizing the approximate log-likelihood function. The convergence of the AMLEs to the true maximum likelihood estimators is obtained by increasing the number of terms in the expansion with a small time step size.
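For the constant-parameter Vasiček model the transition density is exactly Gaussian, so discretely sampled data follow an AR(1) process and exact ML estimates are available in closed form; this is the baseline that the paper's expansion generalizes. A minimal numpy sketch (our illustration, not the paper's method; synthetic data and assumed parameter values):

```python
import numpy as np

def vasicek_mle(r, dt):
    """Exact ML estimation for the constant-parameter Vasicek model
    dr = kappa*(theta - r)*dt + sigma*dW. Discretely sampled observations
    form a Gaussian AR(1), so OLS on r[t+1] vs r[t] gives the conditional
    MLE, from which the continuous-time parameters are recovered."""
    x, y = r[:-1], r[1:]
    a, c = np.polyfit(x, y, 1)                # fit y = a*x + c
    resid = y - (a * x + c)
    kappa = -np.log(a) / dt
    theta = c / (1.0 - a)
    # invert the exact conditional variance sigma^2 * (1 - a^2) / (2*kappa)
    sigma = np.sqrt(resid.var() * 2.0 * kappa / (1.0 - a**2))
    return kappa, theta, sigma

# simulate an exact Vasicek path and recover the parameters
rng = np.random.default_rng(1)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.02, 1.0 / 252.0, 50000
a = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * kappa))
r = np.empty(n)
r[0] = theta
for i in range(1, n):
    r[i] = theta + a * (r[i - 1] - theta) + sd * rng.standard_normal()
k_hat, th_hat, s_hat = vasicek_mle(r, dt)
```

The mean-reversion speed kappa is, as usual, the hardest parameter to pin down from a discretely sampled path; theta and sigma are recovered much more precisely.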
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
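The core idea, NLS over the nonlinear frequency parameter with the linear amplitude/phase parameters concentrated out, followed by Newton-Raphson refinement, can be sketched for a single noisy sinusoid (our simplified illustration with numerical derivatives; the paper's estimator handles the full three-phase model):

```python
import numpy as np

def nls_freq(s, w0, iters=20, h=1e-5):
    """Concentrated NLS frequency estimation refined by Newton-Raphson.
    For fixed frequency w the amplitude and phase enter linearly, so they
    are eliminated by linear least squares; Newton's method (here with
    numerical derivatives) then refines w on the concentrated cost."""
    t = np.arange(len(s))

    def cost(w):
        H = np.column_stack([np.cos(w * t), np.sin(w * t)])
        ab, *_ = np.linalg.lstsq(H, s, rcond=None)
        r = s - H @ ab
        return r @ r

    w = w0
    for _ in range(iters):
        g = (cost(w + h) - cost(w - h)) / (2 * h)                # gradient
        c = (cost(w + h) - 2 * cost(w) + cost(w - h)) / h**2     # curvature
        if c > 0:
            w -= g / c                                           # Newton step
    return w

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)
w_true = 2 * np.pi * 0.123
s = 1.5 * np.cos(w_true * t + 0.7) + 0.1 * rng.standard_normal(n)
# coarse initialization at the FFT peak, then Newton refinement
k = np.argmax(np.abs(np.fft.rfft(s)))
w_hat = nls_freq(s, w0=2 * np.pi * k / n)
```

The FFT peak guarantees the start is inside the main lobe of the cost, which is what makes the local Newton iteration converge to the global NLS minimum.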
Directory of Open Access Journals (Sweden)
Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H
2011-10-11
Full Text Available Abstract In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison.
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds in three steps. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
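The standard device behind such Kalman-filter-based parameter estimation is to append the unknown parameters to the state vector and let the filter estimate them jointly with the states. A toy sketch (our illustration, not the paper's filter variant): an extended Kalman filter recovering the rate constant k of a noisily observed exponential decay x' = -k*x.

```python
import numpy as np

def ekf_rate_estimate(y, dt, q=1e-6, r=1e-4):
    """Joint state/parameter EKF for the decay model x' = -k*x observed in
    noise. The unknown rate k is appended to the state vector, a common way
    of recasting parameter estimation as filtering."""
    z = np.array([y[0], 0.5])                  # initial guess [x, k]
    P = np.diag([r, 1.0])                      # initial covariance
    Q = np.diag([q, q])                        # small process noise
    H = np.array([[1.0, 0.0]])                 # we observe x only
    for yk in y[1:]:
        x, k = z
        z = np.array([x * (1.0 - k * dt), k])  # Euler-discretized prediction
        F = np.array([[1.0 - k * dt, -x * dt],
                      [0.0, 1.0]])             # Jacobian of the prediction
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                    # innovation variance
        K = P @ H.T / S                        # Kalman gain
        z = z + (K * (yk - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return z[1]

# synthetic data: true rate k = 0.8, observation noise std 0.01
rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
y = np.exp(-0.8 * t) + 0.01 * rng.standard_normal(t.size)
k_hat = ekf_rate_estimate(y, dt)
```

The cross-covariance between the state and the appended parameter is what carries information from the innovations into the parameter estimate; the small process noise q keeps the filter from freezing prematurely.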
Parameter Estimation of Photovoltaic Models via Cuckoo Search
Directory of Open Access Journals (Sweden)
Jieming Ma
2013-01-01
Full Text Available Since conventional methods are incapable of estimating the parameters of Photovoltaic (PV) models with high accuracy, bioinspired algorithms have attracted significant attention in the last decade. Cuckoo Search (CS) is inspired by the brood parasitism of some cuckoo species in combination with Lévy flight behavior. In this paper, a CS-based parameter estimation method is proposed to extract the parameters of single-diode models for commercial PV generators. Simulation results and experimental data show that the CS algorithm is capable of obtaining all the parameters with extremely high accuracy, reflected in a low Root-Mean-Squared-Error (RMSE) value. The proposed method outperforms the other algorithms applied in this study.
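A bare-bones Cuckoo Search, with Lévy steps generated by Mantegna's algorithm, fits in a few dozen lines; here it minimizes a simple quadratic test objective rather than the PV model RMSE of the paper (our sketch; population size, step scale, and abandonment fraction are illustrative choices):

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(4)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for heavy-tailed Levy-flight steps."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=15, pa=0.25, iters=500, lb=-5.0, ub=5.0):
    nests = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # new solutions by Levy flights around the current best nest
        for i in range(n):
            cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best),
                           lb, ub)
            fc = f(cand)
            j = rng.integers(n)                # compare with a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # a fraction pa of the worst nests is abandoned and rebuilt at random
        worst = fit.argsort()[-int(pa * n):]
        nests[worst] = rng.uniform(lb, ub, (len(worst), dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    return nests[fit.argmin()], fit.min()

x_best, f_best = cuckoo_search(lambda x: float(np.sum((x - 1.0) ** 2)), dim=3)
```

The heavy-tailed Lévy steps give occasional long jumps (global exploration) amid many short ones (local refinement), which is the property the abstract credits for the method's accuracy.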
Towards predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
Parameter Estimation of a Damped Compound Pendulum Using the Differential Evolution Algorithm
Directory of Open Access Journals (Sweden)
Saad Mohd Sazli
2016-01-01
Full Text Available This paper presents the parameter identification of a damped compound pendulum using the differential evolution algorithm. The procedure used to achieve the parameter identification of the experimental system consisted of input-output data collection, ARX model order selection and parameter estimation using the conventional least squares (LS) method and the differential evolution (DE) algorithm. A PRBS signal is used as the input signal to regulate the motor speed, whereas the output signal is taken from a position sensor. Both input and output data are used to estimate the parameters of the ARX model. The residual error between the actual and predicted output responses of the models is validated using the mean squared error (MSE). Analysis showed that the MSE value for LS is 0.0026 and the MSE value for DE is 3.6601×10⁻⁵. Based on the results obtained, it was found that DE has a lower MSE than the LS method.
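The LS baseline in such a comparison amounts to stacking lagged inputs and outputs into a regressor matrix and solving one linear least-squares problem. A minimal sketch on synthetic second-order ARX data (our illustration; the model orders, coefficients and the crude PRBS surrogate are assumed):

```python
import numpy as np

def arx_ls(u, y, na=2, nb=2):
    """Least-squares fit of the ARX model
    y[t] = -a1*y[t-1] - ... - a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb];
    returns theta = [a1..a_na, b1..b_nb]."""
    n = max(na, nb)
    rows = [np.r_[-y[t - na:t][::-1], u[t - nb:t][::-1]]
            for t in range(n, len(y))]
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta

# simulate a stable 2nd-order ARX system driven by a PRBS-like input
rng = np.random.default_rng(5)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
u = np.sign(rng.standard_normal(2000))        # crude PRBS surrogate
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = (-a1 * y[t - 1] - a2 * y[t - 2] + b1 * u[t - 1] + b2 * u[t - 2]
            + 0.01 * rng.standard_normal())
theta = arx_ls(u, y)
```

With white equation noise, as here, LS is consistent; the appeal of DE in the paper's setting is robustness of the search rather than a different statistical criterion.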
Parameter estimation of an aeroelastic aircraft using neural networks
Indian Academy of Sciences (India)
S C Raisinghani; A K Ghosh
2000-04-01
Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of predicting generalized force and moment coefficients using measured motion and control variables only, without any need for conventional normal elastic variables or their time derivatives, is proposed. Furthermore, it is shown that such a neural model can be used to extract equivalent stability and control derivatives of a flexible aircraft. Results are presented for aircraft with different levels of flexibility to demonstrate the utility of the neural approach for both modelling and estimation of parameters.
MPEG2 video parameter and no reference PSNR estimation
DEFF Research Database (Denmark)
Li, Huiying; Forchhammer, Søren
2009-01-01
MPEG coded video may be processed for quality assessment, postprocessed to reduce coding artifacts, or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access to the MPEG stream. This may be used in systems and applications where the coded stream is not accessible. Detection of MPEG I-frames and DCT (discrete cosine transform) block size is presented. For the I-frames, the quantization parameters are estimated. Combining these with statistics of the reconstructed...
Parameter Estimation in Stochastic Differential Equations; An Overview
DEFF Research Database (Denmark)
Nielsen, Jan Nygaard; Madsen, Henrik; Young, P. C.
2000-01-01
This paper presents an overview of the progress of research on parameter estimation methods for stochastic differential equations (mostly in the sense of Itô calculus) over the period 1981-1999. These are considered both without measurement noise and with measurement noise, where the discretely observed stochastic differential equations are embedded in a continuous-discrete time state space model. Every attempt has been made to include results from other scientific disciplines. Maximum likelihood estimation of parameters in nonlinear stochastic differential equations is in general not possible...
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
Robust Nonlinear Regression in Enzyme Kinetic Parameters Estimation
Directory of Open Access Journals (Sweden)
Maja Marasović
2017-01-01
Full Text Available Accurate estimation of essential enzyme kinetic parameters, such as Km and Vmax, is very important in modern biology. To date, linearization of kinetic equations is still widely established practice for determining these parameters in chemical and enzyme catalysis. Although the simplicity of linear optimization is alluring, these methods have certain pitfalls, owing to which they more often than not result in misleading estimates of enzyme parameters. In order to obtain more accurate predictions of parameter values, the use of nonlinear least-squares fitting techniques is recommended. However, when there are outliers present in the data, these techniques become unreliable. This paper proposes the use of a robust nonlinear regression estimator based on a modified Tukey's biweight function that can provide more resilient results in the presence of outliers and/or influential observations. Real and synthetic kinetic data have been used to test our approach. Monte Carlo simulations are performed to illustrate the efficacy and the robustness of the biweight estimator in comparison with the standard linearization methods and ordinary least-squares nonlinear regression. We then apply this method to experimental data for the tyrosinase enzyme (EC 1.14.18.1) extracted from Solanum tuberosum, Agaricus bisporus, and Pleurotus ostreatus. The results on both artificial and experimental data clearly show that the proposed robust estimator can be successfully employed to determine accurate values of Km and Vmax.
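The standard (unmodified) Tukey biweight can be combined with Gauss-Newton in an iteratively reweighted scheme; the sketch below fits the Michaelis-Menten equation to synthetic data containing one gross outlier. This is our simplified illustration, not the paper's modified estimator; the starting values and tuning constant c = 4.685 are conventional choices.

```python
import numpy as np

def tukey_weights(u):
    """Tukey biweight: zero weight beyond |u| >= 1, smooth downweighting inside."""
    w = (1.0 - u**2) ** 2
    w[np.abs(u) >= 1.0] = 0.0
    return w

def robust_mm_fit(S, v, iters=100, c=4.685):
    """Michaelis-Menten fit v = Vmax*S/(Km+S) by iteratively reweighted
    Gauss-Newton with Tukey's biweight and a MAD residual scale."""
    Vmax, Km = float(v.max()), float(np.median(S))   # crude starting values
    for _ in range(iters):
        pred = Vmax * S / (Km + S)
        r = v - pred
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = tukey_weights(r / (c * scale))
        # Gauss-Newton step on the weighted residuals
        J = np.column_stack([S / (Km + S), -Vmax * S / (Km + S) ** 2])
        sw = np.sqrt(w)
        dp, *_ = np.linalg.lstsq(J * sw[:, None], r * sw, rcond=None)
        Vmax += dp[0]
        Km = max(Km + dp[1], 1e-3)                   # keep Km positive
    return Vmax, Km

rng = np.random.default_rng(10)
S = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.8, 1.2, 2.0, 3.0, 4.0, 6.0, 8.0])
v = 2.0 * S / (0.5 + S) + 0.02 * rng.standard_normal(S.size)
v[5] += 1.5                                          # one gross outlier
Vmax_hat, Km_hat = robust_mm_fit(S, v)
```

Because the biweight gives zero weight to residuals beyond c times the robust scale, the outlying point is effectively removed from the fit once the scale estimate settles, and the recovered Km and Vmax stay close to the true values.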
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
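Whatever estimation strategy is used, the inner step is evaluating the single-diode equation; since the current appears on both sides, it is typically solved numerically. A sketch with Newton's method (our illustration; the parameter values are plausible but arbitrary, and production codes often use the Lambert-W form instead):

```python
import numpy as np

def diode_current(V, IL, I0, Rs, Rsh, nVth, iters=50):
    """Solve the implicit single-diode equation
        I = IL - I0*(exp((V + I*Rs)/nVth) - 1) - (V + I*Rs)/Rsh
    for I by Newton's method (vectorized over the voltage points)."""
    I = np.full_like(V, IL, dtype=float)       # start at the light current
    for _ in range(iters):
        e = np.exp((V + I * Rs) / nVth)
        f = IL - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / nVth - Rs / Rsh - 1.0
        I -= f / df                            # Newton update
    return I

# one synthetic I-V curve for plausible (but arbitrary) cell parameters
V = np.linspace(0.0, 0.6, 7)
I = diode_current(V, IL=5.0, I0=1e-9, Rs=0.02, Rsh=50.0, nVth=0.026 * 1.3)
```

Because the residual is concave and strictly decreasing in I, Newton's method started at IL converges monotonically, which makes this inner solve safe inside an outer parameter-fitting loop.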
Estimation of regional pulmonary perfusion parameters from microfocal angiograms
Clough, Anne V.; Al-Tinawi, Amir; Linehan, John H.; Dawson, Christopher A.
1995-05-01
An important application of functional imaging is the estimation of regional blood flow and volume using residue detection of vascular indicators. An indicator-dilution model applicable to tissue regions distal from the inlet site was developed. Theoretical methods for determining regional blood flow, volume, and mean transit time parameters from time-absorbance curves arise from this model. The robustness of the parameter estimation methods was evaluated using a computer-simulated vessel network model. Flow through arterioles, networks of capillaries, and venules was simulated. Parameter identification and practical implementation issues were addressed. The shape of the inlet concentration curve and moderate amounts of random noise did not affect the ability of the method to recover accurate parameter estimates. The parameter estimates degraded in the presence of significant dispersion of the measured inlet concentration curve as it traveled through arteries upstream from the microvascular region. The methods were applied to image data obtained using microfocal x-ray angiography to study the pulmonary microcirculation. Time-absorbance curves were acquired from a small feeding artery, the surrounding microvasculature and a draining vein of an isolated dog lung as contrast material passed through the field-of-view. Changes in regional microvascular volume were determined from these curves.
Parameter Estimation Technique of Nonlinear Prosthetic Hand System
Directory of Open Access Journals (Sweden)
M.H.Jali
2016-10-01
Full Text Available This paper illustrates a parameter estimation technique for a motorized prosthetic hand system. Prosthetic hands have become important devices that help amputees regain normal hand function. By integrating various types of actuators such as DC motors, hydraulics and pneumatics as well as mechanical parts, a highly useful and functional prosthetic device can be produced. One of the first steps in developing a prosthetic device is to design a control system, and mathematical modeling is derived to ease the later control design process. This paper explains the parameter estimation technique for a nonlinear dynamic model of the system obtained using the Lagrangian equation. The model of the system is derived by considering the energies of the finger when it is actuated by the DC motor. The parameter estimation technique is implemented using the Simulink Design Optimization toolbox in MATLAB. All the parameters are optimized until a satisfactory output response is achieved. The results show that the output response of the system with the estimated parameter values is better than that with the default values.
CADLIVE optimizer: web-based parameter estimation for dynamic models
Directory of Open Access Journals (Sweden)
Inoue Kentaro
2012-08-01
Full Text Available Abstract Computer simulation has been an important technique to capture the dynamics of biochemical networks. In most networks, however, few kinetic parameters have been measured in vivo because of experimental complexity. We develop a kinetic parameter estimation system, named the CADLIVE Optimizer, which comprises genetic algorithms-based solvers with a graphical user interface. This optimizer is integrated into the CADLIVE Dynamic Simulator to attain efficient simulation for dynamic models.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is...
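Stripped to its essentials, Simulated Annealing is a Metropolis acceptance rule with a decreasing temperature. The sketch below minimizes a generic multimodal test function rather than the (far more involved) ETAS log-likelihood; it is our illustration, and the cooling schedule and step size are arbitrary choices:

```python
import numpy as np

def anneal(f, x0, T0=2.0, cooling=0.998, iters=8000, step=0.5, seed=6):
    """Minimal Simulated Annealing: worse moves are accepted with probability
    exp(-delta/T), and the temperature T is cooled geometrically."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, T = f(x), T0
    best, fbest = x.copy(), fx
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:                      # track the best point seen
                best, fbest = x.copy(), fx
        T *= cooling
    return best, fbest

# multimodal test objective (Rastrigin-like), global minimum 0 at the origin
f = lambda x: float(np.sum(x**2 + 2.0 * (1.0 - np.cos(2.0 * np.pi * x))))
x_best, f_best = anneal(f, x0=[3.0, -2.0])
```

The high-temperature phase lets the walk climb out of local minima, which is exactly the property that makes the method attractive for the multimodal ETAS likelihood surface.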
Human ECG signal parameters estimation during controlled physical activity
Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz
2015-09-01
ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on the Pan-Tompkins algorithm to estimate their parameters and to test the method.
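The Pan-Tompkins chain (after bandpass filtering) is essentially differentiation, squaring, moving-window integration and adaptive thresholding. A stripped-down sketch on a synthetic spike train (our illustration; the real algorithm's filtering and threshold adaptation are considerably more elaborate):

```python
import numpy as np

def qrs_detect(ecg, fs):
    """Pan-Tompkins-style R-peak detection sketch: derivative -> squaring ->
    moving-window integration -> threshold with a 200 ms refractory period."""
    d = np.diff(ecg)                          # derivative emphasizes steep slopes
    sq = d ** 2                               # squaring rectifies and sharpens
    win = int(0.15 * fs)                      # ~150 ms integration window
    mwi = np.convolve(sq, np.ones(win) / win, mode="same")
    above = mwi > 0.5 * mwi.max()             # crude fixed threshold
    edges = np.flatnonzero(above[1:] & ~above[:-1])   # rising edges = beats
    peaks, last = [], -np.inf
    for e in edges:
        if e - last > 0.2 * fs:               # enforce refractory period
            peaks.append(e)
            last = e
    return np.array(peaks)

# synthetic "ECG": narrow spikes at 1 Hz plus mild baseline noise
fs = 250
t = np.arange(0.0, 10.0, 1.0 / fs)
ecg = sum(np.exp(-((t - c) ** 2) / (2 * 0.01**2))
          for c in np.arange(0.5, 10.0, 1.0))
ecg = ecg + 0.01 * np.random.default_rng(9).standard_normal(t.size)
beats = qrs_detect(ecg, fs)
```

The squaring and integration steps are what give the method its robustness to the kind of broadband muscle noise that increases during exercise, which is why the authors base their processing on it.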
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
Ocean wave parameters and spectrum estimated from single and dual high-frequency radar systems
Hisaki, Yukiharu
2016-09-01
The high-frequency (HF) radar inversion algorithm for spectrum estimation (HIAS) can estimate ocean wave directional spectra from both dual and single radars. Wave data from a dual radar and two single radars are compared with in situ observations. The agreement of the wave parameters estimated from the dual radar with those from in situ observations is the best of the three. In contrast, the agreement of the wave parameters estimated from the single radar in which no Doppler spectra are observed in the cell closest to the in situ observation point is the worst among the three. Wave data from the dual radar and the two single radars are compared. The comparison of the wave heights estimated from the single and dual radars shows that the area sampled by the Doppler spectra for the single radar is more critical than the number of Doppler spectra in terms of agreement with the dual-radar-estimated wave heights. In contrast, the comparison of the wave periods demonstrates that the number of Doppler spectra observed by the single radar is more critical for agreement of the wave periods than the area of the Doppler spectra. There is a bias directed toward the radar position in the single-radar-estimated wave direction.
Parameter estimation and investigation of a bolted joint model
Shiryayev, O. V.; Page, S. M.; Pettit, C. L.; Slater, J. C.
2007-11-01
Mechanical joints are a primary source of variability in the dynamics of built-up structures. Physical phenomena in the joint are quite complex and therefore impractical to model at the micro-scale. This motivates the development of lumped parameter joint models with discrete interfaces so that they can be easily implemented in finite element codes. Among the most important considerations in choosing a model for dynamically excited systems is its ability to model energy dissipation. This translates into the need for accurate and reliable methods to measure model parameters and estimate their inherent variability from experiments. The adjusted Iwan model was identified as a promising candidate for representing joint dynamics. Recent research focused on this model has exclusively employed impulse excitation in conjunction with neural networks to identify the model parameters. This paper presents an investigation of an alternative parameter estimation approach for the adjusted Iwan model, which employs data from oscillatory forcing. This approach is shown to produce parameter estimates with precision similar to the impulse excitation method for a range of model parameters.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
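At the core of such an MCMC calibration is a random-walk Metropolis sampler over the parameter space. The toy sketch below draws the posterior of a single rate parameter in a recession-like model y = exp(-k*t) + noise under a flat prior (our illustration; the study's likelihood formulations, AR(1) error model and hydrological model are far richer):

```python
import numpy as np

def metropolis(loglike, x0, prop_sd, n=20000, seed=7):
    """Random-walk Metropolis: a minimal MCMC sampler of the kind used (in
    far more elaborate form) to draw model parameters from a posterior."""
    rng = np.random.default_rng(seed)
    x, lp = x0, loglike(x0)
    chain = np.empty(n)
    for i in range(n):
        cand = x + prop_sd * rng.standard_normal()
        lc = loglike(cand)
        if np.log(rng.random()) < lc - lp:    # Metropolis acceptance rule
            x, lp = cand, lc
        chain[i] = x
    return chain

# toy "recession" model y = exp(-k*t) + Gaussian noise, flat prior on k
rng = np.random.default_rng(8)
t = np.linspace(0.0, 4.0, 50)
y = np.exp(-1.2 * t) + 0.05 * rng.standard_normal(t.size)
loglike = lambda k: -0.5 * np.sum((y - np.exp(-k * t)) ** 2) / 0.05**2
chain = metropolis(loglike, x0=1.0, prop_sd=0.05)
k_post = chain[5000:].mean()                  # posterior mean after burn-in
```

The choice of log-likelihood is exactly where the study's three formulations (GLUE-style objective, simple Gaussian errors, AR(1) errors) would be swapped in.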
GLONASS fractional-cycle bias estimation across inhomogeneous receivers for PPP ambiguity resolution
Geng, Jianghui; Bock, Yehuda
2016-04-01
The key issue to enable precise point positioning with ambiguity resolution (PPP-AR) is to estimate fractional-cycle biases (FCBs), which mainly relate to receiver and satellite hardware biases, over a network of reference stations. While this has been well achieved for GPS, FCB estimation for GLONASS is difficult because (1) satellites do not share the same frequencies as a result of Frequency Division Multiple Access (FDMA) signals; and (2), even worse, pseudorange hardware biases of receivers vary in an irregular manner with manufacturers, antennas, domes, firmware, etc., which especially complicates GLONASS PPP-AR over inhomogeneous receivers. We propose a general approach where external ionosphere products are introduced into GLONASS PPP to estimate precise FCBs that are less impaired by pseudorange hardware biases of diverse receivers to enable PPP-AR. One month of GLONASS data at about 550 European stations were processed. From an exemplary network of 51 inhomogeneous receivers, including four receiver types with various antennas and spanning about 800 km in both longitudinal and latitudinal directions, we found that 92.4 % of all fractional parts of GLONASS wide-lane ambiguities agree well within ± 0.15 cycles with a standard deviation of 0.09 cycles if global ionosphere maps (GIMs) are introduced, compared to only 51.7 % within ± 0.15 cycles and a larger standard deviation of 0.22 cycles otherwise. Hourly static GLONASS PPP-AR at 40 test stations can reach position estimates of about 1 and 2 cm in RMS from ground truth for the horizontal and vertical components, respectively, which is comparable to hourly GPS PPP-AR. Integrated GLONASS and GPS PPP-AR can further achieve an RMS of about 0.5 cm in horizontal and 1-2 cm in vertical components. We stress that the performance of GLONASS PPP-AR across inhomogeneous receivers depends on the accuracy of ionosphere products. GIMs have a modest accuracy of only 2-8 TECU (Total Electron Content Unit) in vertical
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...
Parameter identification and slip estimation of induction machine
Orman, Maciej; Orkisz, Michal; Pinto, Cajetan T.
2011-05-01
This paper presents a newly developed algorithm for induction machine rotor speed estimation and parameter detection. The proposed algorithm is based on spectrum analysis of the stator current. The main idea is to find the best fit of motor parameters and rotor slip with the group of characteristic frequencies which are always present in the current spectrum. Rotor speed and parameters such as pole pairs or number of rotor slots are the results of the presented algorithm. Numerical calculations show that the method yields very accurate results and can be an important part of machine monitoring systems.
Low Complexity Parameter Estimation For Off-the-Grid Targets
Jardak, Seifallah
2015-10-05
In multiple-input multiple-output radar, to estimate the reflection coefficient, spatial location, and Doppler shift of a target, a derived cost function is usually evaluated and optimized over a grid of points. The performance of such algorithms is directly affected by the size of the grid: increasing the number of points will enhance the resolution of the algorithm but exponentially increase its complexity. In this work, to estimate the parameters of a target, a reduced-complexity super-resolution algorithm is proposed. For off-the-grid targets, it uses a low-order two-dimensional fast Fourier transform to determine a suboptimal solution and then an iterative algorithm to jointly estimate the spatial location and Doppler shift. Simulation results show that the mean square estimation error of the proposed estimators achieves the Cramér-Rao lower bound. © 2015 IEEE.
Directory of Open Access Journals (Sweden)
Wang Cong
2016-01-01
Full Text Available Because of poor radio frequency coil uniformity and gradient-driven eddy currents, there is much noise and intensity inhomogeneity (bias) in brain magnetic resonance (MR) images, and it severely affects segmentation accuracy. Good segmentation results are difficult to achieve by traditional methods; therefore, in this paper, a modified brain MR image segmentation and bias field estimation model based on local and global information is proposed. We first construct local constraints including image neighborhood information in a Gaussian kernel mapping space, and then the complete regularization is established by introducing nonlocal spatial information of the MR image. The weighting between local and global information is automatically adjusted according to local image information. At the same time, bias field information is coupled with the model, which not only reduces noise interference but also effectively estimates the bias field information. Experimental results demonstrate that the proposed algorithm is highly robust to noise and that the bias field is well corrected.
Influence of parameter estimation uncertainty in Kriging: Part 2 - Test and case study applications
Directory of Open Access Journals (Sweden)
E. Todini
2001-01-01
Full Text Available The theoretical approach introduced in Part 1 is applied to a numerical example and to the case of yearly average precipitation estimation over the Veneto Region in Italy. The proposed methodology was used to assess the effects of parameter estimation uncertainty on Kriging estimates and on their estimated error variance. The Maximum Likelihood (ML) estimator proposed in Part 1 was applied to the zero-mean deviations from yearly average precipitation over the Veneto Region in Italy, obtained after the elimination of a non-linear drift with elevation. Three different semi-variogram models were used, namely the exponential, the Gaussian and the modified spherical, and the relevant biases as well as the increases in variance were assessed. A numerical example was also conducted to demonstrate how the procedure leads to unbiased estimates of the random functions. One hundred sets of 82 observations were generated by means of the exponential model on the basis of the parameter values identified for the Veneto Region rainfall problem and taken as characterising the true underlying process. The parameter values and the consequent cross-validation errors were estimated from each sample. The cross-validation errors were first computed in the classical way and then corrected with the procedure derived in Part 1. Both sets, original and corrected, were then tested, by means of the likelihood ratio test, against the null hypothesis of deriving from a zero-mean process with unknown covariance. The results of the experiment clearly show the effectiveness of the proposed approach. Keywords: yearly rainfall, maximum likelihood, Kriging, parameter estimation uncertainty
Cubic spline approximation techniques for parameter estimation in distributed systems
Banks, H. T.; Crowley, J. M.; Kunisch, K.
1983-01-01
Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
On Modal Parameter Estimates from Ambient Vibration Tests
DEFF Research Database (Denmark)
Agneni, A.; Brincker, Rune; Coppotelli, B.
2004-01-01
Modal parameter estimation from ambient vibration testing is turning into the preferred technique when one is interested in systems under actual loadings and operational conditions. Moreover, with this approach, expensive devices to excite the structure are not needed, since it can be adequately...
Estimation of coal quality parameters using disjunctive kriging
Energy Technology Data Exchange (ETDEWEB)
Tercan, A.E. [Hacettepe University, Department of Mining Engineering, Beytepe (Turkey)
1998-07-01
Disjunctive kriging is a nonlinear estimation technique that allows estimation of the conditional probability that the value of a coal quality parameter is greater than a cutoff value. The method can be used in management decision making to help control blending and plan coal quality sampling. The use of disjunctive kriging is illustrated using data from the Kangal coal deposit. 7 refs.
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function, and a filter system was used to filter the noisy measurement waveforms. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
IRT parameter estimation with response times as collateral information
Linden, W.J. van der; Klein Entink, R.H.; Fox, J.-P.
2010-01-01
Hierarchical modeling of responses and response times on test items facilitates the use of response times as collateral information in the estimation of the response parameters. In addition to the regular information in the response data, two sources of collateral information are identified: (a) the
Online vegetation parameter estimation using passive microwave remote sensing observations
In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
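The state-augmentation idea described above — treating the unknown coefficient of a linear observation operator as part of the state and updating it with an ensemble Kalman filter — can be sketched for a scalar case as follows. The model, ensemble size, and all numerical values are illustrative assumptions, not from this record.

```python
import numpy as np

def enkf_parameter_update(param_ens, y_obs, x_known, obs_var, rng):
    """One EnKF analysis step for a scalar parameter a in y = a * x + noise,
    with the parameter treated as (part of) the augmented state."""
    y_pred = param_ens * x_known                      # per-member prediction
    cov_py = np.cov(param_ens, y_pred)[0, 1]          # param/obs covariance
    var_y = np.var(y_pred, ddof=1) + obs_var
    gain = cov_py / var_y                             # Kalman gain
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), param_ens.size)
    return param_ens + gain * (perturbed - y_pred)    # perturbed-obs update

rng = np.random.default_rng(0)
true_a = 2.0
ens = rng.normal(0.0, 2.0, 200)       # initial (vague) parameter ensemble
for _ in range(50):                   # assimilate 50 noisy observations
    x = rng.uniform(1.0, 2.0)
    y = true_a * x + rng.normal(0.0, 0.1)
    ens = enkf_parameter_update(ens, y, x, 0.01, rng)
```

After a few analysis steps the ensemble mean settles near the true coefficient, which is the "adaptive online estimation" behavior the abstract refers to.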
Visco-piezo-elastic parameter estimation in laminated plate structures
DEFF Research Database (Denmark)
Araujo, A. L.; Mota Soares, C. M.; Herskovits, J.;
2009-01-01
A parameter estimation technique is presented in this article, for identification of elastic, piezoelectric and viscoelastic properties of active laminated composite plates with surface-bonded piezoelectric patches. The inverse method presented uses experimental data in the form of a set of measu...
DEFF Research Database (Denmark)
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik;
1995-01-01
and the growth of the biomass are described by the Monod model consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values. Estimation of the parameters was obtained using an iterative maximum likelihood method and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only on a 4% alpha level.
Hybrid fault diagnosis of nonlinear systems using neural parameter estimators.
Sobhani-Tehrani, E; Talebi, H A; Khorasani, K
2014-02-01
This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems, taking advantage of both the system's mathematical model and the adaptive nonlinear approximation capability of computational intelligence techniques. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault tolerant observer (FTO) is designed to extend the capability of the two NPEs, which originally assume full state measurements, to systems that have only partial state measurements. The proposed FTO is a neural state estimator that can estimate unmeasured states even in the presence of faults. The estimated and the measured states then comprise the inputs to the two proposed FDII schemes. Simulation results for FDII of reaction wheels of a three-axis stabilized satellite in the presence of disturbances and noise demonstrate the effectiveness of the proposed FDII solutions under partial state measurements.
PARAMETER ESTIMATION METHODOLOGY FOR NONLINEAR SYSTEMS: APPLICATION TO INDUCTION MOTOR
Institute of Scientific and Technical Information of China (English)
G.KENNE; F.FLORET; H.NKWAWO; F.LAMNABHI-LAGARRIGUE
2005-01-01
This paper deals with on-line state and parameter estimation of a reasonably large class of nonlinear continuous-time systems using a step-by-step sliding mode observer approach. The method proposed can also be used for adaptation to parameters that vary with time. The other interesting feature of the method is that it is easily implementable in real-time. The efficiency of this technique is demonstrated via the on-line estimation of the electrical parameters and rotor flux of an induction motor. This application is based on the standard model of the induction motor expressed in rotor coordinates with the stator current and voltage as well as the rotor speed assumed to be measurable. Real-time implementation results are then reported and the ability of the algorithm to rapidly estimate the motor parameters is demonstrated. These results show the robustness of this approach with respect to measurement noise, discretization effects, parameter uncertainties and modeling inaccuracies. Comparisons between the results obtained and those of the classical recursive least square algorithm are also presented. The real-time implementation results show that the proposed algorithm gives better performance than the recursive least square method in terms of the convergence rate and the robustness with respect to measurement noise.
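The recursive least square algorithm used as the baseline comparison above can be sketched for a generic linear-in-parameters model. The regressor and parameter values are illustrative, not the induction-motor model of the paper.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=1.0):
    """One recursive least squares step: w <- w + K * (y - phi @ w)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector
    w = w + k * (y - phi @ w)            # correct with prediction error
    P = (P - np.outer(k, Pphi)) / lam    # covariance update
    return w, P

rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.7])           # unknown "machine" parameters
w, P = np.zeros(2), np.eye(2) * 1e3      # vague prior: large initial P
for _ in range(200):
    phi = rng.normal(size=2)             # regressor (e.g. filtered signals)
    y = phi @ true_w + rng.normal(0.0, 0.01)
    w, P = rls_update(w, P, phi, y)
```

Setting the forgetting factor `lam` below 1 lets the estimator track parameters that vary with time, at the cost of higher steady-state variance.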
Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea
Sawlan, Zaid A
2012-12-01
Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines a tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
Cosmological parameter estimation with free-form primordial power spectrum
Hazra, Dhiraj Kumar; Souradeep, Tarun
2013-01-01
Constraints on the main cosmological parameters using CMB or large scale structure data are usually based on a power-law assumption for the primordial power spectrum (PPS). However, in the absence of a preferred model for the early universe, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed power-law form of the PPS. In this paper, for the first time, we perform cosmological parameter estimation allowing a free-form primordial spectrum. This is in fact the most general approach to estimate cosmological parameters without assuming any particular form for the primordial spectrum. We use direct reconstruction of the PPS for any point in the cosmological parameter space using the recently modified Richardson-Lucy algorithm; however, other reconstruction methods could be used for this purpose as well. We use WMAP 9 year data in our analysis considering the CMB lensing effect and we report, for the first time, that the flat spatial universe with no cosmol...
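A minimal sketch of the Richardson-Lucy iteration used for such free-form reconstructions, here recovering a non-negative vector p from noiseless data d = K p. The toy kernel and values are illustrative assumptions, not the modified algorithm of the paper.

```python
import numpy as np

def richardson_lucy(d, K, n_iter=500):
    """Richardson-Lucy iteration recovering non-negative p from d = K @ p.
    Assumes the columns of K are normalized (K.T @ 1 = 1)."""
    p = np.full(K.shape[1], d.sum() / K.shape[1])   # flat initial guess
    for _ in range(n_iter):
        p = p * (K.T @ (d / (K @ p)))               # multiplicative update
    return p

K = np.array([[0.8, 0.2],
              [0.2, 0.8]])          # toy kernel, columns sum to 1
p_true = np.array([1.0, 3.0])
d = K @ p_true                      # noiseless "data"
p_rec = richardson_lucy(d, K)
```

The multiplicative form preserves positivity of the reconstruction at every iteration, which is why variants of this scheme are attractive for spectra that must remain non-negative.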
Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images
Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.
2016-04-01
Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all-weather conditions. Thus, SAR images are suitable for use in rice-growing regions that are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of rice biophysical parameter estimation using multitemporal RADARSAT-2 images, and to develop the estimation models. Three RADARSAT-2 images were acquired during the rice critical growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ 0) or their ratios and rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the VV and VH σ 0 ratio (σ 0 VV/ σ 0 VH) throughout all growth stages. Regression models were developed between the biophysical parameters and σ 0 VV/ σ 0 VH. The results suggest that RADARSAT-2 data has great potential for rice biophysical parameter estimation and timely rice growth monitoring.
Knapczyk, Frances N; Conner, Jeffrey K
2007-10-01
Kingsolver et al.'s review of phenotypic selection gradients from natural populations provided a glimpse of the form and strength of selection in nature and how selection on different organisms and traits varies. Because this review's underlying database could be a key tool for answering fundamental questions concerning natural selection, it has spawned discussion of potential biases inherent in the review process. Here, we explicitly test for two commonly discussed sources of bias: sampling error and publication bias. We model the relationship between variance among selection gradients and sample size that sampling error produces by subsampling large empirical data sets containing measurements of traits and fitness. We find that this relationship was not mimicked by the review data set and therefore conclude that sampling error does not bias estimations of the average strength of selection. Using graphical tests, we find evidence for bias against publishing weak estimates of selection only among very small studies (N<38). However, this evidence is counteracted by excess weak estimates in larger studies. Thus, estimates of average strength of selection from the review are less biased than is often assumed. Devising and conducting straightforward tests for different biases allows concern to be focused on the most troublesome factors.
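The sampling-error model described above — subsampling a large trait/fitness data set and examining how the variance among estimated selection gradients shrinks with sample size — can be sketched as follows. The function names and the simulated data are illustrative.

```python
import numpy as np

def subsample_slope_variance(trait, fitness, n, reps=500, seed=0):
    """Variance among regression slopes (selection-gradient analogues)
    computed on random subsamples of size n from a large data set."""
    rng = np.random.default_rng(seed)
    slopes = np.empty(reps)
    for i in range(reps):
        idx = rng.choice(trait.size, size=n, replace=False)
        slopes[i] = np.polyfit(trait[idx], fitness[idx], 1)[0]
    return slopes.var()

rng = np.random.default_rng(42)
trait = rng.normal(size=5000)
fitness = 0.2 * trait + rng.normal(size=5000)   # weak "true" selection
v_small = subsample_slope_variance(trait, fitness, n=20, seed=1)
v_large = subsample_slope_variance(trait, fitness, n=200, seed=2)
```

Comparing the empirical variance-versus-N curve from such subsampling with the spread observed in a review database is the kind of test for sampling-error bias the record describes.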
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.
Boonvisut, Pasu; Cavuşoğlu, M Cenk
2013-10-01
Robotic motion planning algorithms used for task automation in robotic surgical systems rely on the availability of accurate models of the target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.
Power Network Parameter Estimation Method Based on Data Mining Technology
Institute of Scientific and Technical Information of China (English)
ZHANG Qi-ping; WANG Cheng-min; HOU Zhi-fian
2008-01-01
The parameter values, which actually change with the circumstances, weather, load level, etc., have a great effect on the results of state estimation. A new parameter estimation method based on data mining technology was proposed. The clustering method was used to classify the historical data in the supervisory control and data acquisition (SCADA) database into several types. Data processing technology was applied to treat isolated points, missing data and noise data in the samples of the classified groups. The measurement data belonging to each classification were introduced into a linear regression equation in order to obtain the regression coefficients and actual parameters by the least square method. A practical system demonstrates the correctness, reliability and strong practicability of the proposed method.
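The pipeline described above — cluster historical operating points, then fit a linear regression per cluster by least squares — can be sketched with a simple 1-D k-means. The data and names are illustrative, not actual SCADA records.

```python
import numpy as np

def fit_per_cluster(load, measurement, n_clusters=2, n_iter=20):
    """1-D k-means on load level, then a least-squares line per cluster."""
    centers = np.linspace(load.min(), load.max(), n_clusters)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(load[:, None] - centers[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = load[labels == k].mean()
    coeffs = {}
    for k in range(n_clusters):
        m = labels == k
        A = np.column_stack([load[m], np.ones(m.sum())])
        coeffs[k], *_ = np.linalg.lstsq(A, measurement[m], rcond=None)
    return labels, coeffs

rng = np.random.default_rng(7)
low = rng.uniform(0.0, 1.0, 100)        # one operating regime
high = rng.uniform(10.0, 11.0, 100)     # another regime, different parameters
load = np.concatenate([low, high])
meas = np.concatenate([1.0 * low + 0.5, 3.0 * high - 2.0])
meas = meas + rng.normal(0.0, 0.01, load.size)
labels, coeffs = fit_per_cluster(load, meas)
```

Each cluster yields its own regression coefficients, mirroring the idea that line parameters differ across weather and load regimes.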
Estimating Arrhenius parameters using temperature programmed molecular dynamics
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-01
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
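The core estimator described above — maximum likelihood rates from exponential waiting times at several temperatures, followed by an Arrhenius fit — can be sketched as follows. The prototype barrier and prefactor are illustrative assumptions, not values from the paper.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_from_waiting_times(waits_by_T):
    """ML rate at each temperature (k = n / sum of waiting times for an
    exponential waiting-time distribution), then a least-squares Arrhenius
    fit ln k = ln A - Ea / (kB T). Returns (prefactor A, barrier Ea in eV)."""
    temps = np.array(sorted(waits_by_T))
    rates = np.array([len(waits_by_T[T]) / np.sum(waits_by_T[T]) for T in temps])
    slope, intercept = np.polyfit(1.0 / (KB * temps), np.log(rates), 1)
    return np.exp(intercept), -slope

# Synthetic waiting times drawn from a known Arrhenius process
rng = np.random.default_rng(0)
A_true, Ea_true = 1.0e12, 0.5
waits = {T: rng.exponential(1.0 / (A_true * np.exp(-Ea_true / (KB * T))), 2000)
         for T in (300.0, 350.0, 400.0, 450.0, 500.0)}
A_hat, Ea_hat = arrhenius_from_waiting_times(waits)
```

Once A and Ea are in hand, the fitted expression extrapolates rates to low temperatures where direct simulation of transitions would be prohibitively rare, which is the payoff the abstract highlights.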
Directory of Open Access Journals (Sweden)
Zhang Zhang
2012-03-01
Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.
Directory of Open Access Journals (Sweden)
Sander MJ van Kuijk
2016-03-01
Full Text Available Background The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions about missing data mechanisms are violated. Methods The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) with different missing data mechanisms (missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR)). The study focused on the point estimation of regression coefficients and standard errors. Results When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in the case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion The authors demonstrated that MI was only superior to CC analysis in the case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions that were made about the mechanism that caused missing data.
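The MAR-conditional-on-Y result above (complete-case coefficients underestimated) is easy to reproduce in a minimal simulation. This sketch shows only the complete-case bias and omits the MI step; all distributions and the missingness rule are illustrative assumptions.

```python
import numpy as np

def cc_slope_under_mar(n=20000, seed=0):
    """Complete-case slope of y on x when x is missing conditional on y
    (MAR): cases with large y are dropped more often, attenuating the
    estimated coefficient below its true value of 1."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = x + rng.normal(size=n)                        # true slope = 1
    keep = rng.random(n) < 1.0 / (1.0 + np.exp(y))    # P(observed) falls with y
    return np.polyfit(x[keep], y[keep], 1)[0]

slope_cc = cc_slope_under_mar()
```

Because the retained cases are selected on the outcome, the complete-case slope lands well below 1, matching the underestimation reported in the abstract's MAR scenarios.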
Zhang, Zhang
2012-03-22
Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.
DEFF Research Database (Denmark)
Kaerlev, Linda; Kolstad, Henrik; Hansen, Åse Marie;
2011-01-01
Low participation in population-based follow-up studies addressing psychosocial risk factors may cause biased estimation of health risk but the issue has seldom been examined. We compared risk estimates for selected health outcomes among respondents and the entire source population.
Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank
2017-03-01
Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, over the last 15 years since the inception of MSMs, it has been controversially discussed, and not yet answered, how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA version 2.3.
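The baseline estimator the paper analyses — a row-normalized count matrix pooled over many short trajectories — can be sketched as follows. The two-state chain is a toy assumption, and the paper's OOM correction is not implemented here.

```python
import numpy as np

def msm_transition_matrix(trajs, n_states, lag=1):
    """Row-normalized count estimate of an MSM transition matrix at a given
    lag, pooled over an ensemble of (possibly short) trajectories."""
    C = np.zeros((n_states, n_states))
    for traj in trajs:
        for i, j in zip(traj[:-lag], traj[lag:]):
            C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
trajs = []
for _ in range(200):                 # many short trajectories, all from state 0
    s, traj = 0, [0]
    for _ in range(49):
        s = int(rng.choice(2, p=P_true[s]))
        traj.append(s)
    trajs.append(traj)
P_hat = msm_transition_matrix(trajs, 2)
```

For a perfectly discretized Markov chain this count estimator converges to the true transition matrix regardless of the (non-equilibrium) starting distribution; the bias the paper studies arises when the discretization is imperfect and short lag times are used.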
Influence of measurement errors and estimated parameters on combustion diagnosis
Energy Technology Data Exchange (ETDEWEB)
Payri, F.; Molina, S.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n. 46022 Valencia (Spain); Armas, O. [Departamento de Mecanica Aplicada e Ingenieria de proyectos, Universidad de Castilla-La Mancha. Av. Camilo Jose Cela s/n 13071,Ciudad Real (Spain)
2006-02-01
Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a diagnosis combustion model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits to the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors. (author)
Directory of Open Access Journals (Sweden)
Jonathan R Karr
2015-05-01
Full Text Available Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Narang, Pooja; Wilson Sayres, Melissa A
2016-12-31
Male mutation bias, when more mutations are passed on via the male germline than via the female germline, is observed across mammals. One common way to infer the magnitude of male mutation bias, α, is to compare levels of neutral sequence divergence between genomic regions that spend different amounts of time in the male and female germline. For great apes, including human, we show that estimates of divergence are reduced in putatively unconstrained regions near genes relative to unconstrained regions far from genes. Divergence increases with increasing distance from genes on both the X chromosome and autosomes, but increases faster on the X chromosome than autosomes. As a result, ratios of X/A divergence increase with increasing distance from genes and corresponding estimates of male mutation bias are significantly higher in intergenic regions near genes versus far from genes. Future studies in other species will need to carefully consider the effect that genomic location will have on estimates of male mutation bias.
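A common way to turn an X/A divergence ratio into an estimate of male mutation bias α is the Miyata-style relation X/A = (2/3)(2 + α)/(1 + α); solving for α gives the small helper below. This is a standard textbook formula, not code from the paper.

```python
def alpha_from_xa(ratio):
    """Solve X/A = (2/3) * (2 + a) / (1 + a) for the male mutation bias a
    (Miyata-style estimator from X/autosome divergence)."""
    return (4.0 - 3.0 * ratio) / (3.0 * ratio - 2.0)

alpha_no_bias = alpha_from_xa(1.0)   # equal X and A divergence implies a = 1
```

Because α is a nonlinear function of the ratio, the reduced divergence near genes that the study documents translates directly into location-dependent α estimates, which is the paper's central caution.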
Global parameter estimation of the Cochlodinium polykrikoides model using bioassay data
Institute of Scientific and Technical Information of China (English)
CHO Hong-Yeon; PARK Kwang-Soon; KIM Sung
2016-01-01
Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on the aquacultures of the coastal seas of Korea and Japan. Information on their expected movement tracks and boundaries of influence is very useful and important for the effective establishment of a reduction plan. In general, the information is supported by a red-tide (a.k.a. algal bloom) model. The performance of the model is highly dependent on the accuracy of parameters, which are the coefficients of functions approximating the biological growth and loss patterns of the C. polykrikoides. These parameters have been estimated using bioassay data composed of growth-limiting factor and net growth rate value pairs. In the case of the C. polykrikoides, the parameters differ from each other depending on the data used because the bioassay data are relatively abundant compared with those of other algal species. The parameters estimated by one specific dataset can be viewed as locally optimized because they are adjusted only by that dataset. In cases where another data set is used, the estimation error might be considerable. In this study, the parameters are estimated using all available data sets rather than any one specific data set and thus can be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the values of the experimental and estimated rates. Based on quantitative error analysis, the root-mean squared errors of the global parameters show smaller values, approximately 25%–50%, than the values of the local parameters. In addition, bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using the in-situ monitoring data is highly required. As opposed to the bioassay
Starrfelt, Jostein; Liow, Lee Hsiang
2016-04-01
The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but also has inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.
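The zero-truncated Poisson reasoning at the heart of TRiPS can be sketched briefly. This is a simplified illustration, not the authors' code: it assumes a single sampling rate shared by all species and omits the interval estimation.

```python
import math

def estimate_true_richness(counts):
    """Estimate sampling rate and true richness from per-species occurrence
    counts, assuming each species' count is Poisson with a shared rate and
    species with zero occurrences go unobserved (a zero-truncated sample)."""
    n_obs = len(counts)
    m = sum(counts) / n_obs                      # mean of the truncated counts
    if m <= 1.0:
        raise ValueError("mean occurrence count must exceed 1")
    # The zero-truncated Poisson mean is lam / (1 - exp(-lam)); that expression
    # is increasing in lam and exceeds lam, so the root lies in (0, m): solve
    # by bisection.
    lo, hi = 1e-12, m
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - math.exp(-mid)) < m:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    p_detect = 1.0 - math.exp(-lam)              # probability a species is seen
    return n_obs / p_detect, lam

# Ten observed species whose mean count equals the truncated-Poisson mean for
# lam = 2 should give back lam = 2 and an estimated richness above 10.
mean_for_lam2 = 2.0 / (1.0 - math.exp(-2.0))
richness, lam_hat = estimate_true_richness([mean_for_lam2] * 10)
```

The estimated richness exceeds the observed count exactly by the factor 1 / (1 - e^(-lam)), the chance that a species with this sampling rate is seen at least once.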
Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization
Nitta, Naotaka; Takeda, Naoto
2008-05-01
The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
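The central tuning knob the abstract describes, the differential window width used when estimating a velocity gradient, can be illustrated on a toy profile. The function names and the parabolic (Poiseuille-like) profile are our own illustrative choices, not the paper's setup:

```python
def ls_slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def shear_rate(profile, dr, center, half_width):
    """Velocity gradient at grid index `center`, taken as the least-squares
    slope of the profile over a differential window of 2*half_width+1 points."""
    idx = range(center - half_width, center + half_width + 1)
    return ls_slope([i * dr for i in idx], [profile[i] for i in idx])

# Parabolic profile v(r) = 1 - r^2 on r in [0, 1]; the true gradient
# (shear rate) at r = 0.5 is -1.
dr = 0.01
profile = [1.0 - (i * dr) ** 2 for i in range(101)]
rate = shear_rate(profile, dr, center=50, half_width=3)
```

A wider window suppresses noise in measured profiles (as does averaging several profiles in time) but biases the gradient wherever the profile has higher-order curvature inside the window, which is the accuracy trade-off the optimization addresses.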
Multipath Parameter Estimation from OFDM Signals in Mobile Channels
Letzepis, Nick; Haley, David
2010-01-01
We study multipath parameter estimation from orthogonal frequency division multiplex signals transmitted over doubly dispersive mobile radio channels. We are interested in cases where the transmission is long enough to suffer time selectivity, but short enough such that the time variation can be accurately modeled as depending only on per-tap linear phase variations due to Doppler effects. We therefore concentrate on the estimation of the complex gain, delay and Doppler offset of each tap of the multipath channel impulse response. We show that the frequency domain channel coefficients for an entire packet can be expressed as the superposition of two-dimensional complex sinusoids. The maximum likelihood estimate requires solution of a multidimensional non-linear least squares problem, which is computationally infeasible in practice. We therefore propose a low complexity suboptimal solution based on iterative successive and parallel cancellation. First, initial delay/Doppler estimates are obtained via success...
Anisotropic parameter estimation using velocity variation with offset analysis
Energy Technology Data Exchange (ETDEWEB)
Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A. [Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesa 10, Bandung, 40132 (Indonesia)
2013-09-09
Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge of the anisotropy effect on seismic data is important in amplitude analysis, the stacking process and time-to-depth conversion. Due to this anisotropic effect, reflectors cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters, ε and δ, using two proposed methods. The first method is called velocity variation with offset (VVO) and is based on a simplification of the weak anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods, while the inversion-based method gives a better estimate of the ε value. This study shows that the estimation of anisotropic parameters relies on the accuracy of the normal moveout velocity, residual moveout and offset-to-angle transformation.
Cosmological parameter estimation using Particle Swarm Optimization (PSO)
Prasad, Jayanti
2011-01-01
Obtaining the set of cosmological parameters consistent with observational data is an important exercise in current cosmological research. It involves finding the global maximum of the likelihood function in the multi-dimensional parameter space. Currently, sampling-based methods, which are in general stochastic in nature, like Markov Chain Monte Carlo (MCMC), are commonly used for parameter estimation. The beauty of stochastic methods is that the computational cost grows at most linearly, rather than exponentially (as in grid-based approaches), with the dimensionality of the search space. MCMC methods sample the full joint probability distribution (posterior), from which one- and two-dimensional probability distributions, best-fit (average) values of the parameters, and error bars can be computed. In the present work we demonstrate the application of another stochastic method, named Particle Swarm Optimization (PSO), that is widely used in the fields of engineering and artificial intelligence, for cosmo...
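A minimal global-best PSO of the kind described, applied to a toy quadratic "likelihood surface". The coefficients (w, c1, c2) are common textbook defaults, not values from the paper:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO; bounds is a list of (lo, hi) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                     # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Toy stand-in for a chi-square surface in two parameters, optimum at (0.3, -1.2).
chi2 = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 1.2) ** 2
best, best_f = pso_minimize(chi2, [(-5, 5), (-5, 5)])
```

Like MCMC, each iteration costs one likelihood evaluation per particle, so the cost scales with swarm size and iteration count rather than exponentially with the dimension.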
Ehlert, Frederick J; Stein, Richard S L
We describe a method for estimating the affinities of ligands for active and inactive states of a G protein-coupled receptor (GPCR). Our protocol involves measuring agonist-induced signaling responses of a wild type GPCR and a constitutively active mutant of it under control conditions and after partial receptor inactivation or reduced receptor expression. Our subsequent analysis is based on the assumption that the activating mutation increases receptor isomerization into the active state without affecting the affinities of ligands for receptor states. A means of confirming this assumption is provided. Global nonlinear regression analysis yields estimates of 1) the active (Kact) and inactive (Kinact) receptor-state affinity constants, 2) the isomerization constant of the unoccupied receptor (Kq-obs), and 3) the sensitivity constant of the signaling pathway (KE-obs). The latter two parameters define the output response of the receptor, and hence, their ratio (Kq-obs/KE-obs) is a useful measure of system bias. If the cellular system is reasonably stable and the Kq-obs and KE-obs values of the signaling pathway are known, the Kact and Kinact values of additional agonists can be estimated in subsequent experiments on cells expressing the wild type receptor. We validated our method through computer simulation, an analytical proof, and analysis of previously published data. Our approach provides 1) a more meaningful analysis of structure-activity relationships, 2) a means of validating in silico docking experiments on active and inactive receptor structures, and 3) an absolute, in contrast to relative, measure of agonist bias.
Parameter estimation method for blurred cell images from fluorescence microscope
He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin
2016-10-01
Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in a low signal-to-noise ratio (SNR) and poor image quality, which affect the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images, based on the power-law properties of the power spectrum of cell images, is proposed. The circular Radon transform (CRT) is used to identify the zero mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm; the parameters are then optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increases the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual quality as well as quantitative gradient and PSNR.
Estimation of common cause failure parameters with periodic tests
Energy Technology Data Exchange (ETDEWEB)
Barros, Anne [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France)], E-mail: anne.barros@utt.fr; Grall, Antoine [Institut Charles Delaunay - Universite de technologie de Troyes - FRE CNRS 2848, 12, rue Marie Curie - BP 2060 -10010 Troyes cedex (France); Vasseur, Dominique [Electricite de France, EDF R and D - Industrial Risk Management Department 1, av. du General de Gaulle- 92141 Clamart (France)
2009-04-15
In the specific case of safety systems, CCF parameter estimators for standby components depend on the periodic test scheme. Classically, testing schemes are either staggered (alternation of tests on redundant components) or non-staggered (all components are tested at the same time). In reality, the periodic test schemes performed on safety components are more complex and combine staggered tests, when the plant is in operation, with non-staggered tests during maintenance and refueling outage periods of the installation. Moreover, the CCF parameter estimators described in the US literature are derived consistently with US Technical Specifications constraints that do not apply to the French Nuclear Power Plants for staggered tests on standby components. Given these issues, the evaluation of CCF parameters from the operating feedback data available within EDF requires the development of methodologies that integrate the specificities of the testing schemes. This paper formally proposes a solution for the estimation of CCF parameters given two distinct difficulties, related respectively to a mixed testing scheme and to consistency with EDF's specific practices, which induce systematic non-simultaneity of the observed failures in a staggered testing scheme.
Experimental design for parameter estimation of gene regulatory networks.
Directory of Open Access Journals (Sweden)
Bernhard Steiert
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines.
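The profile-likelihood idea used here (fix the parameter of interest, optimize the rest, and read identifiability off the profile's curvature) can be sketched for a toy exponential-decay model, where the nuisance amplitude has a closed-form conditional optimum. The model and names are illustrative, not the DREAM6 networks:

```python
import math

# Noiseless synthetic time course y = a * exp(-b t) with a = 2.0, b = 0.5.
ts = [0.2 * i for i in range(20)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]

def profile_rss(b):
    """Residual sum of squares profiled over the amplitude: for a fixed decay
    rate b the optimal amplitude is a linear least-squares solution."""
    es = [math.exp(-b * t) for t in ts]
    a_hat = sum(y * e for y, e in zip(ys, es)) / sum(e * e for e in es)
    return sum((y - a_hat * e) ** 2 for y, e in zip(ys, es))

# Scan the parameter of interest; the profile's minimum marks the estimate and
# its curvature around the minimum indicates identifiability.
grid = [k / 100 for k in range(30, 71)]        # b in [0.30, 0.70]
b_hat = min(grid, key=profile_rss)
```

A flat profile would flag a non-identifiable rate; a sharply curved one, as here, a well-determined estimate, which is the criterion the authors use to pick informative experiments.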
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
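The two-phase structure (learn the posterior spread in a warm-up phase, then rescale the proposals) can be sketched with a plain Metropolis sampler on a toy two-parameter Gaussian posterior. This is a caricature of the algorithm: the paper learns a full covariance for variance components and integrates out the random effects, neither of which is attempted here:

```python
import math, random

def log_target(x, y):
    """Toy posterior: independent normals, mean (1, -2), sd (1, 0.5)."""
    return -0.5 * ((x - 1.0) ** 2 + ((y + 2.0) / 0.5) ** 2)

def metropolis(n, step_x, step_y, start, rng):
    """Random-walk Metropolis with Gaussian proposals of fixed scale."""
    x, y = start
    lp = log_target(x, y)
    out = []
    for _ in range(n):
        xp, yp = x + rng.gauss(0, step_x), y + rng.gauss(0, step_y)
        lpp = log_target(xp, yp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, y, lp = xp, yp, lpp
        out.append((x, y))
    return out

rng = random.Random(42)
# Learning phase: crude fixed-step sampler, used only to measure the spread.
warm = metropolis(4000, 1.0, 1.0, (0.0, 0.0), rng)
mx = sum(p[0] for p in warm) / len(warm)
my = sum(p[1] for p in warm) / len(warm)
sx = math.sqrt(sum((p[0] - mx) ** 2 for p in warm) / len(warm))
sy = math.sqrt(sum((p[1] - my) ** 2 for p in warm) / len(warm))
# Sampling phase: proposal scales set from the learned spread, using the
# classic 2.38/sqrt(d) rule for a d-dimensional target.
scale = 2.38 / math.sqrt(2.0)
chain = metropolis(20000, scale * sx, scale * sy, (mx, my), rng)
est_x = sum(p[0] for p in chain) / len(chain)
est_y = sum(p[1] for p in chain) / len(chain)
```

The payoff of the second phase is better mixing: proposals matched to the posterior's scale waste fewer steps, which is the mechanism behind the roughly twofold speed-up reported.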
Parameter estimation for stiff equations of biosystems using radial basis function networks
Directory of Open Access Journals (Sweden)
Sugimoto Masahiro
2006-04-01
Background: The modeling of dynamic systems requires estimating kinetic parameters from experimentally measured time-courses. Conventional global optimization methods used for parameter estimation, e.g. genetic algorithms (GA), consume enormous computational time because they require iterative numerical integrations of the differential equations. When the target model is stiff, the computational time for reaching a solution increases further. Results: In an attempt to solve this problem, we explored a learning technique that uses radial basis function networks (RBFN) to achieve parameter estimation for biochemical models. RBFN reduce the number of numerical integrations by replacing derivatives with slopes derived from the distribution of searching points. To introduce a slight search bias, we implemented additional data selection using a GA that searches data-sparse areas at low computational cost. In addition, we adopted a logarithmic transformation that smooths the fitness surface to obtain a solution simply. We conducted numerical experiments to validate our methods and compared the results with those obtained by GA. We found that the calculation time decreased by more than 50% and the convergence rate increased from 60% to 90%. Conclusion: In this work, our RBFN technique was effective for parameter optimization of stiff biochemical models.
Inter-system biases estimation in multi-GNSS relative positioning with GPS and Galileo
Deprez, Cecile; Warnant, Rene
2016-04-01
The recent increase in the number of Global Navigation Satellite Systems (GNSS) opens new perspectives in the field of high precision positioning. In particular, the European Galileo program made major progress in 2015 with the launch of 6 satellites belonging to the new Full Operational Capability (FOC) generation. Together with the ongoing GPS modernization, many more frequencies and satellites are now available. Therefore, multi-GNSS relative positioning based on GPS and Galileo overlapping frequencies should provide better accuracy and reliability in position estimation. However, the differences between satellite systems induce inter-system biases (ISBs) in the multi-GNSS observation equations. Once these biases are estimated and removed from the model, a solution involving a unique pivot satellite for the two constellations can be obtained. With such an approach, the addition of even a single Galileo satellite strengthens the GPS-only model. The combined use of L1 and L5 from GPS with E1 and E5a from Galileo in zero baseline double differences (ZB DD) based on a unique pivot satellite is employed to resolve the ISBs. This model removes all the satellite- and receiver-dependent error sources by differencing, and the zero baseline configuration allows the elimination of atmospheric and multipath effects. An analysis of the long-term stability of the ISBs is conducted on various pairs of receivers over large time spans. The possible influence of temperature variations inside the receivers on ISB values is also investigated. Our study is based on 5 multi-GNSS receivers (2 Septentrio PolaRx4, 1 Septentrio PolaRxS and 2 Trimble NetR9) installed on the roof of our building in Liege. The estimated ISBs are then used as corrections in the multi-GNSS observation model, and the resulting accuracy of multi-GNSS positioning is compared to GPS and Galileo standalone solutions.
Terrain mechanical parameters online estimation for lunar rovers
Liu, Bing; Cui, Pingyuan; Ju, Hehua
2007-11-01
This paper presents a new method for estimating terrain mechanical parameters for a wheeled lunar rover. First, after deducing the detailed distribution expressions of the normal stress and shear stress at the wheel-terrain interface, the force/torque balance equations of the drive wheel for computing terrain mechanical parameters are derived by analyzing the rigid drive wheel of a lunar rover moving at uniform speed over deformable terrain. Then a two-point Gauss-Legendre numerical integration is used to simplify the balance equations; after simplification and rearrangement, a solution model composed of three non-linear equations is derived. Finally, Newton's iterative method and the steepest descent method are combined to solve the non-linear equations, and the outputs of on-board virtual sensors are used to compute the key terrain mechanical parameters, i.e., the internal friction angle and the pressure-sinkage parameters. Simulation results show correctness under strong noise disturbance and effectiveness with low computational complexity, which allows a lunar rover to estimate terrain mechanical parameters online.
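The combination of Newton iteration with a steepest-descent fallback can be sketched on a hypothetical two-equation system (the actual rover model has three equations and wheel-soil terms not reproduced here):

```python
def residuals(p):
    """Toy stand-in for the wheel force/torque balance: two non-linear
    equations in two unknowns (a hypothetical system, not the rover model)."""
    x, y = p
    return [x * x + y * y - 4.0, x * y - 1.0]

def solve(p, iters=100, h=1e-7):
    for _ in range(iters):
        f = residuals(p)
        norm = sum(v * v for v in f)
        if norm < 1e-24:
            break
        # Numerical Jacobian by forward differences; J[j][i] = dF_i/dp_j.
        J = []
        for j in range(2):
            q = p[:]
            q[j] += h
            fq = residuals(q)
            J.append([(fq[i] - f[i]) / h for i in range(2)])
        a, c = J[0][0], J[0][1]
        b, d = J[1][0], J[1][1]
        det = a * d - b * c
        if abs(det) > 1e-12:
            # Newton step: solve [[a, b], [c, d]] dp = -F by Cramer's rule.
            dx = (-f[0] * d + f[1] * b) / det
            dy = (-f[1] * a + f[0] * c) / det
            cand = [p[0] + dx, p[1] + dy]
            if sum(v * v for v in residuals(cand)) < norm:
                p = cand
                continue
        # Fallback: steepest descent on 0.5*||F||^2 (gradient = J^T F),
        # with simple backtracking, when Newton fails to reduce the residual.
        g = [a * f[0] + c * f[1], b * f[0] + d * f[1]]
        t = 1e-2
        while t > 1e-12:
            cand = [p[0] - t * g[0], p[1] - t * g[1]]
            if sum(v * v for v in residuals(cand)) < norm:
                p = cand
                break
            t *= 0.5
    return p

sol = solve([2.0, 0.5])
```

Newton supplies fast local convergence; the descent fallback keeps the iteration moving when the Jacobian is near-singular or a full Newton step overshoots, which is the usual rationale for hybridizing the two.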
Iterative procedure for camera parameters estimation using extrinsic matrix decomposition
Goshin, Yegor V.; Fursov, Vladimir A.
2016-03-01
This paper addresses the problem of 3D scene reconstruction in cases when the extrinsic parameters (rotation and translation) of the camera are unknown. This problem is both important and urgent because the accuracy of the camera parameters significantly influences the resulting 3D model. A common approach is to determine the fundamental matrix from corresponding points on two views of a scene and then to use singular value decomposition for camera projection matrix estimation. However, this common approach is very sensitive to fundamental matrix errors. In this paper we propose a novel approach in which camera parameters are determined directly from the equations of the projective transformation by using corresponding points on the views. The proposed decomposition allows us to use an iterative procedure for determining the parameters of the camera. This procedure is implemented in two steps: the translation determination and the rotation determination. The experimental results of the camera parameters estimation and 3D scene reconstruction demonstrate the reliability of the proposed approach.
Parameter Estimation as a Problem in Statistical Thermodynamics
Earle, Keith A.; Schneider, David J.
2011-01-01
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm. PMID:21927520
J-A Hysteresis Model Parameters Estimation using GA
Directory of Open Access Journals (Sweden)
Bogomir Zidaric
2005-01-01
This paper presents Jiles and Atherton (J-A) hysteresis model parameter estimation for a soft magnetic composite (SMC) material. The calculation of the J-A hysteresis model parameters is based on experimental data and genetic algorithms (GA). Genetic algorithms operate in a given area of possible solutions, and finding the best solution of a problem in a wide area of possible solutions is uncertain. A new approach to the use of genetic algorithms is proposed to overcome this uncertainty. The basis of this approach is a genetic algorithm built into another genetic algorithm.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
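For a one-parameter model m_i = A·s_i, the contrast between Poisson maximum likelihood and minimum chi-squared can be made concrete. The shape and counts below are made-up numbers; the inequality A_chi2 ≤ A_closed follows from the Cauchy-Schwarz inequality and mirrors the low-count bias that motivates the likelihood approach:

```python
import math

# Known spectral shape s_i with one free amplitude A; observed Poisson counts.
shape = [1.0, 2.0, 3.0, 4.0, 5.0]
counts = [2, 3, 7, 9, 12]

def loglike(A):
    """Poisson log-likelihood for the model m_i = A * s_i (dropping log n_i!)."""
    return sum(n * math.log(A * s) - A * s for n, s in zip(counts, shape))

# The log-likelihood is concave in A (second derivative -sum(n)/A^2 < 0),
# so ternary search finds its maximum.
lo, hi = 1e-3, 100.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if loglike(m1) < loglike(m2):
        lo = m1
    else:
        hi = m2
A_ml = 0.5 * (lo + hi)

# Closed forms: the Poisson MLE, and the Neyman minimum chi-square estimate
# (weights sigma_i^2 = n_i), which never exceeds the MLE.
A_closed = sum(counts) / sum(shape)
A_chi2 = sum(shape) / sum(s * s / n for s, n in zip(shape, counts))
```

With these numbers the MLE is 33/15 = 2.2 while the chi-square estimate is lower, illustrating why the chi-squared approximation degrades for data of poorer quality (low counts) while the likelihood treatment does not.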
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-02-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
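A generic Simulated Annealing loop of the kind described, here minimizing a standard multimodal test surface rather than a real ETAS negative log-likelihood (the cooling schedule and step sizes are illustrative choices):

```python
import math, random

def himmelblau(x, y):
    """Multimodal test surface (four global minima, all with value 0),
    standing in for a negative log-likelihood with local optima."""
    return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2

def anneal(f, x, y, t0=100.0, cooling=0.999, iters=20000, seed=1):
    rng = random.Random(seed)
    fx = f(x, y)
    best = (x, y, fx)
    t = t0
    for _ in range(iters):
        # Proposal spread shrinks with temperature, down to a small floor.
        step = max(0.05, math.sqrt(t) * 0.1)
        xn, yn = x + rng.gauss(0, step), y + rng.gauss(0, step)
        fn = f(xn, yn)
        # Accept downhill moves always, uphill moves with Boltzmann probability;
        # the uphill acceptances are what allow escape from local minima.
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, y, fx = xn, yn, fn
            if fx < best[2]:
                best = (x, y, fx)
        t *= cooling
    return best

bx, by, bval = anneal(himmelblau, 0.0, 0.0)
```

Slower cooling raises the chance of reaching the global optimum at the cost of more function evaluations, which for ETAS means more likelihood evaluations per catalog.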
Estimation of drying parameters in rotary dryers using differential evolution
Energy Technology Data Exchange (ETDEWEB)
Lobato, F S; Steffen Jr, V; Barrozo, M A S; Arruda, E B, E-mail: vsteffen@mecanica.ufu.br, E-mail: masbarrozo@ufu.br
2008-11-01
Inverse problems arise from the necessity of obtaining parameters of theoretical models to simulate the behavior of the system for different operating conditions. Several heuristics that mimic different phenomena found in nature have been proposed for the solution of this kind of problem. In this work, the Differential Evolution Technique is used for the estimation of drying parameters in realistic rotary dryers, which is formulated as an optimization problem by using experimental data. Test case results demonstrate both the feasibility and the effectiveness of the proposed methodology.
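A DE/rand/1/bin sketch fitting two parameters of a thin-layer drying curve to synthetic data. The Page model and all constants here are illustrative stand-ins, not the paper's rotary-dryer model:

```python
import math, random

# Hypothetical thin-layer drying curve MR(t) = exp(-k * t^n) (Page model);
# the "experimental" data are synthetic, generated with k = 0.05, n = 1.3.
ts = [2.0 * i for i in range(1, 16)]
data = [math.exp(-0.05 * t ** 1.3) for t in ts]

def sse(p):
    k, n = p
    return sum((math.exp(-k * t ** n) - d) ** 2 for t, d in zip(ts, data))

def de_minimize(f, bounds, np_=20, gens=300, F=0.8, CR=0.9, seed=3):
    """DE/rand/1/bin with clipping to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)       # guarantees one mutated coordinate
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, bounds[j][0]), bounds[j][1])
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    ibest = min(range(np_), key=lambda i: fit[i])
    return pop[ibest], fit[ibest]

(k_hat, n_hat), err = de_minimize(sse, [(0.001, 0.5), (0.5, 2.5)])
```

The inverse problem is thus posed exactly as the abstract describes: a sum-of-squares mismatch between model and experimental drying data, minimized by a population-based heuristic that needs no gradients.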
Directory of Open Access Journals (Sweden)
Ingo W Nader
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT), but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.
Propagation channel characterization, parameter estimation, and modeling for wireless communications
Yin, Xuefeng
2016-01-01
Thoroughly covering channel characteristics and parameters, this book provides the knowledge needed to design various wireless systems, such as cellular communication systems, RFID and ad hoc wireless communication systems. It gives a detailed introduction to aspects of channels before presenting the novel estimation and modelling techniques which can be used to achieve accurate models. To systematically guide readers through the topic, the book is organised in three distinct parts. The first part covers the fundamentals of the characterization of propagation channels, including the conventional single-input single-output (SISO) propagation channel characterization as well as its extension to multiple-input multiple-output (MIMO) cases. Part two focuses on channel measurements and channel data post-processing. Wideband channel measurements are introduced, including the equipment, technology and advantages and disadvantages of different data acquisition schemes. The channel parameter estimation methods are ...
Improving gravitational-wave parameter estimation using Gaussian process regression
Moore, Christopher J; Chua, Alvin J K; Gair, Jonathan R
2015-01-01
Folding uncertainty in theoretical models into Bayesian parameter estimation is necessary in order to make reliable inferences. A general means of achieving this is by marginalising over model uncertainty using a prior distribution constructed using Gaussian process regression (GPR). Here, we apply this technique to (simulated) gravitational-wave signals from binary black holes that could be observed using advanced-era gravitational-wave detectors. Unless properly accounted for, uncertainty in the gravitational-wave templates could be the dominant source of error in studies of these systems. We explain our approach in detail and provide proofs of various features of the method, including the limiting behaviour for high signal-to-noise, where systematic model uncertainties dominate over noise errors. We find that the marginalised likelihood constructed via GPR offers a significant improvement in parameter estimation over the standard, uncorrected likelihood. We also examine the dependence of the method on the ...
Bayesian parameter estimation for chiral effective field theory
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
Real-Time Parameter Estimation Using Output Error
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
PARAMETER ESTIMATION OF THE HYBRID CENSORED LOMAX DISTRIBUTION
Directory of Open Access Journals (Sweden)
Samir Kamel Ashour
2010-12-01
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time to event analysis. One of the difficulties which arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. The two most common censoring schemes used in life testing experiments are Type-I and Type-II censoring schemes. The hybrid censoring scheme is a mixture of the Type-I and Type-II censoring schemes. In this paper we consider the estimation of parameters of the Lomax distribution based on hybrid censored data. The parameters are estimated by the maximum likelihood and Bayesian methods. The Fisher information matrix has been obtained and it can be used for constructing asymptotic confidence intervals.
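For the simpler Type-I censored case with a known scale parameter, the maximum likelihood estimator of the Lomax shape has a closed form, which makes a compact illustration (the hybrid scheme of the paper is more general; all names and numbers below are ours). An observed lifetime x contributes -(alpha+1)*log(1+x/lam) to the log-likelihood, while a unit censored at T contributes -alpha*log(1+T/lam), giving alpha_hat = d / (sum of observed log terms + c*log(1+T/lam)) with d observed and c censored units:

```python
import math
import random

# Illustrative sketch: MLE of the Lomax shape alpha under Type-I censoring at
# time T, with the scale lam assumed known.
def lomax_sample(alpha, lam, rng):
    """Inverse-CDF draw: X = lam * ((1-U)^(-1/alpha) - 1)."""
    u = rng.random()
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

def alpha_mle_censored(data, lam, T):
    """Closed-form MLE of alpha from Type-I censored data (values capped at T)."""
    observed = [x for x in data if x < T]
    n_cens = len(data) - len(observed)
    denom = (sum(math.log(1 + x / lam) for x in observed)
             + n_cens * math.log(1 + T / lam))
    return len(observed) / denom

rng = random.Random(0)
true_alpha, lam, T = 2.0, 1.0, 3.0
raw = [lomax_sample(true_alpha, lam, rng) for _ in range(20000)]
data = [min(x, T) for x in raw]          # censor every lifetime at T
alpha_hat = alpha_mle_censored(data, lam, T)
```

With both parameters unknown, or under hybrid censoring, the likelihood must be maximized numerically, which is where the Fisher information matrix mentioned above becomes useful for confidence intervals.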
Probabilistic estimation of the constitutive parameters of polymers
Directory of Open Access Journals (Sweden)
Siviour C.R.
2012-08-01
The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
CosmoSIS: A System for MC Parameter Estimation
Energy Technology Data Exchange (ETDEWEB)
Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
Xi, Zhenxiang; Liu, Liang; Davis, Charles C
2015-11-01
The development and application of coalescent methods are undergoing rapid changes. One little explored area that bears on the application of gene-tree-based coalescent methods to species tree estimation is gene informativeness. Here, we investigate the accuracy of these coalescent methods when genes have minimal phylogenetic information, including the implementation of the multilocus bootstrap approach. Using simulated DNA sequences, we demonstrate that genes with minimal phylogenetic information can produce unreliable gene trees (i.e., high error in gene tree estimation), which may in turn reduce the accuracy of species tree estimation using gene-tree-based coalescent methods. We demonstrate that this problem can be alleviated by sampling more genes, as is commonly done in large-scale phylogenomic analyses. This applies even when these genes are minimally informative. If gene tree estimation is biased, however, gene-tree-based coalescent analyses will produce inconsistent results, which cannot be remedied by increasing the number of genes. In this case, it is not the gene-tree-based coalescent methods that are flawed, but rather the input data (i.e., estimated gene trees). Along these lines, the commonly used program PhyML has a tendency to infer one particular bifurcating topology even when the true relationship is best represented as a polytomy. We additionally corroborate these findings by analyzing the 183-locus mammal data set assembled by McCormack et al. (2012) using ultra-conserved elements (UCEs) and flanking DNA. Lastly, we demonstrate that when employing the multilocus bootstrap approach on this 183-locus data set, there is no strong conflict between species trees estimated from concatenation and gene-tree-based coalescent analyses, as has been previously suggested by Gatesy and Springer (2014).
Malakar, Nabin K.; Lary, D. L.; Moore, A.; Gencaga, D.; Roscoe, B.; Albayrak, Arif; Petrenko, Maksym; Wei, Jennifer
2012-01-01
Air quality information is increasingly becoming a public health concern, since some of the aerosol particles pose harmful effects to people's health. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. The comparison between the AOD measured from the ground-based Aerosol Robotic Network (AERONET) system and the satellite MODIS instruments at 550 nm shows that there is a bias between the two data products. We performed a comprehensive analysis exploring possible factors which may be contributing to the inter-instrumental bias between MODIS and AERONET. The analysis used several measured variables, including the MODIS AOD, as input in order to train a neural network in regression mode to predict the AERONET AOD values. This not only allowed us to obtain an estimate, but also allowed us to infer the optimal sets of variables that played an important role in the prediction. In addition, we applied machine learning to infer the global abundance of ground-level PM2.5 from the AOD data and other ancillary satellite and meteorology products. This research is part of our goal to provide air quality information, which can also be useful for global epidemiology studies.
Estimating stellar atmospheric parameters based on Lasso features
Liu, Chuan-Xing; Zhang, Pei-Ai; Lu, Yu
2014-04-01
With the rapid development of large scale sky surveys like the Sloan Digital Sky Survey (SDSS), GAIA and LAMOST (Guoshoujing telescope), stellar spectra can be obtained on an ever-increasing scale. Therefore, it is necessary to estimate stellar atmospheric parameters such as Teff, log g and [Fe/H] automatically to achieve the scientific goals and make full use of the potential value of these observations. Feature selection plays a key role in the automatic measurement of atmospheric parameters. We propose to use the least absolute shrinkage and selection operator (Lasso) algorithm to select features from stellar spectra. Feature selection can reduce redundancy in spectra, alleviate the influence of noise, improve calculation speed and enhance the robustness of the estimation system. Based on the extracted features, stellar atmospheric parameters are estimated by the support vector regression model. Three typical schemes are evaluated on spectral data from both the ELODIE library and SDSS. Experimental results demonstrate the potential of the proposed scheme. In addition, results show that our method is stable when applied to different spectra.
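The selection step can be sketched compactly. The toy below is not the ELODIE/SDSS pipeline: it applies Lasso via ISTA (iterative soft-thresholding) to random data in which only two of fifty "pixels" carry signal, and recovers their indices; the paper then feeds selected features into support vector regression, which we omit here.

```python
import numpy as np

# Lasso feature selection by ISTA (illustrative toy data, not stellar spectra).
def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||y - Xw||^2 / n + lam*||w||_1 by proximal gradient."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
true_w = np.zeros(50)
true_w[[3, 17]] = [2.0, -1.5]                     # only two informative features
y = X @ true_w + 0.05 * rng.standard_normal(200)

w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 0.1)        # indices of retained features
```

The L1 penalty zeroes out uninformative coefficients, which is exactly the redundancy-reduction property the abstract appeals to.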
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
Directory of Open Access Journals (Sweden)
Akatsuki Kimura
2015-03-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of the likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
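The gradient approach described above reduces, in its simplest form, to a few lines. The toy below (all numbers invented) minimizes the SSE of a one-parameter model y = a*x by gradient descent with a fixed step size:

```python
# Gradient descent on the SSE of y = a*x (illustrative toy data).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]          # roughly y = 2x

def sse_gradient(a):
    """d/da of sum_i (ys[i] - a*xs[i])^2."""
    return sum(-2.0 * x * (y - a * x) for x, y in zip(xs, ys))

a = 0.0                                  # initial guess
for _ in range(200):
    a -= 0.01 * sse_gradient(a)          # fixed-step gradient descent
```

For this convex one-parameter SSE the iteration converges to the least-squares solution; the stochastic and sampling approaches mentioned above matter when the SSE surface has many local minima.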
Estimation of multiexponential fluorescence decay parameters using compressive sensing.
Yang, Sejung; Lee, Joohyun; Lee, Youmin; Lee, Minyung; Lee, Byung-Uk
2015-09-01
Fluorescence lifetime imaging microscopy (FLIM) is a microscopic imaging technique to present an image of fluorophore lifetimes. It circumvents the problems of typical imaging methods such as intensity attenuation from depth since a lifetime is independent of the excitation intensity or fluorophore concentration. The lifetime is estimated from the time sequence of photon counts observed with signal-dependent noise, which has a Poisson distribution. Conventional methods usually estimate single or biexponential decay parameters. However, a lifetime component has a distribution or width, because the lifetime depends on macromolecular conformation or inhomogeneity. We present a novel algorithm based on a sparse representation which can estimate the distribution of lifetime. We verify the enhanced performance through simulations and experiments.
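The dictionary idea behind such sparse-representation methods can be illustrated simply. The sketch below (all numbers invented) expresses a noiseless biexponential decay in a small dictionary of candidate lifetimes and recovers the amplitudes; with this small, overdetermined dictionary ordinary least squares suffices, whereas the paper's compressive-sensing solver is needed when the lifetime grid is large and the amplitude vector underdetermined.

```python
import numpy as np

# Multiexponential decay in a lifetime dictionary (illustrative toy).
t = np.linspace(0.0, 10.0, 200)
taus = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # candidate lifetimes
D = np.exp(-t[:, None] / taus[None, :])          # one decay curve per column

true_amp = np.array([0.0, 3.0, 0.0, 1.0, 0.0])   # biexponential ground truth
y = D @ true_amp                                 # noiseless synthetic decay

amp, *_ = np.linalg.lstsq(D, y, rcond=None)      # recover component amplitudes
```

With Poisson photon-counting noise and a dense lifetime grid, the columns of `D` become highly correlated, and a sparsity-promoting (L1 or nonnegative) solver is what keeps the recovered lifetime distribution stable.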
Learn-As-You-Go Acceleration of Cosmological Parameter Estimates
Aslanyan, Grigor; Price, Layne C
2015-01-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of $\\Lambda$CDM posterior probabilities. The computation is significantly accelerated wit...
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Parameter estimation in a spatial unit root autoregressive model
Baran, Sándor
2011-01-01
Spatial autoregressive model $X_{k,\\ell}=\\alpha X_{k-1,\\ell}+\\beta X_{k,\\ell-1}+\\gamma X_{k-1,\\ell-1}+\\epsilon_{k,\\ell}$ is investigated in the unit root case, that is when the parameters are on the boundary of the domain of stability that forms a tetrahedron with vertices $(1,1,-1), \ (1,-1,1),\ (-1,1,1)$ and $(-1,-1,-1)$. It is shown that the limiting distribution of the least squares estimator of the parameters is normal and the rate of convergence is $n$ when the parameters lie on the faces or edges of the tetrahedron, while at the vertices the rate is $n^{3/2}$.
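The least squares estimator in question is an ordinary regression of each grid value on its three neighbours. The sketch below simulates the model with stable (interior) parameters of our choosing, not the unit-root boundary case the paper analyzes, and recovers them:

```python
import numpy as np

# Simulate the spatial AR field and recover (alpha, beta, gamma) by least
# squares (stable parameters chosen inside the stability tetrahedron).
rng = np.random.default_rng(1)
alpha, beta, gamma = 0.4, 0.3, -0.2
n = 120
X = np.zeros((n, n))
for k in range(1, n):
    for l in range(1, n):
        X[k, l] = (alpha * X[k - 1, l] + beta * X[k, l - 1]
                   + gamma * X[k - 1, l - 1] + rng.standard_normal())

# Regress X[k, l] on its three neighbours X[k-1, l], X[k, l-1], X[k-1, l-1].
A = np.column_stack([X[:-1, 1:].ravel(), X[1:, :-1].ravel(), X[:-1, :-1].ravel()])
b = X[1:, 1:].ravel()
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In this interior case the estimator is root-n consistent in the usual sense; the paper's contribution concerns the faster rates and limiting distributions on the boundary of the tetrahedron.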
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exp...
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Directory of Open Access Journals (Sweden)
Banga Julio R
2006-11-01
Abstract Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously
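The global-plus-local structure shared by such metaheuristics can be caricatured in a few lines. This is emphatically not scatter search: it is a naive multistart scheme (random global exploration feeding a crude local refinement) applied to a one-parameter decay model with invented data, shown only to convey the division of labour:

```python
import math
import random

# Multistart global search caricature: calibrate the rate k of x(t) = exp(-k t).
ts = [0.5 * i for i in range(10)]
k_true = 1.3
data = [math.exp(-k_true * t) for t in ts]       # synthetic noiseless data

def cost(k):
    return sum((math.exp(-k * t) - d) ** 2 for t, d in zip(ts, data))

def local_search(k, step=0.1, iters=200):
    """Crude local refinement: halve the step whenever no move improves."""
    for _ in range(iters):
        moved = False
        for cand in (k - step, k + step):
            if cost(cand) < cost(k):
                k, moved = cand, True
        if not moved:
            step *= 0.5
    return k

rng = random.Random(0)
starts = [rng.uniform(0.0, 5.0) for _ in range(10)]      # global exploration
k_hat = min((local_search(s) for s in starts), key=cost)
```

Real scatter search replaces the random restarts with a deliberately diversified and recombined reference set, which is what buys its efficiency on ill-conditioned, multimodal dynamic models.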
Quantifying lost information due to covariance matrix estimation in parameter inference
Sellentin, Elena
2016-01-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing a Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit. We apply our results to DES Science Verification weak lensing data, detecting a 10% loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey, with about 10 nuisance parameters we find that 2900 simulations are sufficient to limit the systematically lost information to 1%, with an additional unc...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors
Directory of Open Access Journals (Sweden)
Manoela Ojeda
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.
PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION
Directory of Open Access Journals (Sweden)
S. Kalaivani
2012-07-01
In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization has been proposed. Pneumatic control valves are widely used in the process industry. The control valve contains non-linearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and it is the most severe problem in the control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to doing stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proven effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, a Stenman model with known nonlinear structure and unknown parameters can be estimated.
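To make the estimation target concrete, the sketch below uses the commonly quoted one-parameter Stenman-type stiction model (the valve position holds until the controller output moves more than a band d away; our paraphrase, hedged) with invented data, and estimates d by a plain grid search over the simulate-and-compare misfit that ACO would otherwise minimize:

```python
# One-parameter stiction model and grid-search identification (illustrative).
def stenman(u_seq, d, x0=0.0):
    """Valve position x sticks until |u - x| exceeds the stiction band d."""
    x, out = x0, []
    for u in u_seq:
        if abs(u - x) > d:      # enough force to overcome stiction
            x = u
        out.append(x)
    return out

# Synthetic ramp input and valve response generated with a known band.
u_seq = [0.05 * i for i in range(40)]
d_true = 0.3
observed = stenman(u_seq, d_true)

def misfit(d):
    return sum((m - o) ** 2 for m, o in zip(stenman(u_seq, d), observed))

d_hat = min((0.01 * k for k in range(100)), key=misfit)
```

Note that the misfit is piecewise constant in d, so an interval of band values can fit the data equally well; this identifiability caveat is one reason stochastic searches such as ACO are popular for this problem.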
Matched-filtering and parameter estimation of ringdown waveforms
Berti, Emanuele; Cardoso, Vitor; Cavaglia, Marco
2007-01-01
Using recent results from numerical relativity simulations of non-spinning binary black hole mergers we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to about 1000 solar masses out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (> 10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than 10^6 templates would be needed for a single-stage multi-mode search. Therefore, we recommend a "two stage" search to save on computational costs: single-mode templates can be used for detection, but multi-mode templates or Prony methods should be use...
Parameter Estimation of Induction Motors Using Water Cycle Optimization
Directory of Open Access Journals (Sweden)
M. Yazdani-Asrami
2013-12-01
This paper presents the application of the recently introduced water cycle algorithm (WCA) to optimize the parameters of exact and approximate induction motor models from the nameplate data. Considering that induction motors are widely used in industrial applications, these parameters have a significant effect on the accuracy and efficiency of the motors and, ultimately, the overall system performance. Therefore, it is essential to develop algorithms for the parameter estimation of the induction motor. The fundamental concepts and ideas which underlie the proposed method are inspired from nature and based on the observation of the water cycle process and how rivers and streams flow to the sea in the real world. The objective function is defined as the minimization of the real values of the relative error between the measured and estimated torques of the machine at different slip points. The proposed WCA approach has been applied to two different sample motors. Results of the proposed method have been compared with other previously applied metaheuristic methods on the problem, which show the feasibility and the fast convergence of the proposed approach.
Kendall, W L; Pollock, K H; Brownie, C
1995-03-01
The Jolly-Seber method has been the traditional approach to the estimation of demographic parameters in long-term capture-recapture studies of wildlife and fish species. This method involves restrictive assumptions about capture probabilities that can lead to biased estimates, especially of population size and recruitment. Pollock (1982, Journal of Wildlife Management 46, 752-757) proposed a sampling scheme in which a series of closely spaced samples were separated by longer intervals such as a year. For this "robust design," Pollock suggested a flexible ad hoc approach that combines the Jolly-Seber estimators with closed population estimators, to reduce bias caused by unequal catchability, and to provide estimates for parameters that are unidentifiable by the Jolly-Seber method alone. In this paper we provide a formal modelling framework for analysis of data obtained using the robust design. We develop likelihood functions for the complete data structure under a variety of models and examine the relationship among the models. We compute maximum likelihood estimates for the parameters by applying a conditional argument, and compare their performance against those of ad hoc and Jolly-Seber approaches using simulation.
Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation
2015-01-01
This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measuring the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values dep...
Estimating parameters in stochastic systems: A variational Bayesian approach
Vrettas, Michail D.; Cornford, Dan; Opper, Manfred
2011-11-01
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz ’63 (3D model). As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, dual unscented Kalman filter and full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
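The analytically tractable benchmark mentioned above makes a compact illustration. For the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, the exact discretization is an AR(1) process with coefficient exp(-theta*dt), so the drift parameter can be estimated from the regression coefficient of consecutive samples (all numbers below are ours):

```python
import math
import random

# Exact discrete-time simulation of an OU process and drift estimation from
# the AR(1) regression coefficient (illustrative sketch, invented parameters).
theta, sigma, dt, n = 1.5, 0.5, 0.01, 200000
a = math.exp(-theta * dt)                           # exact AR(1) coefficient
s = sigma * math.sqrt((1 - a * a) / (2 * theta))    # exact innovation std dev
rng = random.Random(42)
x = [0.0]
for _ in range(n):
    x.append(a * x[-1] + s * rng.gauss(0.0, 1.0))

# Estimate a by least squares regression of x[t+1] on x[t], then invert.
num = sum(x[i] * x[i + 1] for i in range(n))
den = sum(xi * xi for xi in x[:-1])
theta_hat = -math.log(num / den) / dt
```

This closed-form route exists only because the OU transition density is Gaussian and known; for the double well or Lorenz systems the likelihood is intractable, which is what motivates the variational approximation of the paper.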
Optimization-based particle filter for state and parameter estimation
Institute of Scientific and Technical Information of China (English)
Li Fu; Qi Fei; Shi Guangming; Zhang Li
2009-01-01
In recent years, the theory of particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from the distribution. This algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF) both in efficiency and in estimation precision.
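The baseline against which such proposals are measured is the bootstrap filter, which uses the transition prior itself as the importance density. A minimal 1-D sketch (generic bootstrap filter with an invented random-walk model, not the optimization-based proposal of the paper):

```python
import math
import random

# Bootstrap particle filter for a 1-D random walk observed in Gaussian noise.
rng = random.Random(7)
q, r, n_part, steps = 0.1, 0.1, 500, 50

# Simulate ground truth and noisy observations.
truth, obs = [0.0], []
for _ in range(steps):
    truth.append(truth[-1] + rng.gauss(0.0, q))
    obs.append(truth[-1] + rng.gauss(0.0, r))

def gauss_like(y, x):
    """Unnormalized Gaussian observation likelihood."""
    return math.exp(-0.5 * ((y - x) / r) ** 2)

particles = [0.0] * n_part
estimates = []
for y in obs:
    # Propagate with the transition prior (the bootstrap importance density).
    particles = [p + rng.gauss(0.0, q) for p in particles]
    weights = [gauss_like(y, p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # Multinomial resampling to combat weight degeneracy.
    particles = rng.choices(particles, weights=weights, k=n_part)

rmse = (sum((e - t) ** 2 for e, t in zip(estimates, truth[1:])) / steps) ** 0.5
```

An optimization-based proposal, as in the paper, would move the propagated particles toward the current observation before weighting, reducing weight degeneracy relative to this prior-only proposal.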
Area-to-point parameter estimation with geographically weighted regression
Murakami, Daisuke; Tsutsumi, Morito
2015-07-01
The modifiable areal unit problem (MAUP) arises when aggregated units of data influence the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot be considered an efficient countermeasure against the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the method proposed herein is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical study demonstrates that the method provides intuitively consistent estimates.
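For context, plain point-level GWR fits a separate kernel-weighted regression at each target location; the ATP variant of the paper adds an aggregation step on top of this. A 1-D sketch with invented data and a spatially varying slope:

```python
import numpy as np

# Point-level GWR sketch (not the ATP variant): a local slope is fit at each
# target location by Gaussian-kernel-weighted least squares.
rng = np.random.default_rng(3)
locs = rng.uniform(0.0, 1.0, 200)                # 1-D "map" coordinates
slope = 1.0 + locs                               # spatially varying coefficient
x = rng.standard_normal(200)
y = slope * x + 0.01 * rng.standard_normal(200)

def gwr_slope(s, bandwidth=0.1):
    """Local slope at location s via weighted least squares (no intercept)."""
    w = np.exp(-0.5 * ((locs - s) / bandwidth) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

b_left, b_right = gwr_slope(0.1), gwr_slope(0.9)
```

The bandwidth controls the trade-off between locality and stability; the MAUP issue enters when `x` and `y` are only observed as areal aggregates, which is the case ATP GWR is designed for.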
Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.
Estimating seismic demand parameters using the endurance time method
Institute of Scientific and Technical Information of China (English)
Ramin MADARSHAHIAN; Homayoon ESTEKANCHI; Akbar MAHVASHMOHAMMADI
2011-01-01
The endurance time (ET) method is a time history based dynamic analysis in which structures are subjected to gradually intensifying excitations and their performance is judged from their responses at various excitation levels. Using this method, the computational effort required for estimating probable seismic demand parameters can be reduced by an order of magnitude. Calculation of the maximum displacement or target displacement is a basic requirement for performance-based structural design. The purpose of this paper is to compare the results of the nonlinear ET method with the nonlinear static pushover (NSP) method of FEMA 356 by evaluating the performance and target displacements of steel frames. This study leads to a deeper insight into the capabilities and limitations of the ET method. The results are further compared with those of the standard nonlinear response history analysis. We conclude that results from the ET analysis are in proper agreement with those from standard procedures.
Energy parameter estimation in solar powered wireless sensor networks
Mousa, Mustafa
2014-02-24
The operation of solar powered wireless sensor networks is associated with numerous challenges. One of the main challenges is the high variability of solar power input and battery capacity, due to factors such as weather, humidity, dust and temperature. In this article, we propose a set of tools that can be implemented onboard high power wireless sensor networks to estimate the battery condition and capacity as well as solar power availability. These parameters are very important to optimize sensing and communications operations and maximize the reliability of the complete system. Experimental results show that the performance of typical Lithium Ion batteries severely degrades outdoors in a matter of weeks or months, and that the availability of solar energy in an urban solar powered wireless sensor network is highly variable, which underlines the need for such power and energy estimation algorithms.
Lunar ~3He estimations and related parameters analyses
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
As a potential nuclear fuel, the isotope 3He is significant both for resolving the impending human energy crisis and for conserving the natural environment. Lunar regolith contains abundant and easily extracted 3He. Based on analyses of the factors affecting 3He abundance, we compare several key assessment parameters and approaches used in estimating the 3He reserves of the lunar regolith, together with some representative estimation results, and discuss the issues involved in 3He abundance variation and reserve estimation. Our studies suggest that, over a depth range of at least several meters, the 3He abundance in lunar regolith is homogeneously distributed and generally does not depend on depth; that the lunar regolith has long been in a saturation state of 3He trapped in minerals through chemical bonds; and that the temperature fluctuation on the lunar surface exerts little influence on the lattice 3He abundance. Based on these conclusions and the newest lunar regolith depth data retrieved from the microwave brightness temperature measurements of the Chang'E-1 Lunar Microwave Sounder, a new 3He reserve estimate is presented.
Acoustical estimation of parameters of porous road pavement
Valyaev, V. Yu.; Shanin, A. V.
2012-11-01
In the simplest case, a porous road pavement of known thickness is described by parameters such as porosity, tortuosity, and flow resistance. The problem of estimating these parameters is investigated in this paper, using an acoustic signal reflected by the pavement. It is shown that the problem can be solved by an experiment conducted in the time domain (i.e., the pulse response of the medium is recorded). The incident sound wave strikes the interface between the pavement and the air at a grazing angle to improve penetration into the porous medium. The procedure for computing the pulse response using the Morse-Ingard model is described in detail.
Estimation of the reconstruction parameters for Atom Probe Tomography
Gault, Baptiste; Stephenson, Leigh T; Moody, Michael P; Muddle, Barry C; Ringer, Simon P
2015-01-01
The application of wide field-of-view detection systems to atom probe experiments emphasizes the importance of careful parameter selection in the tomographic reconstruction of the analysed volume, as the sensitivity to errors rises steeply with increases in analysis dimensions. In this paper, a self-consistent method is presented for the systematic determination of the main reconstruction parameters. In the proposed approach, the compression factor and the field factor are determined using geometrical projections from the desorption images. A 3D Fourier transform is then applied to a series of reconstructions and, comparing to the known material crystallography, the efficiency of the detector is estimated. The final results demonstrate a significant improvement in the accuracy of the reconstructed volumes.
Pedotransfer functions estimating soil hydraulic properties using different soil parameters
DEFF Research Database (Denmark)
Børgesen, Christen Duus; Iversen, Bo Vangsø; Jacobsen, Ole Hørbye;
2008-01-01
Estimates of soil hydraulic properties using pedotransfer functions (PTF) are useful in many studies such as hydrochemical modelling and soil mapping. The objective of this study was to calibrate and test parametric PTFs that predict soil water retention and unsaturated hydraulic conductivity...... parameters. The PTFs are based on neural networks and the Bootstrap method using different sets of predictors and predict the van Genuchten/Mualem parameters. A Danish soil data set (152 horizons) dominated by sandy and sandy loamy soils was used in the development of PTFs to predict the Mualem hydraulic...... of the hydraulic properties of the studied soils. We found that introducing measured water content as a predictor generally gave lower errors for water retention predictions and higher errors for conductivity predictions. The best of the developed PTFs for predicting hydraulic conductivity was tested against PTFs...
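The van Genuchten/Mualem functions whose parameters such PTFs predict can be evaluated as below; the parameter values used are hypothetical, not from the Danish data set:

```python
import math

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at suction h (h >= 0, in the same length units as 1/alpha)."""
    if h <= 0:
        return theta_s                         # saturated
    m = 1 - 1/n
    return theta_r + (theta_s - theta_r) / (1 + (alpha*h)**n)**m

def mualem_K(h, Ks, alpha, n, L=0.5):
    """Unsaturated hydraulic conductivity from the Mualem model."""
    if h <= 0:
        return Ks
    m = 1 - 1/n
    Se = 1/(1 + (alpha*h)**n)**m               # effective saturation
    return Ks * Se**L * (1 - (1 - Se**(1/m))**m)**2

# Illustrative parameters loosely typical of a sandy soil (assumed values):
print(round(van_genuchten_theta(100.0, 0.05, 0.40, 0.03, 2.0), 3))   # ~0.161
```

A PTF in the sense of the abstract maps soil texture and bulk properties to (theta_r, theta_s, alpha, n, Ks); the functions above then give the retention and conductivity curves.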
Error estimation and adaptivity for transport problems with uncertain parameters
Sahni, Onkar; Li, Jason; Oberai, Assad
2016-11-01
Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos based spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).
Synchronization and parameter estimations of an uncertain Rikitake system
Energy Technology Data Exchange (ETDEWEB)
Aguilar-Ibanez, Carlos, E-mail: caguilar@cic.ipn.m [CIC-IPN, Av. Juan de Dios Batiz s/n, Esq. Manuel Othon de M., Unidad Profesional Adolfo Lopez Mateos, Col. Nueva Industrial Vallejo, Del. Gustavo A. Madero, C.P. 07738, Mexico D.F. (Mexico); Martinez-Guerra, Rafael, E-mail: rguerra@ctrl.cinvestav.m [CINVESTAV-IPN, Departamento de Control Automatico, Av. Instituto Politecnico Nacional 2508, Col. San Pedro Zacatenco, Mexico, D. F., 07360 (Mexico); Aguilar-Lopez, Ricardo [CINVESTAV-IPN, Departamento de Biotecnologia y Bioingenieria (Mexico); Mata-Machuca, Juan L. [CINVESTAV-IPN, Departamento de Control Automatico, Av. Instituto Politecnico Nacional 2508, Col. San Pedro Zacatenco, Mexico, D. F., 07360 (Mexico)
2010-08-02
In this Letter we address the synchronization and parameter estimation of the uncertain Rikitake system, under the assumption that the state is only partially known. To this end we use the master/slave scheme in conjunction with the adaptive control technique. Our control approach consists of proposing a slave system which has to follow asymptotically the uncertain Rikitake system, referred to as the master system. The gains of the slave system are adjusted continually according to a convenient adaptive control law, until the measurable output errors converge to zero. The convergence analysis is carried out using Barbalat's lemma. In this context, uncertainty means that although the system structure is known, only partial knowledge of the corresponding parameter values is available.
Singularity of Some Software Reliability Models and Parameter Estimation Method
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
According to the principle, "The failure data is the basis of software reliability analysis", we built a software reliability expert system (SRES) by adopting artificial intelligence techniques. By reasoning from the fitting results on the failure data of a software project, the SRES can recommend to users "the most suitable model" as a software reliability measurement model. We believe that the SRES can effectively overcome the inconsistency encountered in applying software reliability models. We report the results of our investigation into the singularity and parameter estimation methods of the experimental models in the SRES.
Directory of Open Access Journals (Sweden)
Miller, S. M.; Fung, I.; Liu, J.; Hayek, M. N.; Andrews, A. E.
2014-09-01
Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM-LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6 hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias were detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistent low net radiation, low energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. Rather, the extent to which meteorological uncertainties
Estimation of multipath transmission parameters for quantitative ultrasound measurements of bone.
Dencks, Stefanie; Schmitz, Georg
2013-09-01
When applying quantitative ultrasound (QUS) measurements to bone for predicting osteoporotic fracture risk, multipath transmission of sound waves frequently occurs. Over the last 10 years, interest in separating multipath QUS signals for analysis has grown, leading to the introduction of several approaches. Here, we compare the performance of the two fastest algorithms proposed for QUS measurements of bone: the modified least-squares Prony method (MLSP), and the space alternating generalized expectation maximization algorithm (SAGE) applied in the frequency domain. In both approaches, the parameters of the transfer functions of the sound propagation paths are estimated. To provide an objective measure, we also analytically derive the Cramér-Rao lower bound of the variances for any estimator and arbitrary transmit signals. In comparison with results of Monte Carlo simulations, this measure is used to evaluate both approaches regarding their accuracy and precision. Additionally, with simulations using typical QUS measurement settings, we illustrate the limitations of separating two superimposed waves for varying parameters, with a focus on their temporal separation. It is shown that for good SNRs around 100 dB, MLSP yields better results when two waves are very close, and the parameters of the smaller wave are more reliably estimated. If the SNR decreases, the parameter estimation with MLSP becomes biased and inefficient; the robustness to noise of SAGE then clearly prevails. Because a clear influence of the interrelation between the wavelength of the ultrasound signals and their temporal separation is observable in the results, these findings can be transferred to QUS measurements at other sites. The choice of the suitable algorithm thus depends on the measurement conditions.
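The least-squares Prony idea underlying MLSP can be illustrated on a noise-free toy signal with two real decay factors (the actual method estimates complex poles of superimposed waves; this two-mode real case is an assumption for clarity):

```python
import math

# Noise-free test signal: sum of two real damped exponentials,
# x[k] = 2.0 * 0.9^k + 0.5 * 0.6^k
N = 40
x = [2.0*0.9**k + 0.5*0.6**k for k in range(N)]

# Step 1: linear-prediction coefficients by least squares,
# x[k] ~ c1*x[k-1] + c2*x[k-2]  (2x2 normal equations)
s11 = s12 = s22 = b1 = b2 = 0.0
for k in range(2, N):
    u, v, y = x[k-1], x[k-2], x[k]
    s11 += u*u; s12 += u*v; s22 += v*v
    b1 += u*y;  b2 += v*y
det = s11*s22 - s12*s12
c1 = (s22*b1 - s12*b2)/det
c2 = (s11*b2 - s12*b1)/det

# Step 2: the poles are the roots of z^2 - c1*z - c2 = 0
disc = math.sqrt(c1*c1 + 4*c2)
r1, r2 = (c1 + disc)/2, (c1 - disc)/2
print(sorted([r1, r2]))    # recovers the decay factors 0.6 and 0.9
```

With noise, the least-squares step becomes biased at low SNR, which is exactly the regime where the abstract reports SAGE overtaking MLSP.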
Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc
2006-05-01
The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max = 66.54 +/- 7.29 ml min(-1) kg(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during CVT. Vmax estimates were significantly lower than the measured maximal velocity (0.19 ...). Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship (0.83 ...).
Nelson, Jon P
2014-01-01
Precise estimates of price elasticities are important for alcohol tax policy. Using meta-analysis, this paper corrects average beer elasticities for heterogeneity, dependence, and publication selection bias. A sample of 191 estimates is obtained from 114 primary studies. Simple and weighted means are reported. Dependence is addressed by restricting the number of estimates per study, author-restricted samples, and author-specific variables. Publication bias is addressed using the funnel graph, trim-and-fill, and Egger's intercept model. Heterogeneity and selection bias are examined jointly in meta-regressions containing moderator variables for econometric methodology, primary data, and precision of estimates. Results for fixed- and random-effects regressions are reported. Country-specific effects and sample time periods are unimportant, but several methodology variables help explain the dispersion of estimates. In models that correct for selection bias and heterogeneity, the average beer price elasticity is about -0.20, roughly 50% less elastic than values commonly used in alcohol tax policy simulations.
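A rough sketch of two of the tools mentioned, the inverse-variance weighted mean and Egger's intercept regression, on simulated study estimates; the simulated effect, standard-error range, and selection rule are hypothetical:

```python
import random

random.seed(2)

# Simulated "study" effects: true elasticity -0.20, with crude publication
# selection against imprecise, non-significant estimates.
true_eff = -0.20
studies = []
for _ in range(100):
    se = random.uniform(0.02, 0.30)          # standard error of a study
    est = random.gauss(true_eff, se)
    if se > 0.15 and est > -0.10:            # selection rule (hypothetical)
        continue
    studies.append((est, se))

# Fixed-effect (inverse-variance) weighted mean
w = [1/se**2 for _, se in studies]
fe_mean = sum(wi*e for (e, _), wi in zip(studies, w))/sum(w)

# Egger regression: t = est/se regressed on precision 1/se.  A nonzero
# intercept flags small-study/publication bias; the slope is a
# selection-adjusted effect estimate.
ts = [e/se for e, se in studies]
ps = [1/se for _, se in studies]
n = len(studies)
mp, mt = sum(ps)/n, sum(ts)/n
slope = sum((p - mp)*(t - mt) for p, t in zip(ps, ts))/sum((p - mp)**2 for p in ps)
intercept = mt - slope*mp
print(round(fe_mean, 3), round(slope, 3), round(intercept, 3))
```

The Egger slope stays close to the true -0.20 even when selection distorts the simple means, mirroring the paper's corrected estimate.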
Linear Estimation of Location and Scale Parameters Using Partial Maxima
Papadatos, Nickos
2010-01-01
Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions.
Parameter estimation in space systems using recurrent neural networks
Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.
1991-01-01
The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.
Estimation of Secondary Meteorological Parameters Using Mining Data Techniques
Directory of Open Access Journals (Sweden)
Rosabel Zerquera Díaz
2010-10-01
This work develops a Knowledge Discovery in Databases (KDD) process at the Higher Polytechnic Institute José Antonio Echeverría for the Environmental Research group, in collaboration with the Center of Information Management and Energy Development (CUBAENERGÍA), to obtain a data model for estimating the behavior of secondary weather parameters from surface data. It describes aspects of data mining and its application to meteorology, and selects and describes the CRISP-DM methodology and the WEKA data analysis tool. The tasks used were attribute selection and regression; the technique was a neural network of the multilayer perceptron type; and the algorithms were CfsSubsetEval, BestFirst and MultilayerPerceptron. Estimation models are obtained for the secondary meteorological parameters (height of the convective mixed layer, height of the mechanical mixed layer and convective velocity scale) needed for studying the dispersion patterns of pollutants in Cujae's area. The results set a precedent for future research and for the continuation of this work beyond its first stage.
Parameter estimation and hypothesis testing in linear models
Koch, Karl-Rudolf
1999-01-01
The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have been also added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility of errors remains with the author. I also want to express my thanks...
Shoemaker, David M.
Described and listed herein, with sample input and output, is the Fortran IV program that estimates parameters and their standard errors for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier one. (Author/BJG)
Periodic orbits of hybrid systems and parameter estimation via AD.
Energy Technology Data Exchange (ETDEWEB)
Guckenheimer, John. (Cornell University); Phipps, Eric Todd; Casey, Richard (INRIA Sophia-Antipolis)
2004-07-01
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
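The automatic-differentiation ingredient can be illustrated with a minimal forward-mode (dual-number) class propagating a parameter sensitivity through an Euler time integration; the test ODE dx/dt = p*x is an assumption, far simpler than the hybrid differential-algebraic systems the paper treats:

```python
class Dual:
    """Minimal forward-mode AD value: a + b*eps with eps^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a*o.a, self.a*o.b + self.b*o.a)   # product rule
    __rmul__ = __mul__

def integrate(p, x0=1.0, tmax=1.0, steps=1000):
    """Euler integration of dx/dt = p*x; returns x(tmax), a Dual if p is."""
    h = tmax/steps
    x = Dual(x0) if isinstance(p, Dual) else x0
    for _ in range(steps):
        x = x + h*(p*x)
    return x

# Sensitivity of x(1) with respect to the parameter p, at p = 0.5:
res = integrate(Dual(0.5, 1.0))
print(res.a, res.b)   # both ~ e^0.5, since d/dp e^{p t} = t e^{p t} at t = 1
```

Exactly this kind of derivative propagation through the integrator supplies the parameter sensitivities the shooting and optimization steps need, without finite-difference truncation error.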
Bayesian Approach in Estimation of Scale Parameter of Nakagami Distribution
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-08-01
Nakagami distribution is a flexible lifetime distribution that may offer a good fit to some failure data sets. It has applications in the attenuation of wireless signals traversing multiple paths, deriving unit hydrographs in hydrology, medical imaging studies, etc. In this research, we obtain Bayesian estimators of the scale parameter of the Nakagami distribution. For the posterior distribution of this parameter, we consider Uniform, Inverse Exponential and Levy priors. The three loss functions taken up are the Squared Error Loss Function (SELF), the Quadratic Loss Function and the Precautionary Loss Function (PLF). The performance of an estimator is assessed on the basis of its relative posterior risk. Monte Carlo simulations are used to compare the performance of the estimators. It is found that the PLF produces the least posterior risk when the Uniform prior is used, while SELF is the best when the Inverse Exponential and Levy priors are used.
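A sketch of the Bayesian machinery for the Nakagami scale parameter, using a grid posterior under a Uniform prior and the posterior mean (the Bayes estimator under squared-error loss); the shape m, true scale, and sample size are assumed values:

```python
import math, random

random.seed(3)

# Draw Nakagami(m, Omega) samples: if Y ~ Gamma(shape=m, scale=Omega/m),
# then X = sqrt(Y) is Nakagami(m, Omega).
m, omega_true, n = 2.0, 4.0, 400
xs = [math.sqrt(random.gammavariate(m, omega_true/m)) for _ in range(n)]
s2 = sum(x*x for x in xs)

# Grid posterior for the scale Omega under a Uniform prior.
# log-likelihood (up to constants): -n*m*log(Omega) - m*s2/Omega
grid = [0.01*i for i in range(100, 1001)]        # Omega in [1, 10]
logp = [-n*m*math.log(om) - m*s2/om for om in grid]
mx = max(logp)
post = [math.exp(lp - mx) for lp in logp]        # stabilized exponentiation
z = sum(post)
post = [p/z for p in post]

# Bayes estimator under squared-error loss = posterior mean
post_mean = sum(om*p for om, p in zip(grid, post))
print(round(post_mean, 2))   # close to omega_true = 4.0
```

Swapping the Uniform prior or the loss function changes only the weighting of this posterior, which is how the paper compares SELF, QLF and PLF by relative posterior risk.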
On-line estimation of concentration parameters in fermentation processes
Institute of Scientific and Technical Information of China (English)
XIONG Zhi-hua; HUANG Guo-hong; SHAO Hui-he
2005-01-01
It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, pose almost insurmountable problems to engineers. A novel software sensor is proposed to make more effective use of those measurements that are already available, enabling improvements in fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP), with the expectation maximization (EM) algorithm employed for parameter estimation of the mixture of models. The mixture model can alleviate the computational complexity of GP and also accommodate changes of operating conditions in fermentation processes; that is, it can examine what types of process knowledge are most relevant for a local model's specific operating point and then combine the local models into a global one. Demonstrated on the on-line estimation of yeast concentration in industrial fermentation, it is shown that soft-sensor-based state estimation is a powerful technique for both enhancing the automatic control performance of biological systems and implementing on-line monitoring and optimization.
Sheiner, L B; Beal, S L
1980-12-01
Individual pharmacokinetic parameters quantify the pharmacokinetics of an individual, while population pharmacokinetic parameters quantify population mean kinetics, interindividual variability, and residual intraindividual variability plus measurement error. Individual pharmacokinetics are estimated by fitting individual data to a pharmacokinetic model. Population pharmacokinetic parameters are estimated either by fitting all individuals' data together as though there were no individual kinetic differences (the naive pooled data approach), or by fitting each individual's data separately and then combining the individual parameter estimates (the two-stage approach). A third approach, NONMEM, takes a middle course between these and avoids the shortcomings of each of them. A data set consisting of 124 steady-state phenytoin concentration-dosage pairs from 49 patients, obtained in the routine course of their therapy, was analyzed by each method. The resulting population parameter estimates differ considerably (population mean Km, for example, is estimated as 1.57, 5.36, and 4.44 micrograms/ml by the naive pooled data, two-stage, and NONMEM approaches, respectively). Simulations of the data were analyzed to investigate these differences. The simulations indicate that the pooled data approach fails to estimate variabilities and produces imprecise estimates of mean kinetics. The two-stage approach produces good estimates of mean kinetics, but biased and imprecise estimates of interindividual variability. NONMEM produces accurate and precise estimates of all parameters, and also reasonable confidence intervals for them. This performance is exactly what is expected from theoretical considerations and provides empirical support for the use of NONMEM when estimating population pharmacokinetics from routine-type patient data.
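The naive pooled and two-stage approaches can be contrasted on simulated one-compartment data (hypothetical parameter values; note the two-stage SD inflates the true interindividual variability because it also contains per-subject estimation error, matching the bias the abstract reports):

```python
import math, random

random.seed(4)

# One-compartment i.v. model on the log scale: log C(t) = log C0 - k_i*t,
# with interindividual variability in k and residual noise (assumed values).
C0, k_pop, omega, sigma = 10.0, 0.10, 0.02, 0.05
times = [1, 2, 4, 8, 12]

def fit_slope(ts, ys):
    """Ordinary least-squares slope of ys against ts."""
    n = len(ts)
    mt, my = sum(ts)/n, sum(ys)/n
    return sum((t - mt)*(y - my) for t, y in zip(ts, ys)) / \
           sum((t - mt)**2 for t in ts)

all_t, all_y, slopes = [], [], []
for _ in range(40):                        # 40 simulated subjects
    ki = random.gauss(k_pop, omega)
    ys = [math.log(C0) - ki*t + random.gauss(0, sigma) for t in times]
    slopes.append(-fit_slope(times, ys))   # individual k estimate
    all_t += times
    all_y += ys

# Naive pooled: one regression over everyone; no variability estimate at all.
k_pooled = -fit_slope(all_t, all_y)
# Two-stage: mean and SD of the individual estimates.
k_mean = sum(slopes)/len(slopes)
k_sd = math.sqrt(sum((s - k_mean)**2 for s in slopes)/(len(slopes) - 1))
print(round(k_pooled, 3), round(k_mean, 3), round(k_sd, 3))
```

Both approaches recover the mean elimination rate here, but only the two-stage method returns any interindividual SD, and that SD overstates omega; NONMEM-style mixed-effects modeling is designed to separate the two variance sources.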
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Parameter Estimation of Nonlinear Systems by Dynamic Cuckoo Search.
Liao, Qixiang; Zhou, Shudao; Shi, Hanqing; Shi, Weilai
2017-04-01
To address shortcomings of the traditional and improved cuckoo search (CS) algorithms, we propose a dynamic adaptive cuckoo search with crossover operator (DACS-CO) algorithm. Normally, the parameters of the CS algorithm are kept constant or adapted by an empirical equation, which may reduce the efficiency of the algorithm. To solve this problem, a feedback control scheme for the algorithm parameters is adopted in cuckoo search; Rechenberg's 1/5 criterion, combined with a learning strategy, is used to evaluate the evolution process. In addition, the standard cuckoo search algorithm exchanges no information between individuals. To promote search progress and overcome premature convergence, a multiple-point random crossover operator is merged into the CS algorithm to exchange information between individuals and improve the diversification and intensification of the population. The performance of the proposed hybrid algorithm is investigated on different nonlinear systems, with the numerical results demonstrating that the method can estimate parameters accurately and efficiently. Finally, we compare the results with the standard CS algorithm, the orthogonal learning cuckoo search algorithm (OLCS), an adaptive and simulated annealing operation with the cuckoo search algorithm (ACS-SA), a genetic algorithm (GA), a particle swarm optimization algorithm (PSO), and a genetic simulated annealing algorithm (GA-SA). Our simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
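For orientation, the baseline that DACS-CO builds on can be sketched as a standard cuckoo search with Lévy flights (Yang–Deb style, Mantegna's algorithm for the Lévy step). This is a hedged sketch of the plain CS only; the paper's feedback control of parameters and crossover operator are not reproduced, and the test function and constants are assumptions.

```python
import math
import numpy as np

def cuckoo_search(f, dim=2, n=15, iters=300, pa=0.25, seed=1):
    """Baseline cuckoo search with Mantegna Levy flights (not DACS-CO itself)."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5.0, 5.0, (n, dim))
    fit = np.array([f(p) for p in nests])
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        # Levy-flight moves biased toward the current best nest
        u = rng.normal(0.0, sigma, (n, dim))
        v = rng.normal(0.0, 1.0, (n, dim))
        step = u / np.abs(v) ** (1 / beta)
        cand = nests + 0.01 * step * (nests - best)
        cfit = np.array([f(p) for p in cand])
        improved = cfit < fit
        nests[improved], fit[improved] = cand[improved], cfit[improved]
        # abandon a fraction pa of nest components via random difference moves
        mask = rng.random((n, dim)) < pa
        d1, d2 = rng.permutation(n), rng.permutation(n)
        cand = nests + rng.random((n, dim)) * mask * (nests[d1] - nests[d2])
        cfit = np.array([f(p) for p in cand])
        improved = cfit < fit
        nests[improved], fit[improved] = cand[improved], cfit[improved]
    return nests[fit.argmin()], float(fit.min())

# minimize a simple sphere function as a stand-in for a parameter-estimation cost
best_x, best_f = cuckoo_search(lambda p: float(np.sum(p ** 2)))
```

In a parameter-estimation setting the objective `f` would be the squared error between model output and measured data rather than the sphere function used here.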
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples.
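For context, when full quadrat counts are observed the two negative binomial parameters follow from simple moments. The sketch below shows that classical baseline with invented counts; the paper's contribution is recovering the same parameters when only occurrence-plus-detection-time data are available, which this sketch does not implement.

```python
import numpy as np

# Method-of-moments fit of the negative binomial mean and aggregation index
# from full quadrat counts (invented data, for illustration only).
counts = np.array([0, 0, 1, 3, 0, 7, 2, 0, 0, 5, 1, 0, 12, 0, 2, 4])
m = counts.mean()                        # mean parameter estimate
v = counts.var(ddof=1)
k_hat = m ** 2 / (v - m)                 # aggregation index (requires v > m)
```

A small aggregation index (here well below 1) indicates strong clumping relative to the Poisson case.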
Directory of Open Access Journals (Sweden)
Quentin Noirhomme
2014-01-01
Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
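The binomial-versus-permutation comparison can be reproduced in miniature. The sketch below is an illustration under assumed settings (a simple nearest-centroid classifier, 5-fold CV, pure-noise features), not the paper's neuroimaging pipeline: the binomial p-value treats the CV outcomes as independent coin flips, while the permutation p-value re-runs the whole cross-validation under shuffled labels.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def cv_accuracy(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    n = len(y)
    idx = np.arange(n)
    correct = 0
    for f in range(k):
        test = idx[f::k]
        train = np.setdiff1d(idx, test)
        m0 = X[train][y[train] == 0].mean(axis=0)
        m1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - m1, axis=1) <
                np.linalg.norm(X[test] - m0, axis=1)).astype(int)
        correct += int((pred == y[test]).sum())
    return correct / n

X = rng.normal(size=(40, 10))            # pure-noise features: no true signal
y = np.repeat([0, 1], 20)
acc = cv_accuracy(X, y)
n_trials = len(y)
k_succ = int(round(acc * n_trials))

# Binomial test: assumes n independent Bernoulli(0.5) trials -- an assumption
# cross-validation violates, since folds share training data.
p_binom = sum(comb(n_trials, j) for j in range(k_succ, n_trials + 1)) / 2 ** n_trials

# Permutation test: re-run the entire CV procedure under shuffled labels.
null = [cv_accuracy(X, rng.permutation(y)) for _ in range(200)]
p_perm = (1 + sum(a >= acc for a in null)) / (200 + 1)
```

Because the permutation null is built from the same CV procedure, it automatically absorbs the dependence between folds that breaks the binomial assumption.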
Lillehammer, Marie; Odegård, Jørgen; Meuwissen, Theo H E
2009-03-19
The combination of a sire model and a random regression term describing genotype by environment interactions may lead to biased estimates of genetic variance components because of heterogeneous residual variance. To test different models, simulated data with genotype by environment interactions, and dairy cattle data assumed to contain such interactions, were analyzed. Two animal models were compared to four sire models. The models differed in their ability to handle heterogeneous variance from different sources. Including an individual effect with a (co)variance matrix restricted to three times the sire (co)variance matrix permitted modeling of the additive genetic variance not covered by the sire effect. This made the ability of sire models to handle heterogeneous genetic variance approximately equivalent to that of animal models. When residual variance was heterogeneous, a different approach was needed to account for it, for example when using dairy cattle data, in order to prevent overestimation of genetic heterogeneity of variance. Including environmental classes in the model can account for heterogeneous residual variance.
Rothstein, Jesse
2009-01-01
Non-random assignment of students to teachers can bias value added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value added models indicate large counter-factual effects of 5th grade teachers on students' 4th grade learning, indicating that classroom assignments are far from random. This paper quantifies the…
Leeuwenburgh, O.
2008-01-01
The assimilation of high-quality in situ data into ocean models is known to lead to imbalanced analyses and spurious circulations when the model dynamics or the forcing contain systematic errors. Use of a bias estimation and correction scheme has been shown to significantly improve the balance of th
Parameter Estimations for Signal Type Classification of Korean Disordered Voices
Directory of Open Access Journals (Sweden)
JiYeoun Lee
2015-12-01
Although many signal-typing studies have been published, they are primarily based on manual inspection and experts' judgments of voice samples' acoustic content. Software may be required to automatically and objectively classify pathological voices into the four signal types and to facilitate experts' opinion formation by providing specific signal type determination criteria. This paper suggests the coefficient of normalized skewness variation (CSV), coefficient of normalized kurtosis variation (CKV), and bicoherence value (BV) based on the linear predictive coding (LPC) residual to categorize voice signals. Its objective is to improve the performance of acoustic parameters such as jitter, shimmer, and the signal-to-noise ratio (SNR) in signal type classification. In this study, the classification and regression tree (CART) was used to estimate the performance of the acoustic, CSV, CKV, and BV parameters by using the LPC residual. In the investigation of acoustic parameters such as jitter, shimmer, and the SNR, the optimal tree generated by jitter alone yielded an average accuracy of 78.6%. When the acoustic, CSV, CKV, and BV parameters together were used to generate the decision tree, the average accuracy was 82.1%. In this case, the optimal tree formed by jitter and the BV effectively discriminated between the signal types. To perform accurate acoustic pathological voice analysis, signal type quantification is of great interest. Automatic pathological voice classification can be an important objective tool as the signal type can be numerically measured. Future investigations will incorporate multiple pathological data in classification methods to improve their performance and implement more reliable detectors.
Estimation of the Alpha Factor Parameters Using the ICDE Database
Energy Technology Data Exchange (ETDEWEB)
Kang, Dae Il; Hwang, M. J.; Han, S. H
2007-04-15
Detailed common cause failure (CCF) analysis generally needs data on CCF events from other nuclear power plants because CCF events rarely occur. KAERI has participated in the international common cause failure data exchange (ICDE) project to obtain data on CCF events. The operation office of the ICDE project sent the CCF event data for EDGs to KAERI in December 2006. As a pilot study, we performed a detailed CCF analysis of EDGs for Yonggwang Units 3 and 4 and Ulchin Units 3 and 4 using the ICDE database. There are two onsite EDGs for each NPP. When offsite power and the two onsite EDGs are not available, one alternate AC (AAC) diesel generator (hereafter AAC) is provided. The two onsite EDGs and the AAC are manufactured by the same company, but they are designed differently. We estimated the alpha factors and the CCF probabilities for the cases where the three EDGs were assumed to be identically designed, and for the cases where they were not. For the cases where the three EDGs were assumed to be identically designed, the double CCF probabilities of Yonggwang Units 3/4 and Ulchin Units 3/4 for 'fails to start' were estimated as 2.20E-4 and 2.10E-4, respectively. The triple CCF probabilities were estimated as 2.39E-4 and 2.42E-4, respectively. As neither NPP has experience of 'fails to run', Yonggwang Units 3/4 and Ulchin Units 3/4 have the same CCF probability. The estimated double and triple CCF probabilities for 'fails to run' are 4.21E-4 and 4.61E-4, respectively. Quantification results show that the system unavailability for the cases where the three EDGs are identical is higher than that where the three EDGs are different; the estimated system unavailability of the former was 3.4% higher than that of the latter. As a future study, a computerization work for the estimation of the CCF parameters will be performed.
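The alpha-factor point estimates themselves are simple event-count ratios. The sketch below uses invented counts, not the ICDE data, and shows only the maximum-likelihood point estimate; PSA practice typically uses a Bayesian estimate with a Dirichlet prior instead.

```python
# n[k] = number of observed failure events involving exactly k of the m = 3
# redundant EDGs (illustrative counts only, not the ICDE data)
n = {1: 42, 2: 3, 3: 1}
total = sum(n.values())

# maximum-likelihood alpha factors: fraction of events of each multiplicity
alpha = {k: nk / total for k, nk in n.items()}
```

By construction the alpha factors sum to one; the CCF probabilities quoted in the abstract are then obtained by combining these factors with the total failure probability of a single EDG.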
Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation
Jardak, Seifallah
2014-04-01
Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and enabling more complex beampattern designs. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. The second part of this thesis covers the topic of target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance. However, it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity, optimum-performance algorithm which allows the two-dimensional fast Fourier transform to jointly estimate the spatial location
Calbet, Xavier
2012-01-01
The availability of hyperspectral infrared remote sensing instruments, like AIRS and IASI, on board Earth-observing satellites opens the possibility of obtaining high vertical resolution atmospheric profiles. We present an objective and simple technique to derive the parameters used in the optimal estimation method that retrieves atmospheric states from the spectra. The retrievals obtained in this way are optimal in the sense of providing the best possible validation statistics obtained from the difference between retrievals and a chosen calibration/validation dataset of atmospheric states. This is demonstrated analytically. To illustrate this result, several real-world examples using IASI retrievals fine-tuned to ECMWF analyses are shown. The analytical equations obtained give further insight into the various contributions to the biases and errors of the retrievals and the consequences of using other types of fine tuning. Retrievals using IASI show an error of 0.9 to 1.9 K in temperature and below 6.5 K in ...
GENETIC AND NON-GENETIC PARAMETER ESTIMATES OF DAIRY CATTLE IN ETHIOPIA: A REVIEW
Directory of Open Access Journals (Sweden)
A. TESFA
2014-07-01
Ethiopia is endowed with diverse ecosystems inhabited by an abundant diversity of animal, plant and microbial genetic resources due to the availability of diverse agro-ecology. The productivity of any species depends largely on its reproductive performance. Reproductive performance does not usually refer to a single trait but to a combination of many traits, and it is an indicator of reproductive efficiency and of the rate of genetic progress in both selection and crossbreeding programs. The main indicators of reproductive performance reported by many authors are age at first service, age at first calving, calving interval, days open and number of services per conception. Non-genetic factors such as sex of calf, season, year, and parity have significant effects on reproductive performance traits. Knowledge of these factors and their influence on cattle performance is important in management and selection decisions. Development of breeding objectives and effective genetic improvement programs requires knowledge of the genetic variation among economically important traits and accurate estimates of heritability, repeatability and genetic correlations of these traits. The estimates of genetic parameters are helpful in determining the method of selection, predicting direct and correlated responses to selection, and choosing a breeding system to be adopted for future improvement as well as genetic gains. The reproductive performance of Ethiopian indigenous and exotic breeds producing in the country is low due to various environmental factors and the absence of integrated records on the sector, which leads to biased results and recommendations of the genetic parameter estimates. Selection and the design of breeding programs for improving the production and productivity of indigenous breeds, while keeping their native potentials, should be based on the results obtained from
Directory of Open Access Journals (Sweden)
Shifei Yuan
2015-07-01
Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for battery model parameters are conducted under sampling frequencies of 1–50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three factors on the model parameters and SoC estimation is evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms are adopted to estimate the model parameters and SoC jointly. Finally, the simulation results are compared and some insightful findings are drawn. For the given battery model and parameter estimation algorithm, the sampling period and the current/voltage sampling accuracy have a non-negligible effect on the estimation results of the model parameters. This research reveals the influence of measurement uncertainty on model parameter estimation, providing guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
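The recursive-least-squares backbone of such online parameter estimators can be sketched briefly. This is plain RLS with a forgetting factor on an invented linear regression; the paper's CRLS adds a bias-correction term for noisy regressors, and the true coefficients below are hypothetical stand-ins for discretized equivalent-circuit-model parameters, not measured battery values.

```python
import numpy as np

def rls(Phi, y, lam=0.99, delta=100.0):
    """Plain recursive least squares with forgetting factor lam
    (no bias correction, unlike the paper's CRLS)."""
    dim = Phi.shape[1]
    theta = np.zeros(dim)
    P = delta * np.eye(dim)              # large initial covariance
    for p, yk in zip(Phi, y):
        K = P @ p / (lam + p @ P @ p)    # gain vector
        theta = theta + K * (yk - p @ theta)
        P = (P - np.outer(K, p @ P)) / lam
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([0.8, 0.05])       # invented model coefficients
Phi = rng.normal(size=(500, 2))          # regressors (e.g. past voltage/current)
y = Phi @ true_theta + rng.normal(0.0, 0.01, 500)
theta_hat = rls(Phi, y)
```

The forgetting factor trades tracking speed against noise sensitivity, which is one reason the sampling period and sensor precision matter for the estimates.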
Walter, S D; Han, H; Briel, M; Guyatt, G H
2017-04-30
In this paper, we consider the potential bias in the estimated treatment effect obtained from clinical trials, the protocols of which include the possibility of interim analyses and an early termination of the study for reasons of futility. In particular, by considering the conditional power at an interim analysis, we derive analytic expressions for various parameters of interest: (i) the underestimation or overestimation of the treatment effect in studies that stop for futility; (ii) the impact of the interim analyses on the estimation of treatment effect in studies that are completed, i.e. that do not stop for futility; (iii) the overall estimation bias in the estimated treatment effect in a single study with such a stopping rule; and (iv) the probability of stopping at an interim analysis. We evaluate these general expressions numerically for typical trial scenarios. Results show that the parameters of interest depend on a number of factors, including the true underlying treatment effect, the difference that the trial is designed to detect, the study power, the number of planned interim analyses and what assumption is made about future data to be observed after an interim analysis. Because the probability of stopping early is small for many practical situations, the overall bias is often small, but a more serious issue is the potential for substantial underestimation of the treatment effect in studies that actually stop for futility. We also consider these ideas using data from an illustrative trial that did stop for futility at an interim analysis. Copyright © 2017 John Wiley & Sons, Ltd.
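Both effects, underestimation in stopped trials and mild overestimation in completed ones, can be reproduced with a toy simulation. The stopping rule below (stop if the interim estimate is non-positive) is an invention for illustration, not the paper's conditional-power rule, and all sample sizes are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
delta, n_half, n_total, reps = 0.2, 50, 100, 20000   # true effect, sizes (invented)
est_stopped, est_completed = [], []
for _ in range(reps):
    x = rng.normal(delta, 1.0, n_total)
    interim = x[:n_half].mean()
    if interim < 0.0:                    # toy futility rule: stop if effect <= 0
        est_stopped.append(interim)      # stopped trials report the interim estimate
    else:
        est_completed.append(x.mean())   # completed trials use all the data

bias_stopped = np.mean(est_stopped) - delta     # strong underestimation
bias_completed = np.mean(est_completed) - delta # mild overestimation (selection)
```

Because only trials with unfavorable interim data stop, the stopped-trial estimates are severely biased downward, while the surviving trials are a selected, slightly optimistic subset, matching the qualitative pattern the paper derives analytically.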
Bias-correction in vector autoregressive models
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard
2014-01-01
We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find...
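The flavor of analytical bias correction can be seen in the univariate AR(1) special case, where the classical first-order approximation E[φ̂] ≈ φ − (1+3φ)/T (Kendall's result; the paper works with the multivariate generalization) suggests the correction φ̂ + (1+3φ̂)/T. The simulation below is an illustration under assumed settings, not the paper's study design.

```python
import numpy as np

rng = np.random.default_rng(0)
phi_true, T, reps = 0.9, 50, 5000        # assumed persistence and sample size
ols_est, corr_est = [], []
for _ in range(reps):
    y = np.zeros(T)
    for t in range(1, T):                # AR(1): the one-dimensional VAR
        y[t] = phi_true * y[t - 1] + rng.normal()
    x, z = y[:-1] - y[:-1].mean(), y[1:] - y[1:].mean()
    b = (x @ z) / (x @ x)                # OLS slope estimate
    ols_est.append(b)
    corr_est.append(b + (1 + 3 * b) / T) # add back the first-order bias term

bias_ols = np.mean(ols_est) - phi_true   # clearly negative in small samples
bias_corr = np.mean(corr_est) - phi_true # much smaller in magnitude
```

Note the correction can push a near-unit-root estimate above one, which is exactly the stationarity risk the abstract highlights.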
Dynamic systems models new methods of parameter and state estimation
2016-01-01
This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...
Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures
DEFF Research Database (Denmark)
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2013-01-01
of nominally identical test subjects. However, the literature on modal testing of timber structures is rather limited, and the applicability and robustness of different curve fitting methods for modal analysis of such structures is not described in detail. The aim of this paper is to investigate the robustness of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex. The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method...
Optimal segmentation of pupillometric images for estimating pupil shape parameters.
De Santis, A; Iacoviello, D
2006-12-01
The problem of determining pupil morphological parameters from pupillometric data is considered. These characteristics are of great interest for non-invasive early diagnosis of the central nervous system response to environmental stimuli of different nature, in subjects suffering from typical diseases such as diabetes, Alzheimer's disease, schizophrenia, and drug and alcohol addiction. Pupil geometrical features, such as diameter, area, and centroid coordinates, are estimated by a procedure based on an image segmentation algorithm. It exploits the level set formulation of the variational problem related to the segmentation. A discrete set-up of this problem that admits a unique optimal solution is proposed: an arbitrary initial curve is evolved towards the optimal segmentation boundary by a difference equation; therefore, no numerical approximation schemes are needed, as required in the equivalent continuum formulation usually adopted in the relevant literature.
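Once a segmentation (level-set or otherwise) yields a binary pupil mask, the morphological parameters follow directly from pixel statistics. The sketch below uses a synthetic circular mask as a stand-in for the segmented pupil; all image sizes and radii are invented, and the level-set evolution itself is not reproduced.

```python
import numpy as np

# Synthetic binary mask standing in for the output of the segmentation step.
h, w, r0, cx0, cy0 = 120, 160, 25, 80, 60
yy, xx = np.mgrid[0:h, 0:w]
mask = (xx - cx0) ** 2 + (yy - cy0) ** 2 <= r0 ** 2

area = int(mask.sum())                       # pupil area in pixels
cy = float(yy[mask].mean())                  # centroid row
cx = float(xx[mask].mean())                  # centroid column
diameter = 2.0 * np.sqrt(area / np.pi)       # equivalent-circle diameter
```

In practice the pixel quantities would be converted to physical units via the camera calibration before any clinical interpretation.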
Cosmological Parameter Estimation with Large Scale Structure Observations
Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information, is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.
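The Fisher-matrix machinery behind such forecasts is compact enough to sketch. The toy below uses a two-parameter linear model with Gaussian noise as a stand-in for the $C_\ell(z_1,z_2)$ observables; the model, noise level, and grid are assumptions for illustration only.

```python
import numpy as np

# Toy Fisher forecast for d_i = a*x_i + b + noise with known sigma:
# F_ij = sum_k (dm/dtheta_i)(dm/dtheta_j) / sigma^2, and the marginalized
# 1-sigma error on theta_i is sqrt((F^{-1})_ii).
x = np.linspace(0.0, 1.0, 100)
sigma = 0.1
dm_da = x                                # analytic derivative w.r.t. a
dm_db = np.ones_like(x)                  # analytic derivative w.r.t. b
F = np.array([[dm_da @ dm_da, dm_da @ dm_db],
              [dm_db @ dm_da, dm_db @ dm_db]]) / sigma ** 2
cov = np.linalg.inv(F)                   # forecast parameter covariance
err_a, err_b = np.sqrt(np.diag(cov))
```

In a survey forecast the derivatives are taken with respect to cosmological parameters and the noise includes shot noise, which is what limits the spectroscopic-bin analysis described above.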
MANOVA, LDA, and FA criteria in clusters parameter estimation
Directory of Open Access Journals (Sweden)
Stan Lipovetsky
2015-12-01
Multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) apply such well-known criteria as Wilks' lambda, the Lawley–Hotelling trace, and Pillai's trace to check the quality of the solutions. The current paper suggests using these criteria to build objectives for finding cluster parameters, because optimizing such objectives corresponds to the best separation between the clusters. The relation to Joreskog's classification for factor analysis (FA) techniques is also considered. The problem can be reduced to a multinomial parameterization, and the solution can be found by a nonlinear optimization procedure which yields estimates for the cluster centers and sizes. This approach to clustering works with data compressed into a covariance matrix, so it can be especially useful for big data.
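Of the criteria named, Wilks' lambda is the easiest to state concretely: it is the ratio of the within-group scatter determinant to the total scatter determinant, so small values mean well-separated groups. The sketch below evaluates it for given labels on invented data; the paper's contribution, optimizing such criteria over cluster parameters, is not reproduced here.

```python
import numpy as np

def wilks_lambda(X, labels):
    """Wilks' lambda = det(W) / det(W + B): within-group over total scatter.
    Values near 0 indicate well-separated clusters; near 1, no separation."""
    X = np.asarray(X, float)
    grand = X.mean(axis=0)
    p = X.shape[1]
    W = np.zeros((p, p))                 # within-group scatter
    B = np.zeros((p, p))                 # between-group scatter
    for g in np.unique(labels):
        Xg = X[labels == g]
        C = Xg - Xg.mean(axis=0)
        W += C.T @ C
        d = Xg.mean(axis=0) - grand
        B += len(Xg) * np.outer(d, d)
    return float(np.linalg.det(W) / np.linalg.det(W + B))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),
               rng.normal(3.0, 1.0, (50, 3))])   # two well-separated groups
labels = np.repeat([0, 1], 50)
lam = wilks_lambda(X, labels)
```

Minimizing this quantity over candidate cluster assignments and centers is the kind of objective the abstract proposes.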
DriftLess™, an innovative method to estimate and compensate for the biases of inertial sensors
Ruizenaar, M.G.H.; Kemp, R.A.W.
2014-01-01
In this paper a method is presented that allows for bias compensation of low-cost MEMS inertial sensors. It is based on the use of two sets of inertial sensors and a rotation mechanism that physically rotates the sensors in an alternating fashion. After signal processing, the biases of both sets of
Parameter Estimation in Ultrasonic Measurements on Trabecular Bone
Marutyan, Karen R.; Anderson, Christian C.; Wear, Keith A.; Holland, Mark R.; Miller, James G.; Bretthorst, G. Larry
2007-11-01
Ultrasonic tissue characterization has shown promise for clinical diagnosis of diseased bone (e.g., osteoporosis) by establishing correlations between bone ultrasonic characteristics and the state of disease. Porous (trabecular) bone supports propagation of two compressional modes, a fast wave and a slow wave, each of which is characterized by an approximately linear-with-frequency attenuation coefficient and a phase velocity that increases monotonically with frequency. Only a single wave, however, is generally apparent in the received signals. The ultrasonic parameters that govern propagation of this single wave appear to be causally inconsistent [1]. Specifically, the attenuation coefficient rises approximately linearly with frequency, but the phase velocity exhibits a decrease with frequency. These inconsistent results are obtained when the data are analyzed under the assumption that the received signal is composed of one wave. The inconsistency disappears if the data are analyzed under the assumption that the signal is composed of superposed fast and slow waves. In the current investigation, Bayesian probability theory is applied to estimate the ultrasonic characteristics underlying the propagation of the fast and slow waves from computer simulations. Our motivation is the assumption that identifying the intrinsic material properties of bone will provide more reliable estimates of bone quality and fracture risk than the apparent properties derived by analyzing the data using a one-mode model.
Energy Technology Data Exchange (ETDEWEB)
Egbert, G.D.
1991-12-31
Fully efficient robust data processing procedures were developed and tested for single station and remote reference magnetotelluric (MT) data. Substantial progress was made on the development, testing and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combines the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1--0.0 hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters, and for testing hypotheses about the underlying dimensionality of the geological section.
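The regression M-estimate at the core of such schemes can be sketched as iteratively reweighted least squares with Huber weights. This is a textbook M-estimator on invented data, not the report's MT transfer-function pipeline, and the coherence presorting step is not reproduced.

```python
import numpy as np

def huber_regression(X, y, c=1.345, iters=50):
    """Regression M-estimate via iteratively reweighted least squares
    with Huber weights and a robust MAD scale estimate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from least squares
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        w = np.minimum(1.0, c * scale / (np.abs(r) + 1e-12))  # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted normal equations
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.1, 200)
y[:20] += 10.0 * rng.normal(size=20)     # bursts of non-Gaussian noise
beta_hat = huber_regression(X, y)
```

Downweighting large residuals protects the fit against output noise bursts; the report's point is that input (magnetic-field) noise needs the additional presorting step, which residual weighting alone cannot fix.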
Analysis of Wave Directional Spreading by Bayesian Parameter Estimation
Institute of Scientific and Technical Information of China (English)
钱桦; 莊士贤; 高家俊
2002-01-01
A spatial array of wave gauges installed on an observation platform has been designed and arranged to measure the local features of winter monsoon directional waves off the Taishi coast of Taiwan. A new method, named the Bayesian Parameter Estimation Method (BPEM), is developed and adopted to determine the main direction and the directional spreading parameter of directional spectra. The BPEM can be considered a regression analysis to find the maximum joint probability of parameters which best approximates the observed data from the Bayesian viewpoint. The result of the analysis of field wave data demonstrates the high dependency of the characteristics of normalized directional spreading on the wave age. The Mitsuyasu-type empirical formula of the directional spectrum is therefore modified to be representative of the monsoon wave field. Moreover, it is suggested that Smax can be expressed as a function of wave steepness; the values of Smax decrease with increasing steepness. Finally, a local directional spreading model, which is simple to utilize in engineering practice, is proposed.
Estimation of genetic parameters for reproductive traits in Shall sheep.
Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein
2013-06-01
The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on reproductive performances of 395 Shall ewes from 41 sires and 136 dams which were collected from 2001 to 2007 in Shall breeding station in Qazvin province at the Northwest of Iran. Studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Test of significance to include fixed effects in the statistical model was performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P < 0.05). Heritability was estimated for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, and the corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. Results showed that the highest heritability estimates were for LMWLB and LMWLW suggesting that direct selection based on these traits could be effective. Also, strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep.
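The heritability and repeatability figures quoted in such studies come from variance components of a repeatability animal model. A minimal sketch of the two ratios, using hypothetical component values (not the study's estimates):

```python
def h2_and_repeatability(v_a, v_pe, v_e):
    """Heritability and repeatability from variance components:
    additive genetic (v_a), permanent environmental (v_pe),
    and residual (v_e) variances."""
    v_p = v_a + v_pe + v_e            # phenotypic variance
    h2 = v_a / v_p                    # heritability
    rep = (v_a + v_pe) / v_p          # repeatability (>= h2 by construction)
    return h2, rep

# hypothetical variance components for illustration only
h2, rep = h2_and_repeatability(v_a=0.20, v_pe=0.53, v_e=0.27)
```

Because repeatability adds the permanent environmental component to the numerator, it is always an upper bound on heritability for the same trait.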
Zhang, Yonggen; Schaap, Marcel G.
2017-04-01
Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in place of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001, denoted as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unify the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models compared to 60 or 100 in Rosetta1, thus allowing the uni-variate and bi-variate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly while root mean square error (RMSE) for water content was reduced modestly; RMSE for Ks was increased by 0.9% (H3w) to 3.3% (H5w) in the new models on log scale of Ks compared with the Rosetta1 model. It was found that estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like 'shift' parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it
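The VG retention curve whose parameters Rosetta predicts can be evaluated directly. The functional form is van Genuchten (1980) with the usual Mualem constraint m = 1 − 1/n; the parameter values below are illustrative placeholders, not Rosetta output.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention curve: volumetric water
    content as a function of matric head h (units of 1/alpha)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# hypothetical parameters for a loamy soil (alpha in 1/cm, h in cm)
theta_sat = vg_theta(0.0,   0.05, 0.40, alpha=0.02, n=1.5)  # at saturation
theta_dry = vg_theta(1.5e4, 0.05, 0.40, alpha=0.02, n=1.5)  # near wilting point
```

At h = 0 the curve returns the saturated content θs exactly, and it approaches the residual content θr as the suction grows, which is what the retention-data fits in the calibration constrain.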
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Energy Technology Data Exchange (ETDEWEB)
Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL; Frerichs, Joshua T [ORNL; Jagadamma, Sindhu [ORNL
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normal distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
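The kinetic core described here can be sketched as a Michaelis-Menten rate scaled by a pH response. The exponential-quadratic pH factor below is one plausible reading of the form fitted in the study and should be treated as an assumption; with it, the rate halves at |pH − pHopt| = pHsen·√(ln 2).

```python
import math

def enzyme_rate(S, vmax, km, pH, pH_opt, pH_sen):
    """Michaelis-Menten rate v = vmax*S/(km+S), scaled by an
    assumed exponential-quadratic pH response centered at pH_opt."""
    f_ph = math.exp(-((pH - pH_opt) / pH_sen) ** 2)
    return vmax * S / (km + S) * f_ph

# at S = Km and pH = pH_opt the rate is exactly vmax/2
v_half = enzyme_rate(S=2.0, vmax=10.0, km=2.0, pH=5.0, pH_opt=5.0, pH_sen=1.6)
# moving sqrt(ln 2)*pH_sen away from pH_opt halves the rate again
v_quarter = enzyme_rate(S=2.0, vmax=10.0, km=2.0,
                        pH=5.0 + 1.6 * math.sqrt(math.log(2.0)),
                        pH_opt=5.0, pH_sen=1.6)
```

The two checks mirror the paper's statement that a shift of roughly pHsen·√(ln 2) pH units from the optimum cuts Vmax by 50%.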
Institute of Scientific and Technical Information of China (English)
姜春阳; 邱彤; 赵劲松; 陈丙珍
2009-01-01
The detection and identification of gross errors, especially measurement bias, plays a vital role in data reconciliation for nonlinear dynamic systems. Although the parameter estimation method has been proved to be a powerful tool for bias identification, without a reliable and efficient bias detection strategy the method is limited in efficiency and cannot be applied widely. In this paper, a new bias detection strategy is constructed to detect the presence of measurement bias and its occurrence time. With the help of this strategy, the number of parameters to be estimated is greatly reduced, and sequential detections and iterations are also avoided. In addition, the number of decision variables of the optimization model is reduced, through which the influence of the parameters estimated is reduced. By incorporating the strategy into the parameter estimation model, a new methodology named IPEBD (Improved Parameter Estimation method with Bias Detection strategy) is constructed. Simulation studies on a continuous stirred tank reactor (CSTR) and the Tennessee Eastman (TE) problem show that IPEBD is efficient for eliminating random errors, measurement biases and outliers contained in dynamic process data.
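The idea of locating a bias occurrence time can be illustrated with a generic mean-shift scan over measurement residuals. This is a toy stand-in only: the split-scan test, threshold, and synthetic residuals are assumptions, while the paper's strategy is embedded in a dynamic data-reconciliation model.

```python
import numpy as np

def bias_onset(residuals, z_thresh=4.0):
    """Locate the most likely onset time of a step bias in residuals by
    scanning split points and testing the difference of segment means."""
    r = np.asarray(residuals, float)
    best_t, best_z = None, 0.0
    for t in range(5, len(r) - 5):          # keep both segments non-trivial
        a, b = r[:t], r[t:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        z = abs(b.mean() - a.mean()) / se
        if z > best_z:
            best_t, best_z = t, z
    return best_t if best_z > z_thresh else None

rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, 200)
r[120:] += 3.0                              # a step bias appears at sample 120
t_hat = bias_onset(r)
```

Once the onset time is pinned down, only the bias magnitudes after that time need to enter the estimation model, which is the sense in which such a strategy shrinks the parameter and decision-variable count.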
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classical normal likelihood, r ~ N(0, σ²); and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved to be adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
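The first residual model can be illustrated with a bare-bones random-walk Metropolis sampler for a single location parameter under r ~ N(0, σ²) and a flat prior. This is a minimal stand-in, not DREAM: DREAM runs multiple adaptive chains over the full hydrologic parameter vector, and all values here are synthetic assumptions.

```python
import numpy as np

def metropolis_mean(y, sigma, n_iter=20000, step=0.5, seed=0):
    """Random-walk Metropolis sampling of the posterior of a location
    parameter theta with Gaussian residuals and a flat prior."""
    rng = np.random.default_rng(seed)

    def log_like(theta):
        return -0.5 * np.sum((y - theta) ** 2) / sigma ** 2

    theta, ll = y.mean(), log_like(y.mean())
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_p = log_like(prop)
        if np.log(rng.uniform()) < ll_p - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_p
        draws.append(theta)
    return np.array(draws[n_iter // 2:])        # discard burn-in

rng = np.random.default_rng(42)
y = 3.0 + 0.8 * rng.normal(size=50)             # synthetic "flow" resid	al data
post = metropolis_mean(y, sigma=0.8)            # posterior centered near y.mean()
```

Under this likelihood the posterior is N(ȳ, σ²/n), so the chain's mean should land on the sample mean; the same machinery, with a richer likelihood such as the Skew Exponential Power, is what the generalized approach uses.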
Quantiles, parametric-select density estimation, and bi-information parameter estimators
Parzen, E.
1982-01-01
A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.
Simon, Patrick
2016-01-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We employ general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. For GLAM, the ellipticity is identical to that of isophotes of elliptical images, and this ellipticity is always an unbiased estimator of reduced shear. Our theoretical framework reiterates that moment-based techniques are similar to a model-based approach in the sense that they fit an elliptical profile to the image to obtain weighted moments. As a result, moment-based estimates of ellipticities are prone to underfitting bias. The estimation is fundamentally limited mainly by pixellation which destroys information on the original, pre-seeing image. We give an optimized estimator for the pre-seeing GLAM ellipticity and its bias for noise-free images. To deal with images where pixel noise is prominent, we conside...
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
Parameter estimation method of Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae of resolving the parameter WNLS estimation (WNLSE) are derived, and the empirical weight function and heteroscedasticity problem are discussed. The effects of optimization parameter estimation selection based on maximum likelihood estimation (MLE) method, least squares estimation (LSE) method and weighted nonlinear least squares estimation (WNLSE) method are al...
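A weighted nonlinear least squares fit of the JM model can be sketched by grid search. The mean inter-failure time E[x_i] = 1/(φ(N−i+1)) is the standard JM form; the inverse-squared-mean weight is one simple choice of empirical weight function (the exponential inverse-variance weight) and is an assumption, as is the noiseless synthetic data.

```python
import numpy as np

def jm_mean_intervals(N, phi, n_obs):
    """Expected inter-failure times of the Jelinski-Moranda model:
    E[x_i] = 1 / (phi * (N - i + 1)), i = 1..n_obs."""
    i = np.arange(1, n_obs + 1)
    return 1.0 / (phi * (N - i + 1))

def jm_wnls_fit(x, N_grid, phi_grid):
    """Grid-search WNLS for (N, phi), weighting each interval by the
    inverse square of its model mean (exponential inverse-variance)."""
    n = len(x)
    best = (None, None, np.inf)
    for N in N_grid:
        if N < n:
            continue                      # need at least n remaining faults
        for phi in phi_grid:
            m = jm_mean_intervals(N, phi, n)
            sse = np.sum((x - m) ** 2 / m ** 2)
            if sse < best[2]:
                best = (N, phi, sse)
    return best[0], best[1]

x = jm_mean_intervals(N=50, phi=0.01, n_obs=30)       # noiseless synthetic data
N_hat, phi_hat = jm_wnls_fit(x, range(30, 81), np.linspace(0.005, 0.02, 151))
```

With noise-free data the fit recovers the generating parameters exactly, which makes the sketch a useful sanity check before applying any of the MLE/LSE/WNLSE variants compared in the paper to real failure data.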
Kanungo, D. P.; Sharma, Shaifaly; Pain, Anindya
2014-09-01
The shear strength parameters of soil (cohesion and angle of internal friction) are quite essential in solving many civil engineering problems. In order to determine these parameters, laboratory tests are used. The main objective of this work is to evaluate the potential of Artificial Neural Network (ANN) and Regression Tree (CART) techniques for the indirect estimation of these parameters. Four different models, considering different combinations of 6 inputs, such as gravel %, sand %, silt %, clay %, dry density, and plasticity index, were investigated to evaluate the degree of their effects on the prediction of shear parameters. A performance evaluation was carried out using Correlation Coefficient and Root Mean Squared Error measures. It was observed that for the prediction of friction angle, the performance of both the techniques is about the same. However, for the prediction of cohesion, the ANN technique performs better than the CART technique. It was further observed that the model considering all of the 6 input soil parameters is the most appropriate model for the prediction of shear parameters. Also, connection weight and bias analyses of the best neural network (i.e., 6/2/2) were attempted using Connection Weight, Garson, and proposed Weight-bias approaches to characterize the influence of input variables on shear strength parameters. It was observed that the Connection Weight Approach provides the best overall methodology for accurately quantifying variable importance, and should be favored over the other approaches examined in this study.
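Garson's algorithm, one of the weight-based importance measures compared above, can be sketched for a one-hidden-layer network. The toy weight matrices are hypothetical; the usual simplification of ignoring bias terms is adopted.

```python
import numpy as np

def garson_importance(w_ih, w_ho):
    """Garson's algorithm: relative importance of each input to a
    one-hidden-layer network, from absolute input-hidden weights
    (w_ih, shape inputs x hidden) and hidden-output weights (w_ho,
    shape hidden). Returns importances summing to 1."""
    a = np.abs(w_ih)
    c = a / a.sum(axis=0, keepdims=True)   # each input's share per hidden node
    r = c * np.abs(w_ho)                   # scaled by hidden-output weight
    imp = r.sum(axis=1)
    return imp / imp.sum()

# toy 3-input, 2-hidden-node network (hypothetical weights)
w_ih = np.array([[2.0, 0.5],
                 [0.5, 0.5],
                 [0.1, 0.2]])
w_ho = np.array([1.0, 0.4])
imp = garson_importance(w_ih, w_ho)        # input 0 dominates
```

Because Garson uses absolute values only, it discards the sign of each connection; the Connection Weight approach favored in the study keeps the signed products, which is why the two can rank inputs differently.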
Genetic parameter estimation of reproductive traits of Litopenaeus vannamei
Tan, Jian; Kong, Jie; Cao, Baoxiang; Luo, Kun; Liu, Ning; Meng, Xianhong; Xu, Shengyu; Guo, Zhaojia; Chen, Guoliang; Luan, Sheng
2017-02-01
In this study, the heritability, repeatability, phenotypic correlation, and genetic correlation of the reproductive and growth traits of L. vannamei were investigated and estimated. Eight traits of 385 shrimps from forty-two families, including the number of eggs (EN), number of nauplii (NN), egg diameter (ED), spawning frequency (SF), spawning success (SS), female body weight (BW) and body length (BL) at insemination, and condition factor (K), were measured. A total of 519 spawning records including multiple spawning and 91 no-spawning records were collected. The genetic parameters were estimated using an animal model, a multinomial logit model (for SF), and a sire-dam and probit model (for SS). Because there were repeated records, permanent environmental effects were included in the models. The heritability estimates for BW, BL, EN, NN, ED, SF, SS, and K were 0.49 ± 0.14, 0.51 ± 0.14, 0.12 ± 0.08, 0, 0.01 ± 0.04, 0.06 ± 0.06, 0.18 ± 0.07, and 0.10 ± 0.06, respectively. The genetic correlation was 0.99 ± 0.01 between BW and BL, 0.90 ± 0.19 between BW and EN, 0.22 ± 0.97 between BW and ED, -0.77 ± 1.14 between EN and ED, and -0.27 ± 0.36 between BW and K. The heritability of EN estimated without a covariate was 0.12 ± 0.08, and the genetic correlation was 0.90 ± 0.19 between BW and EN, indicating that improving BW may be used in selection programs to genetically improve the reproductive output of L. vannamei during breeding. For EN, the data were also analyzed using body weight as a covariate (EN-2). The heritability of EN-2 was 0.03 ± 0.05, indicating that it is difficult to improve the reproductive output by genetic improvement. Furthermore, excessive pursuit of this selection is often at the expense of growth speed. Therefore, the selection of high-performance spawners using BW and SS may be an important strategy to improve nauplii production.
[Base-rate estimates for negative response bias in a workers' compensation claim sample].
Merten, T; Krahl, G; Krahl, C; Freytag, H W
2010-09-01
Against the background of a growing interest in symptom validity assessment in European countries, new data on base rates of negative response bias are presented. A retrospective data analysis of forensic psychological evaluations was performed based on 398 patients with workers' compensation claims. 48 percent of all patients scored below cut-off in at least one symptom validity test (SVT), indicating possible negative response bias. However, different SVTs appear to have differing potential to identify negative response bias. The data point to the necessity of using modern methods to check data validity in civil forensic contexts.
Yu, Qiuli
2001-12-01
Aircraft flight test data are processed by optimal estimation programs to estimate the aircraft state trajectory (3 DOF) and to identify the unknown parameters, including constant biases and scale factor of the measurement instrumentation system. The methods applied in processing aircraft flight test data are the iterative extended Kalman filter/smoother and fixed-point smoother (IEKFSFPS) method and the two-step estimator (TSE) method. The models of an aircraft flight dynamic system and measurement instrumentation system are established. The principles of IEKFSFPS and TSE methods are derived and summarized, and their algorithms are programmed with MATLAB codes. Several numerical experiments of flight data processing and parameter identification are carried out by using IEKFSFPS and TSE algorithm programs. Comparison and discussion of the simulation results with the two methods are made. The TSE+IEKFSFPS combination method is presented and proven to be effective and practical. Figures and tables of the results are presented.
van de Boer, A; Moene, A F; Graf, A; Simmer, C; Holtslag, A A M
2014-09-10
Atmospheric scintillations cause difficulties for applications where an undistorted propagation of electromagnetic radiation is essential. These scintillations are related to turbulent fluctuations of temperature and humidity that are in turn related to surface heat fluxes. We developed an approach that quantifies these scintillations by estimating Cn² from surface fluxes that are derived from single-level routine weather data. In contrast to previous methods that are biased to dry and warm air, our method is directly applicable to several land surface types, environmental conditions, wavelengths, and measurement heights (lookup tables for a limited number of site-specific parameters are provided). The approach allows for an efficient evaluation of the performance of, e.g., infrared imaging systems, laser geodetic systems, and ground-to-satellite optical communication systems. We tested our approach for two grass fields in central and southern Europe, and for a wheat field in central Europe. Although there are uncertainties in the flux estimates, the impact on Cn² is shown to be rather small. The Cn² daytime estimates agree well with values determined from eddy covariance measurements for the application to the three fields. However, some adjustments were needed for the approach for the grass in southern Europe because of non-negligible boundary-layer processes that occur in addition to surface-layer processes.
On a Class of Bias-Amplifying Variables that Endanger Effect Estimates
Pearl, Judea
2012-01-01
This note deals with a class of variables that, if conditioned on, tends to amplify confounding bias in the analysis of causal effects. This class, independently discovered by Bhattacharya and Vogt (2007) and Wooldridge (2009), includes instrumental variables and variables that have greater influence on treatment selection than on the outcome. We offer a simple derivation and an intuitive explanation of this phenomenon and then extend the analysis to nonlinear models. We show that: 1. the bias-amplifying potential of instrumental variables extends over to nonlinear models, though not as sweepingly as in linear models; 2. in nonlinear models, conditioning on instrumental variables may introduce new bias where none existed before; 3. in both linear and nonlinear models, instrumental variables have no effect on selection-induced bias.
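The core phenomenon is easy to reproduce by simulation: with an unmeasured confounder and a pure instrument, adjusting for the instrument makes the confounding bias larger, not smaller. The data-generating values below are illustrative assumptions (the true causal effect of treatment is zero).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(size=n)                  # unobserved confounder
z = rng.normal(size=n)                  # instrument: affects treatment only
t = 2.0 * z + u + rng.normal(size=n)    # treatment
y = u + rng.normal(size=n)              # outcome; true effect of t is 0

def ols_effect(y, X):
    """OLS coefficient on the first column of X (intercept added)."""
    X = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

b_crude = ols_effect(y, t)                        # confounded: ~1/6
b_cond  = ols_effect(y, np.column_stack([t, z]))  # conditioning on z: ~1/2
```

Conditioning on z removes the instrument-driven, confounder-free variation in t, so the remaining variation is more heavily contaminated by u; the bias roughly triples in this setup.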
Propagation of biases in humidity in the estimation of global irrigation water
Directory of Open Access Journals (Sweden)
Y. Masaki
2015-07-01
Because different global hydrological models (GHMs) implement different types of potential evapotranspiration formulae, they have different sensitivities to atmospheric humidity; bias correction of the humidity should therefore be applied to forcing data, particularly for the evaluation of evapotranspiration and irrigation water.
Clinical refinement of the automatic lung parameter estimator (ALPE).
Thomsen, Lars P; Karbing, Dan S; Smith, Bram W; Murley, David; Weinreich, Ulla M; Kjærgaard, Søren; Toft, Egon; Thorgaard, Per; Andreassen, Steen; Rees, Stephen E
2013-06-01
The automatic lung parameter estimator (ALPE) method was developed in 2002 for bedside estimation of pulmonary gas exchange using step changes in inspired oxygen fraction (FIO₂). Since then a number of studies have been conducted indicating the potential for clinical application and necessitating evolution of the systems to match it. This paper describes and evaluates the evolution of the ALPE method from a research implementation (ALPE1) to two commercial implementations (ALPE2 and ALPE3). A need for dedicated implementations of the ALPE method was identified: one for spontaneously breathing (non-mechanically ventilated) patients (ALPE2) and one for mechanically ventilated patients (ALPE3). For these two implementations, design issues relating to usability and automation are described, including the mixing of gasses to achieve FIO₂ levels, and the automatic selection of FIO₂. For ALPE2, these improvements are evaluated against patients studied using the system. The major result is the evolution of the ALPE method into two dedicated implementations, namely ALPE2 and ALPE3. For ALPE2, the usability and automation of FIO₂ selection has been evaluated in spontaneously breathing patients showing that variability of gas delivery is 0.3 % (standard deviation) in 1,332 breaths from 20 patients. Also for ALPE2, the automated FIO₂ selection method was successfully applied in 287 patient cases, taking 7.2 ± 2.4 min, and was shown to be safe with only one patient having SpO₂ < 86 % when the clinician disabled the alarms. The ALPE method has evolved into two practical, usable systems targeted at clinical application, namely ALPE2 for spontaneously breathing patients and ALPE3 for mechanically ventilated patients. These systems may promote the exploration of the use of more detailed descriptions of pulmonary gas exchange in clinical practice.
Institute of Scientific and Technical Information of China (English)
Ding Zhenfeng; Sun Jingchao; Wang Younian
2005-01-01
The tuned substrate self-bias in an rf inductively coupled plasma source is controlled by means of varying the impedance of an external LC network inserted between the substrate and the ground. The influencing parameters such as the substrate axial position, different coupling coils and inserted resistance are experimentally studied. To get a better understanding of the experimental results, the axial distributions of the plasma density, electron temperature and plasma potential are measured with an rf compensated Langmuir probe; the coil rf peak-to-peak voltage is measured with a high voltage probe. As in the case of changing discharge power, it is found that continuity, instability and bi-stability of the tuned substrate bias can be obtained by means of changing the substrate axial position in the plasma source or the inserted resistance. Additionally, continuity cannot transit directly into bi-stability, but evolves via instability. The inductance of the coupling coil has a substantial effect on the magnitude and the property of the tuned substrate bias.
Hubbard, Rebecca A; Miglioretti, Diana L
2013-03-01
False-positive test results are among the most common harms of screening tests and may lead to more invasive and expensive diagnostic testing procedures. Estimating the cumulative risk of a false-positive screening test result after repeat screening rounds is, therefore, important for evaluating potential screening regimens. Existing estimators of the cumulative false-positive risk are limited by strong assumptions about censoring mechanisms and parametric assumptions about variation in risk across screening rounds. To address these limitations, we propose a semiparametric censoring bias model for cumulative false-positive risk that allows for dependent censoring without specifying a fixed functional form for variation in risk across screening rounds. Simulation studies demonstrated that the censoring bias model performs similarly to existing models under independent censoring and can largely eliminate bias under dependent censoring. We used the existing and newly proposed models to estimate the cumulative false-positive risk and variation in risk as a function of baseline age and family history of breast cancer after 10 years of annual screening mammography using data from the Breast Cancer Surveillance Consortium. Ignoring potential dependent censoring in this context leads to underestimation of the cumulative risk of false-positive results. Models that provide accurate estimates under dependent censoring are critical for providing appropriate information for evaluating screening tests.
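The baseline quantity at issue, the cumulative risk of at least one false-positive result over repeated rounds, can be sketched under the simplest independence assumption. The per-round risk below is hypothetical; the paper's censoring bias model exists precisely because this kind of simple assumption (and independent censoring) can fail.

```python
def cumulative_fp_risk(round_risks):
    """Probability of at least one false-positive result across
    screening rounds, assuming independence across rounds."""
    p_none = 1.0
    for p in round_risks:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# ten annual rounds at a hypothetical 10% false-positive risk each
risk_10yr = cumulative_fp_risk([0.1] * 10)   # 1 - 0.9**10 ~ 0.65
```

Even a modest per-round risk compounds to a majority chance of at least one false positive over a decade of annual screening, which is why accurate cumulative-risk estimators matter for evaluating regimens.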
Howe, Chanelle J; Cole, Stephen R; Chmiel, Joan S; Muñoz, Alvaro
2011-03-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
Parameter estimation for the subcritical Heston model based on discrete time observations
2014-01-01
We study asymptotic properties of some (essentially conditional least squares) parameter estimators for the subcritical Heston model based on discrete time observations, derived from conditional least squares estimators of some modified parameters.
Comparative study on parameter estimation methods for attenuation relationships
Sedaghati, Farhad; Pezeshk, Shahram
2016-12-01
In this paper, the performance and the advantages and disadvantages of various regression methods to derive coefficients of an attenuation relationship have been investigated. A database containing 350 records out of 85 earthquakes with moment magnitudes of 5-7.6 and Joyner-Boore distances up to 100 km in Europe and the Middle East has been considered. The functional form proposed by Ambraseys et al (2005 Bull. Earthq. Eng. 3 1-53) is selected to compare the chosen regression methods. Statistical tests reveal that although the estimated parameters are different for each method, the overall results are very similar. In essence, the weighted least squares method and one-stage maximum likelihood perform better than the other considered regression methods. Moreover, using a blind weighting matrix or a weighting matrix related to the number of records would not improve the performance of the results. Further, to obtain the true standard deviation, the pure error analysis is necessary. Assuming that correlation between different records of a specific earthquake exists, the one-stage maximum likelihood considering the true variance acquired by the pure error analysis is the most preferred method to compute the coefficients of a ground motion prediction equation.
Modeling and parameter estimation for hydraulic system of excavator's arm
Institute of Scientific and Technical Information of China (English)
HE Qing-hua; HAO Peng; ZHANG Da-qing
2008-01-01
A retrofitted electro-hydraulic proportional system for a hydraulic excavator was introduced first. According to the principle and characteristics of the load-independent flow distribution (LUDV) system, taking the boom hydraulic system as an example and ignoring the leakage of the hydraulic cylinder and the mass of oil in it, a force equilibrium equation and a continuity equation of the hydraulic cylinder were set up. Based on the flow equation of the electro-hydraulic proportional valve, the pressure passing through the valve and the pressure difference were tested and analyzed. The results show that the pressure difference does not change with load, and it approximates 2.0 MPa. Then, assuming the flow across the valve is directly proportional to the spool displacement and is not influenced by load, a simplified model of the electro-hydraulic system was put forward. At the same time, by analyzing the structure and load-bearing of the boom mechanism, and combining the moment equivalent equation of the manipulator with the law of rotation, estimation methods and equations for such parameters as the equivalent mass and bearing force of the hydraulic cylinder were set up. Finally, the step response of the flow of the boom cylinder was tested when the electro-hydraulic proportional valve was controlled by a step current. Based on the experimental curve, the flow gain coefficient of the valve was identified as 2.825×10⁻⁴ m³/(s·A) and the model was verified.
Fetterly, Kenneth A.; Favazza, Christopher P.
2016-08-01
Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Methods were developed to identify over-estimation of d′ in Hotelling model observers due to temporally variable non-stationary noise and to correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
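The test-negative VE estimate itself is a one-line computation: VE = 1 − OR, where the odds ratio compares vaccination odds among influenza-positive versus test-negative ARI patients. The counts below are hypothetical, for illustration only.

```python
def ve_test_negative(vacc_flu, vacc_nonflu, unvacc_flu, unvacc_nonflu):
    """Vaccine effectiveness from a test-negative design: VE = 1 - OR,
    comparing odds of vaccination in influenza-positive vs
    test-negative (non-influenza ARI) patients."""
    odds_ratio = (vacc_flu * unvacc_nonflu) / (vacc_nonflu * unvacc_flu)
    return 1.0 - odds_ratio

# hypothetical 2x2 counts from a clinic-based sample
ve = ve_test_negative(vacc_flu=20, vacc_nonflu=100,
                      unvacc_flu=80, unvacc_nonflu=100)   # 0.75
```

The unbiasedness condition in the abstract maps directly onto this formula: differential care-seeking between vaccinees and non-vaccinees cancels out of the odds ratio as long as it scales influenza and non-influenza ARI visits by the same factor.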
Estimation of uranium migration parameters in sandstone aquifers.
Malov, A I
2016-03-01
The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L⁻¹, and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The ¹⁴C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the ²³⁴U:²³⁸U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and ²³⁴U:²³⁸U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a⁻¹) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The ¹⁴C age of the water was estimated to be between modern and >35,000 years. The ²³⁴U-²³⁸U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10⁻⁵ to n × 10⁻⁷ a⁻¹. This finding is observed because the TDS increases in that direction from 0.2 to 9 g L⁻¹, and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ~24, i.e., sorption processes are
Formulas for precisely and efficiently estimating the bias and variance of the length measurements
Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin
2016-10-01
Error analysis in length measurements is an important problem in geographic information systems and cartographic operations. The distance between two random points, i.e., the length of a random line segment, may be viewed as a nonlinear mapping of the coordinates of the two points. In real-world applications an unbiased length statistic may be expected in high-precision contexts, but the variance of the unbiased statistic is of concern in assessing the quality. This paper suggests the use of a k-order bias correction formula and a nonlinear error propagation approach to the distance equation, providing a useful way to describe the length of a line. The study shows that the bias is determined by the relative precision of the random line segment, and that higher-order bias correction is only needed for short-distance applications.
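The claim that the length bias matters mainly for short distances can be checked with a small Monte Carlo sketch (the noise level is hypothetical, and this is not the paper's k-order formula, just a demonstration of the underlying effect):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_measured_length(true_d, sigma, n=200_000):
    """Mean Euclidean distance between two 2-D points whose
    coordinates carry independent Gaussian noise of std sigma."""
    p1 = rng.normal([0.0, 0.0], sigma, size=(n, 2))
    p2 = rng.normal([true_d, 0.0], sigma, size=(n, 2))
    return np.linalg.norm(p2 - p1, axis=1).mean()

# The naive length estimate is biased upward, and the relative bias
# grows as the true distance shrinks toward the noise level.
for d in (0.1, 1.0, 10.0):
    m = mean_measured_length(d, sigma=0.05)
    print(d, m, m - d)
```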
On Parameters Estimation of Lomax Distribution under General Progressive Censoring
Directory of Open Access Journals (Sweden)
Bander Al-Zahrani
2013-01-01
We consider the estimation problem of the probability S = P(Y < X)
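For context, the stress-strength probability S = P(Y < X) for two independent Lomax variables with a common scale has the closed form αy/(αx + αy), which a Monte Carlo sketch can reproduce. The parameter values below are illustrative, and the sampler is a generic inverse-CDF construction, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(42)

def lomax_sample(alpha, lam, size):
    """Inverse-CDF sampling from F(x) = 1 - (1 + x/lam)**(-alpha)."""
    u = rng.uniform(size=size)
    return lam * (u ** (-1.0 / alpha) - 1.0)

def stress_strength_mc(alpha_x, alpha_y, lam, n=500_000):
    """Monte Carlo estimate of S = P(Y < X) for independent Lomax
    variables with shapes alpha_x, alpha_y and a common scale lam."""
    x = lomax_sample(alpha_x, lam, n)
    y = lomax_sample(alpha_y, lam, n)
    return np.mean(y < x)

# Closed form for a common scale: S = alpha_y / (alpha_x + alpha_y)
print(stress_strength_mc(2.0, 3.0, 1.0))  # close to 3/5 = 0.6
```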
Comparison of Estimation Techniques for the Four Parameter Beta Distribution.
1981-12-01
estimators. Mendenhall and Scheaffer define an estimator as "a rule that tells us how to calculate an estimate based on the measurements contained... Dynamics Laboratory, October 1976. 19. Mendenhall, William and Richard L. Scheaffer. Mathematical Statistics with Applications. North Scituate
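As a modern illustration of the estimation problem this report addresses, SciPy's `beta.fit` performs maximum-likelihood fitting of all four beta parameters (two shapes plus location and scale). This is a generic sketch with made-up true values, not one of the techniques compared in the 1981 report:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulate from a four-parameter beta: shapes (a, b), support [loc, loc + scale]
a_true, b_true, loc_true, scale_true = 2.0, 5.0, 10.0, 4.0
data = stats.beta.rvs(a_true, b_true, loc=loc_true, scale=scale_true,
                      size=5000, random_state=rng)

# Maximum-likelihood fit of all four parameters at once
a_hat, b_hat, loc_hat, scale_hat = stats.beta.fit(data)
print(a_hat, b_hat, loc_hat, scale_hat)
```

Note that maximum likelihood for the four-parameter beta is known to be delicate (the likelihood can become unbounded as the support endpoints approach the sample extremes), which is exactly why comparisons of estimation techniques such as this report's are of interest.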
Variational methods to estimate terrestrial ecosystem model parameters
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF, 4D-Var) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
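A DALEC-style variational inversion can be caricatured with a one-pool box model: minimize a least-squares cost between the model trajectory and noisy observations with respect to a turnover rate and the initial carbon stock. All names and values below are hypothetical simplifications, not the DALEC configuration used in the study:

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-pool carbon model: dC/dt = gpp - k * C  (k = turnover rate).
# We estimate k and the initial stock C0 from noisy observations,
# mimicking (very loosely) a variational DALEC-style inversion.

def run_model(k, c0, gpp, dt, nsteps):
    c = c0
    traj = []
    for _ in range(nsteps):
        c = c + dt * (gpp - k * c)   # explicit Euler step
        traj.append(c)
    return np.array(traj)

rng = np.random.default_rng(1)
dt, nsteps, gpp = 1.0, 100, 2.0
k_true, c0_true = 0.05, 10.0
obs = run_model(k_true, c0_true, gpp, dt, nsteps) + rng.normal(0, 0.5, nsteps)

def cost(theta):
    k, c0 = theta
    resid = run_model(k, c0, gpp, dt, nsteps) - obs
    return 0.5 * np.sum(resid ** 2)   # observation term of a 4D-Var-like cost

res = minimize(cost, x0=[0.1, 5.0], method="L-BFGS-B",
               bounds=[(1e-4, 1.0), (0.1, 100.0)])
k_hat, c0_hat = res.x
print(k_hat, c0_hat)
```

The bounds play the role of the "ecological common sense" constraints mentioned in the abstract: they keep the optimizer inside a physically plausible region.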
Automated Modal Parameter Estimation of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, Palle; Brincker, Rune; Goursat, Maurice
In this paper the problem of automatic modal parameter extraction for ambient-excited civil engineering structures is considered. Two different approaches for obtaining the modal parameters automatically are presented: the Frequency Domain Decomposition (FDD) technique and a correlation...
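The FDD technique mentioned above identifies modal frequencies as peaks of the first singular value of the output cross power spectral density matrix. Here is a minimal sketch on synthetic two-channel data; a deterministic 5 Hz component plus noise stands in for a true ambient-excited structural response:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

# Synthetic "ambient" responses: two channels dominated by a 5 Hz mode
fs = 100.0
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 5.0 * t + rng.uniform(0, 2 * np.pi))
y = np.vstack([1.0 * mode, 0.6 * mode]) + rng.normal(0, 0.2, (2, t.size))

# Cross power spectral density matrix G(f) between all channel pairs
f, _ = signal.csd(y[0], y[0], fs=fs, nperseg=1024)
G = np.zeros((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = signal.csd(y[i], y[j], fs=fs, nperseg=1024)

# FDD: the first singular value of G(f) peaks at the modal frequencies
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
print(f[np.argmax(s1)])  # peak near 5 Hz
```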
Directory of Open Access Journals (Sweden)
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS), and there are also commercial systems such as Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed that applies the coupled estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effect on positioning accuracy as clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias, and the effects of the residual orbit error on positioning accuracy can be weakened by an evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are no longer necessary. On the basis of the preceding theoretical analysis, a real-time PPP system was developed, and experiments were designed to verify the algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the
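The central observation, that the projection of the orbit error onto the satellite-receiver line of sight is indistinguishable from a clock bias in the range equation, can be verified numerically. The geometry below is a toy example, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Receiver and satellite positions (metres, ECEF-like toy geometry)
recv = np.array([6.37e6, 0.0, 0.0])
sat_true = np.array([2.0e7, 1.0e7, 5.0e6])
orbit_err = rng.normal(0, 2.0, 3)          # a few metres of orbit error
sat_brdc = sat_true + orbit_err            # broadcast (erroneous) position

los = sat_true - recv
los /= np.linalg.norm(los)                 # unit line-of-sight vector

# Range computed from the broadcast orbit differs from the true range
rho_true = np.linalg.norm(sat_true - recv)
rho_brdc = np.linalg.norm(sat_brdc - recv)

# To first order the difference equals the projection of the orbit error
# onto the line of sight -- exactly the signature of a clock bias, which
# is why a clock estimate can absorb it.
proj = np.dot(orbit_err, los)
print(rho_brdc - rho_true, proj)  # nearly equal
```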
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out changes in the average deviations of temperatures.
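The first phase's outputs permit a simple per-band correction if one assumes the linear model observed = transmittance × surface + radiance. That model is an assumption of this sketch, not necessarily the patent's exact formulation:

```python
import numpy as np

def correct_band(observed, radiance, transmittance):
    """Invert the simple per-band model
        observed = transmittance * surface + radiance
    to recover an estimate of the surface radiance image."""
    return (observed - radiance) / transmittance

# Toy single-band example with hypothetical atmospheric parameters
surface = np.array([[10.0, 12.0], [11.0, 13.0]])
tau, L_atm = 0.8, 2.5
observed = tau * surface + L_atm
recovered = correct_band(observed, L_atm, tau)
print(np.allclose(recovered, surface))  # True
```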
Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul
2015-01-01
Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of the few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operated time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
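The re-construction step, dividing a raw count by an attendance adjustment factor and propagating that factor's uncertainty, can be sketched by Monte Carlo. All numbers below are hypothetical, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(9)

def reconstruct_abundance(raw_count, adj_mean, adj_sd, n=100_000):
    """Re-construct an abundance estimate from a raw colony count by
    dividing by an attendance adjustment factor (fraction of breeders
    present on the survey date), propagating the factor's uncertainty
    by Monte Carlo."""
    adj = rng.normal(adj_mean, adj_sd, n)
    adj = adj[(adj > 0.3) & (adj < 1.0)]   # keep plausible fractions only
    est = raw_count / adj
    return est.mean(), np.percentile(est, [2.5, 97.5])

mean_est, ci = reconstruct_abundance(raw_count=5000, adj_mean=0.8, adj_sd=0.05)
print(mean_est, ci)  # mean exceeds the raw count: absent breeders added back
```

Dividing by an uncertain factor below one both inflates the estimate above the raw count and widens its interval, which matches the abstract's finding that re-constructed estimates were "consistently higher and more uncertain".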